I just finished watching the biopic about Charles Aznavour, the iconic French-Armenian singer. The movie left its mark. There were a few moments where the pace lagged, but they quickly dissolved as I became fully immersed in thought. One scene, in particular, lingers—Aznavour, after performing to half-empty halls, returns to Paris for a concert that might be his last. The curtain falls; he stands there, frozen, unsure if he should step back onto the stage. “Do you hear the sound of the folding chairs? I know it all too well… people leaving.” But the noise continues. The curtains part again: it’s not people leaving—it’s a standing ovation. The audience is there, every heartbeat resonating with him.
This is kando, a Japanese word for a wave of deep emotion, a moment of pure, unexpected beauty that touches the soul. There’s Aznavour, suspended on stage, moved by this shared thrill. A bond, a recognition of the other, almost mystical. In Sanskrit, it’s called kama muta, that profound stirring born of human connection, of intimate unity. A moment that exists only for those who can feel it, who live it.
So, let’s ask: where does this thrill reside when it comes to artificial intelligence? Where AI detects signals, humans feel them; where it imitates emotions, humans embody them. Even we, as humans, struggle to grasp these fleeting, unique, intensely personal emotions. This is the boundary—not in what AI can do, but in what it cannot be. Our emotions are rooted in centuries of rituals and human connections, while AI remains confined within its framework of optimization. When it glitches, hallucinates, or produces bizarre errors, these are simply the byproducts of cold, statistical calculations.
The Paradox of (De)Humanization
Here’s where the paradox becomes unmistakable. Imitation is always a pale copy, stripped of the essence it tries to replicate. At times, it even borders on the absurd. Take this example: Runway, a company developing AI tools for visual creation, also uses AI in its communication. Nothing unusual—except that, to make its emails feel “more human,” engineers decided to add… a random typo. A well-meaning but misguided attempt to mimic human imperfection.
But this imitation game becomes a deception when it turns to “emotional” AI. A recent study, Feels Like Empathy: How 'Emotional' AI Challenges Human Essence, exposes a troubling paradox: in trying to make AI empathetic by attributing to it qualities like consciousness and morality, we strip these values of their true meaning. This (de)humanization paradox serves as a warning: by anthropomorphizing machines, we risk reducing human empathy to a series of predictable, soulless responses.
This approach contradicts what Kant called a fundamental principle: treating humanity as an end in itself, never as a means. Gianpiero Petriglieri summarizes it well: “The problem won’t be the machines that come, but the machines we become.”
Exploring Our Humanity
And yet, generative AI unveils a new horizon. Where early AI systems structured the rational world, today’s models dive into the informal: texts, poetry, music, images, touching the most human fragments of our existence. If the first generation of AI grasped the explainable, generative AI now reaches for the ungraspable, for the human soul in a way, and pushes us toward a new kind of introspection.
But it’s not just about AI. The convergence of neurotechnology, robotics, and soon quantum technologies promises to offer us what we crave most: time. What will we do with this time gained? Will we use it to explore our humanity, or waste it on distractions, fleeing that inner confrontation Pascal already warned us about?
In this quest for simulated perfection and artificial empathy, are we merely searching for a mirror image of ourselves, even if it risks our own self-loss? Or are we yearning for an authentic resonance that brings us closer to our true essence?
Large Language ... Nuances?
Experiences of "resonance" with AI are beginning to emerge—instances where it captures aspects of our inner world. At Union Square Ventures, a U.S. venture capital firm, music plays continuously. Team members freely add tracks, creating a shared playlist. A GPT model observes this selection to gauge the collective mood of the day. Here, AI perceives something unexpected: a reflection of the shared unconscious, an invisible atmosphere revealed through everyone's musical choices.
However, this remains anecdotal compared to what's on the horizon. Today, we have models designed to evolve, both in architecture and interface—models capable of becoming more refined, granular, and personalized. Imagine an AI that sharpens its understanding, perceives our nuances, and adapts to better grasp our full complexity.
Recent research, like the Talker-Reasoner architecture, aims precisely at this goal. Inspired by the book Thinking, Fast and Slow by psychologist and economist Daniel Kahneman, this approach divides AI's abilities into two distinct "parts": the Talker and the Reasoner.
The Talker is designed to respond quickly and intuitively, much like a friend in light conversation. It captures our immediate questions and responds fluidly, without lingering on complex analysis. Alongside it, the Reasoner operates differently: it takes more time to reflect and offers deeper responses based on detailed analysis. In the context of sleep advice, for example, the Talker might pick up a spontaneous concern from the user—"I feel tired every morning." The Reasoner, meanwhile, would delve deeper by studying sleep habits to suggest a structured improvement plan.
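To make the division of labor concrete, here is a minimal sketch of such a two-layer agent. This is my own illustrative toy, not the architecture from the Talker-Reasoner paper: the class names, the shared belief state, and the sleep-advice logic are all hypothetical simplifications of the idea that a fast layer answers immediately while a slow layer updates a shared model for later turns.

```python
from dataclasses import dataclass, field

@dataclass
class BeliefState:
    # Shared memory: the Reasoner writes structured beliefs, the Talker reads them.
    facts: dict = field(default_factory=dict)

class Talker:
    """Fast, intuitive layer: replies immediately from the current belief state."""
    def respond(self, user_input: str, beliefs: BeliefState) -> str:
        plan = beliefs.facts.get("plan")
        if plan:
            return f"Based on what we know so far, try this: {plan}"
        return "I hear you. Tell me a bit more while I think about it."

class Reasoner:
    """Slow, deliberate layer: analyzes the full history and updates beliefs."""
    def deliberate(self, history: list[str], beliefs: BeliefState) -> None:
        # Toy analysis: spot a recurring complaint and form a structured plan.
        if any("tired" in turn.lower() for turn in history):
            beliefs.facts["plan"] = "keep a fixed bedtime and log your sleep for a week"

def agent_turn(user_input: str, history: list[str], beliefs: BeliefState,
               talker: Talker, reasoner: Reasoner) -> str:
    history.append(user_input)
    reply = talker.respond(user_input, beliefs)  # fast path answers first
    reasoner.deliberate(history, beliefs)        # slow path refines beliefs for later turns
    return reply
```

The key design point is the asymmetry: the Talker never waits for the Reasoner, so the conversation stays fluid, while the Reasoner’s slower analysis only surfaces in subsequent turns, once it has written a plan into the shared state.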
When AI Enlightens Humanity
AI could become an unexpected ally—a mirror reflecting our complexity and a bridge to hidden truths.
In an era where professional dissatisfaction takes on countless forms, the workplace, where we spend so much of our lives, has become a source of deep, unnamed fatigue for many. New realities have emerged: bore-out, the endless boredom between pointless meetings; brown-out, the emptiness of meaningless tasks; and career cushioning, the quiet withdrawal before a final goodbye. The Great Resignation, seen as a call for renewal, often led many back to the same desks, facing the same lingering emptiness, now dubbed workplace loneliness.
What if these struggles are reflections of an inner darkness—a lack of clarity about our own essence, about talents and aspirations left unexplored? This emptiness may well stem from untapped potential, from a calling that remains unspoken.
People don’t buy products; they seek a better version of themselves. Perhaps this is technology’s ultimate role: to help us grow into Homo universalis, beings who, aware of their potential, transcend limitations, connect deeply with others, and embrace the world’s complexity.
In this transformation, AI might finally find its true purpose—not by trying to imitate humanity or replace it, as some fear, but by becoming a psychological ally, a force that helps us reach our best selves and, at times, reveals parts of who we never imagined we could be.
Because what is truly formidable, you see, is who we can be.
MD