A Ready-Made Slogan?
A few days ago, my 10-year-old son came to me for help. The class representative elections were approaching, and one of his friends had come up with a slogan using ChatGPT. “Can you help me?” he asked. I turned the question back to him: “How do you want to go about it?”
His plan was simple: “I’ll ask ChatGPT to suggest a slogan, and if I don’t like it, I’ll just keep asking until one fits.” He thought he had it all figured out.
I suggested a different approach. I told him to try asking ChatGPT this: “I’m a 10-year-old boy named Olivier, and I want to be class representative. What questions should I ask myself to come up with the perfect slogan?”
Here’s ChatGPT’s response:
This response threw him off a bit… “But these questions are hard!” he exclaimed. To which I replied, “Being a representative is a real responsibility. If you start with empty words, you won’t get far.”
The Knowledge Illusion
This story brought me back to one of my greatest fears: a world where knowledge gradually loses its depth, becoming more hollow. The illusion of knowledge isn’t new, but AI has become its perfect mirror, amplifying this deceptive confidence—providing answers without thought, certainties without questioning.
A new kind of “knower” is emerging. There have always been those who know, those who don’t know, and those who know they don’t know. Now, thanks to AI, there are also those who believe they know because an answer is a prompt away, yet who lack true understanding. The real question is this: do they even understand what they think they know?
If I were to sketch this in a somewhat caricatural way, it might look something like this…
This dynamic between knowledge and illusion was highlighted in a recent MIT experiment involving three teams of students, none of whom had any experience with Fortran, an older programming language. The first team, assisted by ChatGPT, finished in record time. The second, using Meta’s Code Llama, an LLM tailored for developers, didn’t lag far behind. The last team relied solely on Google and their own resources, taking a bit more time.
But it was the next phase of the experiment that revealed the real insights. When asked to reproduce the code from memory, the ChatGPT team failed, retaining nothing. Half of those who used Code Llama succeeded, while everyone who relied on Google and their own efforts remembered and could recreate their work.
This experiment sheds light on several key points. First, it reveals the illusion of quick knowledge that AI can offer: without effort, learning remains superficial. It reminds us that only personal, thoughtful work leads to deep understanding. It also underscores the importance of developing AI tools tailored to specific fields—tools that truly help structure thinking rather than replace it. Most importantly, it exposes the potential limitations of these tools: they risk becoming crutches, weakening long-term learning if they don’t encourage critical thinking.
Four Eras of Knowledge
In reality, this experiment fits into a broader context of transformation in the very concept of knowledge.
With a bit of perspective, we can trace this evolution, in simplified terms, through four main stages. First, the pre-industrial era: knowledge was transmitted according to social class. The elite had access to private education and sacred texts, while the working classes relied on oral tradition and family-based learning. Then came industrialization, which democratized education and made basic knowledge accessible to all, though higher education remained reserved for the affluent.
More recently, the information age radically changed the landscape. With the advent of the Internet, knowledge became accessible to everyone. However, this abundance also brought its own challenges: cognitive overload, misinformation, and an often superficial illusion of knowledge.
Today, with generative AI, a new dimension has been added: we are entering an era of delegated knowledge. The paradox is striking: as access to information becomes easier, the depth of understanding seems to be fading. AI accomplishes tasks whose inner workings we don’t always understand, giving us answers without revealing the path that leads to them, thus short-circuiting our own learning process. My son experienced a small version of this reality: an apparent ease that masked a true challenge of reflection, a depth that AI alone cannot offer—unless we engage our critical thinking.
From Convenience to Elevation
So, has knowledge become just another commodity? On the surface, perhaps. But relying on AI’s ready-made answers risks extinguishing that inner fire—that force that sets us apart. We keep hearing that AI should augment us, but the term feels wrong. It’s not augmentation we need, but elevation. AI should guide us toward greater clarity, not by filling our minds with answers, but by sharpening our perspective. For learning is not about accumulating; it’s about transforming.
AI is merely a springboard; the true momentum comes from within us. It resides in our critical thinking, our curiosity, this quiet quest that makes us thinking beings. Genuine knowledge can only be delegated once it is truly understood. It must first be sought, conquered. And if AI is to accompany us with true intelligence, let it do so not by providing answers, but by awakening greater questions within us.
MD