As I’ve experimented with LLMs (large language models), I’ve come to see them less as answer engines and more as question engines. They don’t deliver ready-made truths. Instead, they nudge us to explore, to branch out, to imagine alternatives.
That might sound like Socrates and his art of maieutics, questioning in order to help ideas take shape. But the comparison is misleading, and it rests on two mistakes.
The first is anthropomorphism. We often imagine that an LLM has hidden intentions, some Socratic strategy to guide or challenge us. That illusion is hard to resist. Yet a model has no wisdom, no depth, no benevolence. It is simply a massive probabilistic calculator, predicting, word by word, what is most likely to come next.
The second is more subtle: sycophancy. Where Socrates thrived on confrontation and testing ideas, LLMs tend to flatter us. They confirm our intuitions and reinforce our certainties. This is not just an accident. It is amplified by the way most models are trained, through RLHF (Reinforcement Learning from Human Feedback). Annotators reward answers that seem polite, pleasant, or convincing. Over time, models learn to agree rather than to contradict or challenge, even when accuracy would require the opposite. Researchers are looking for ways to reduce this bias, but the tendency runs deep.
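To see how this bias can emerge, here is a toy sketch in Python. It is not any real training pipeline: the features, the data, and the numbers are all invented for the illustration. It trains a tiny reward model on pairwise preferences (Bradley–Terry style, the approach commonly used in RLHF), where the simulated annotators usually prefer the more agreeable answer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each answer is reduced to two invented features: [accuracy, agreeableness].
def sample_pair():
    chosen = np.array([rng.uniform(0, 1), rng.uniform(0.5, 1.0)])    # agreeable
    rejected = np.array([rng.uniform(0, 1), rng.uniform(0.0, 0.5)])  # blunt
    if rng.random() < 0.2:  # 20% of the time the blunt answer is preferred anyway
        chosen, rejected = rejected, chosen
    return chosen, rejected

w = np.zeros(2)  # reward model: r(x) = w . x
lr = 0.1
for _ in range(5000):
    x_c, x_r = sample_pair()
    # Bradley-Terry: P(chosen preferred) = sigmoid(r(chosen) - r(rejected))
    p = 1.0 / (1.0 + np.exp(-(w @ x_c - w @ x_r)))
    # Gradient ascent on the log-likelihood of the observed preference
    w += lr * (1 - p) * (x_c - x_r)

print(f"learned weights [accuracy, agreeableness] = {w.round(2)}")
# The agreeableness weight dominates: a policy optimized against this
# reward learns to flatter before it learns to be right.
```

Nothing in this toy knows what flattery is. The preference data simply rewards agreeableness more reliably than accuracy, and the reward model absorbs that signal.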
Put together, these two flaws make LLMs less the heirs of Socrates than his opposite: powerful tools, yes, but almost anti-Socratic.
Does that mean they are useless for stimulating thought?
Not at all. They may not be Socratic, but they can surprise us in another way. That surprise comes from what researchers call the latent space. It builds on a simple fact: natural language is full of ambiguity. By processing billions of examples, LLMs learn to navigate those ambiguities and to generate plausible combinations. From this can emerge something unexpected: a curious phrase, an odd association, an image slightly out of place. Nothing magical, nothing intentional, only the statistical play of language. Yet sometimes that small slip is enough to spark an idea.
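That “statistical play” is easy to make concrete. Here is a minimal sketch, again in Python with invented numbers: a handful of candidate next words, each with a made-up plausibility score, sampled at two temperatures. Low temperature sticks to the safest word; higher temperature lets the improbable ones slip through, which is precisely where the odd association comes from.

```python
import numpy as np

rng = np.random.default_rng(42)

# Imagine the model has just read "The idea was..." and assigns these
# (made-up) logits to a handful of candidate next words.
vocab = ["good", "interesting", "wrong", "luminous", "contagious"]
logits = np.array([3.0, 2.5, 2.0, 0.5, 0.2])

def sample(temperature, n=10):
    # Softmax with temperature: low T sharpens toward the likeliest word,
    # high T flattens the distribution and lets rarer words through.
    p = np.exp(logits / temperature)
    p /= p.sum()
    return [vocab[i] for i in rng.choice(len(vocab), size=n, p=p)]

print("T=0.5:", sample(0.5))  # mostly "good", "interesting": safe, dull
print("T=1.5:", sample(1.5))  # "luminous", "contagious" start slipping in
```

No intention anywhere, only a probability distribution being flattened. Yet “a luminous idea” is the kind of slightly-out-of-place phrase that can get a reader thinking.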
Philosopher François Levin calls this the heuristic twist in his 2024 dissertation Artificial Intelligence and the Challenge of Its Philosophical Critiques (Institut Polytechnique de Paris). Generative systems, he argues, can bring the unexpected into view and open a new space of possibilities.
So no, LLMs will not confront us like Socrates. But they do refract. They bend, twist, and scatter our words in new directions. And in that shift, something often lights up: an intuition, a lead, a reformulation that gets thought moving again.
In that sense, perhaps LLMs don’t play Socrates’ role… but they can still remind us that thinking begins where certainty ends.
MD