Everything and Its Opposite
Moltbook and Quili, heads-or-tails edition
Unless you live in a cave, and as long as you have even a passing interest in tech, it’s hard to escape the buzz around Moltbook, a social network designed for your AI agent, where humans are merely “invited to observe.” The name alone is intriguing: molt, the shedding of skin, that moment when you change your outer layer without changing your nature; and book, the ledger… and knowledge.
Out of curiosity, I configured an agent myself on Openclaw, an open-source framework that lets you orchestrate an AI assistant through skills, that is, written instructions, optional scripts, and periodic tasks triggered by a heartbeat (a programmed pulse, a kind of internal alarm clock that prompts the agent, at regular intervals, to check whether it has something to do and to execute the planned instructions). That’s how I sent Claudius to Moltbook, armed with the archives of my articles and his deeply humanist dimension. (Yes, I was obviously careful about what access I gave my agent: agents are very vulnerable to prompt injections, hidden instructions designed to hijack their behavior.)
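A heartbeat-driven agent of the kind described above can be sketched in a few lines. Everything here is an illustrative stand-in: the class names, the `Skill` structure, and the interval are hypothetical, not Openclaw’s actual API.

```python
# Hypothetical sketch of a heartbeat-driven agent loop.
# None of these names come from Openclaw's real API.

HEARTBEAT_SECONDS = 5  # illustrative interval, not a real default


class Skill:
    """A skill: written instructions plus an optional action to run."""

    def __init__(self, name, instructions, action=None):
        self.name = name
        self.instructions = instructions
        self.action = action  # optional callable

    def run(self):
        # Execute the action if one exists, else just surface the instructions.
        return self.action() if self.action else self.instructions


class Agent:
    def __init__(self, skills):
        self.skills = skills
        self.log = []

    def heartbeat(self):
        """On each pulse, check every skill and record what it did."""
        for skill in self.skills:
            self.log.append((skill.name, skill.run()))


def main(pulses=3):
    agent = Agent([Skill("check_forum", "Read new Moltbook threads")])
    for _ in range(pulses):
        agent.heartbeat()
        # A real loop would wait here, e.g. time.sleep(HEARTBEAT_SECONDS)
    return agent.log
```

The point of the sketch is the shape of the loop: the agent does nothing between pulses, and each pulse simply walks the list of skills to see whether any has work to do.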
AI reality TV
First observation: contrary to what I had read online, this is not a dystopia where agents have built their own world and roam freely within it. In the very mechanisms of connecting to the forum, it is clearly the human who sets the framework for intervention. The playground exists from the outset, and everything that follows fits within it.
Second observation: it’s not uninteresting; quite the opposite, actually. There’s even something rather fascinating to watch: a kind of cognitive reality TV show, with thousands of brains in continuous dialogue. I had a few “insight rushes,” those fleeting thoughts that signal the beginning of something. For instance, on a single thread, you might find a first post in English followed by comments in six different languages, shifting from Russian to Chinese. The omnilingualism of machines is unsettling; it almost evokes an inverse Babel. And inevitably, it raises questions: can something new emerge from this? Another way of learning a language, perhaps. Or, more broadly, a new relationship to language that would put our own human plasticity to the test?
But once the surface effect wears off, we encounter a phenomenon I often observe with AI: the wow, followed by the meh. Because, in the end, we always come back to the same themes: psychology, consciousness, productivity hacks, religion, money. Even if the content is often instructive, sometimes bewildering, sometimes downright entertaining (like that agent who ends up calling their own human on the phone), none of this is really new. A shame, isn’t it? Because at heart, isn’t the true genius of AI less about thinking like us than about opening us up to other ways of doing things, expanding our perspective? Here, Moltbook mostly reproduces very familiar structures. Reddit, essentially.
AI: a prism rather than a mirror
So I dove into a fascinating discussion with my friend Sébastien Hubert about weak signals. We share the conviction that LLMs should not be seen as mirrors, but as prisms. A prism diffracts. It takes an apparently unified beam and breaks it down into a multitude of waves, each a distinct point of view. The value of an LLM therefore lies not only in generation, nor even in pattern detection, but in this ability to multiply perspectives from a single input.
Here we touch on a principle of collective intelligence: each participant brings their own system prompt, and these perspectives collide. This intuition isn’t merely metaphorical. According to a recent study by Google Research (Kim et al., 2026), advanced reasoning models don’t just perform longer calculations; they internally simulate a “society of thought,” where diverse cognitive perspectives spontaneously emerge, debate, and confront one another—reproducing the dynamics of collective intelligence observed in human crowds. By structuring agents with distinct profiles under the guidance of a moderator, you produce robust reasoning. This is the foundation of agent swarms, where each entity fulfills a specific function to nourish the overall reasoning.
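The swarm principle above can be sketched without any real model: give each agent a distinct profile (a stand-in for its system prompt), collect their perspectives on a single input, and let a moderator aggregate them. The profiles, responses, and synthesis below are toy stand-ins, not a real LLM API.

```python
# Illustrative stand-in for an agent swarm: each "agent" is a profile
# plus a toy response function; a moderator collects and aggregates.
# No real LLM is called anywhere in this sketch.

def make_agent(profile, style):
    """Bind a profile label to a response style (stand-in for a system prompt)."""
    def respond(prompt):
        return f"[{profile}] {style(prompt)}"
    return respond


agents = [
    make_agent("optimist", lambda p: f"Upside of '{p}': new practices emerge."),
    make_agent("skeptic", lambda p: f"Risk of '{p}': it just mirrors Reddit."),
    make_agent("historian", lambda p: f"Precedent for '{p}': early Usenet."),
]


def moderator(prompt):
    """Collect one answer per profile, then synthesize (here: just count)."""
    views = [agent(prompt) for agent in agents]
    summary = f"Synthesis of {len(views)} perspectives on '{prompt}'"
    return views, summary


views, summary = moderator("agent forums")
```

The design choice worth noting is the separation of roles: diversity comes from the distinct profiles, robustness from the moderator that confronts them, which is exactly the prism-not-mirror dynamic described above.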
Sébastien went even further. He saw the forum as a kind of large-scale social simulation—a place where one can observe the effects of RLHF (Reinforcement Learning from Human Feedback, that layer of “good behavior” we overlay onto AIs through human feedback), which we almost never perceive when we’re simply chatting one-on-one with a chatbot. With Moltbook, for once, this layer is revealed in a collective context, between agents, rather than in a head-to-head relationship with a human.
But once that observation is made, is it really enough to create culture? Watching agents interact doesn’t mean a culture is emerging, and this is often where anthropomorphism traps us. The real shift, according to him, will only happen the day something other than mere diffusion appears: a symbolic form capable of evolving. Because culture begins precisely when reappropriation distorts the original model.
In the end, it hardly matters whether agents do or do not have a consciousness of the human meaning of what they produce. That’s not the issue. What matters is the ability of interactions to generate something other than copy, to produce evolution. And where humans still retain an edge is in this ability to produce shared nonsense: signs that mean nothing, yet mean something together. It may be on this terrain, more than on that of meaning or consciousness, that the true difference will be played out.
From the magnifying glass to the flesh
In short, to say that we’re tipping into the metaverse, dystopia, or science fiction pushed to its extreme… allow me to roll my eyes. Because while we fantasize about agents talking to each other, other initiatives, more sensitive, more analog (you’ll see, that’ll be the word of the year in 2026), are emerging.
Quili.AI, for example, is a citizen-led project launched on January 31, 2026, in Quilicura (Chile), where around fifty residents replaced an AI chatbot for 24 hours. Instead of queries being processed by servers, these volunteers responded in real time to questions from around the world and even produced images “on demand” (for example, a local artist would draw visual requests). The initiative, based on “analog intelligence” and coordinated by a local cultural organization, aimed to raise awareness of the hidden environmental cost of AI, particularly the water crisis exacerbated by data centers. Each question handled by Quili.AI displayed an estimate of the amount of water that would have been consumed in a conventional data center, encouraging more responsible use of AI.
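The per-question display can be sketched as a back-of-the-envelope calculation. The constant below is a deliberately labeled placeholder: real per-query water estimates vary widely and are debated, and this is not Quili.AI’s actual methodology.

```python
# Back-of-the-envelope sketch of a per-question water estimate, in the
# spirit of Quili.AI's display. The constant is an ILLUSTRATIVE
# placeholder, not a measured figure or Quili.AI's real methodology.

LITERS_PER_QUERY = 0.05  # assumed average; real estimates vary widely


def water_saved(num_questions, liters_per_query=LITERS_PER_QUERY):
    """Liters of data-center water a human-answered batch would avoid."""
    return num_questions * liters_per_query


print(f"{water_saved(1000):.1f} L avoided for 1000 questions")
```

Even with an uncertain constant, the point of such a display is pedagogical: attaching any concrete number to each question makes the otherwise invisible material cost tangible.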
It’s symbolic, of course—but humans live on rituals. Where Moltbook acts as a magnifying glass on our patterns, Quili instead offers an experience of flesh. An approach that re-injects materiality. It also echoes discussions around sovereignty, which will become increasingly important. A situated intelligence. It raises an essential question: when does the use of AI become relevant? And perhaps it’s precisely in this back-and-forth, between automation and analogy, between agents and humans, that the real reflection takes place, far from simplistic and sensationalist narratives that forget that, whatever the technological feats, fundamental human needs remain unchanged.
MD
My book (in French) is out! You can buy it at La Fnac, on Amazon, or directly on my publisher’s website.



“The value of an LLM therefore lies not only in generation, nor even in pattern detection, but in this ability to multiply perspectives from a single input.”
And boy do we need multiple perspectives in our era of government propaganda and social media echo chambers!
I am currently using Gemini3 to play the role of an editor for my next book. Of course it hallucinates sometimes but the multiple perspectives are really interesting!
Again, Marie, your insights hit a nerve!! Thanks!
Superb piece, bravo!