Deslop Yourself
For the Common Good
You all know spam, that word born from a cheap can of processed meat. In 1970, the Monty Python crew turned it into a joke: a word repeated so many times it drowned out everything else. The internet, as usual, took the joke literally. Spam became exactly that: a power play where we’re force-fed bland, unwanted content until our minds are saturated.
With the rise of generative AI, the phenomenon has gone into overdrive. Text, images, sound, video - everything can be created with a click. Content without effort, without a point of view, without an author. People are calling it AI slop, digital mush. More insidious than spam because it looks like quality. But scratch the surface and you can feel something's off. The fix? Ironically, hiring humans to clean up the mess.
It’s a perfect snapshot of our time: humans build machines to write for them, then invent software to “rehumanize” what the machines produce. We watermark the artificial, then immediately design ways to erase the mark. The cycle repeats relentlessly, a system devouring its own substance, not to live, but merely to survive.
Alongside this inflation comes another threat: algorithmic repetition. Researchers at Oxford call it the curse of recursion: when models are retrained on data they themselves generated, their outputs become flat, predictable, and factually fuzzy. The more the machine feeds on itself, the poorer its creations become.
That pattern scales up into something larger: model collapse. When AIs keep learning from data already filtered or produced by other models, the whole ecosystem starts folding inward. Diversity erodes, errors multiply, nuance disappears. In the end, language itself - the living fabric of human connection - risks losing its pulse.
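The mechanism is easy to feel in miniature. Here is a toy sketch, not the Oxford experiments themselves: a "model" that just fits a Gaussian to its data, and a generator that, like mode-seeking models, prefers high-probability outputs (I stand in for that preference with a keep-within-one-standard-deviation rule of my own). Retrained generation after generation on its own output, its diversity collapses.

```python
import random
import statistics

random.seed(0)

def train_generation(data):
    """'Train' a toy model: fit a Gaussian (mean, std) to the data."""
    return statistics.fmean(data), statistics.pstdev(data)

def generate(mu, sigma, n):
    """Sample from the model, but keep only 'high-probability' outputs
    (within one std of the mean) - a crude stand-in for the way
    generative models favor their most likely continuations."""
    out = []
    while len(out) < n:
        x = random.gauss(mu, sigma)
        if abs(x - mu) <= sigma:
            out.append(x)
    return out

# Generation 0: "human" data, rich in variety.
data = [random.gauss(0.0, 1.0) for _ in range(2000)]
stds = []
for gen in range(8):
    mu, sigma = train_generation(data)
    stds.append(sigma)
    data = generate(mu, sigma, 2000)  # retrain on the model's own output

print(f"diversity (std), generation 0: {stds[0]:.3f}")
print(f"diversity (std), generation 7: {stds[-1]:.3f}")
```

Each cycle narrows the distribution: the tails - the rare, surprising material - are the first to go, and the model ends up echoing an ever-flatter version of itself.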
And this logic of enclosure isn’t just technical. It seeps into our discourse, our imagination, our elites. The latest example? The CEO of Opendoor. In a long tweet, he declared that anyone who doesn’t “think by default through AI” will soon be obsolete. That’s not a vision; it’s a command. It’s the rhetoric of “augmentation” masking a logic of substitution, a managerial worldview where the human becomes a process to optimize. Looking at Opendoor’s stock price on Nasdaq, you might say the “AI-first” prophecy hasn’t exactly found its miracle. Let’s check back in a year and see how strong Wall Street’s faith really is.
That’s exactly why I need to say this again: I do believe in AI. But not the kind that amplifies noise and accelerates sameness. I believe in four principles.
First, education, but designed with discernment. Not to mass-produce "prompt engineers," but to foster holistic understanding. Take Georgia Tech's course Art and Generative AI. Inspired by Heidegger's thinking, it reminds students that technology isn't just a set of tools, but a way of inhabiting the world. Students study AI models - perceptrons, Hopfield nets, Boltzmann machines, transformers - to grasp their inner logic, limits, and biases. Alongside this, they practice charcoal drawing, oil painting, and improvisation: slow, sensory acts that reconnect them with attention and presence.
Here, “soft skills” stop being management buzzwords and recover their original meaning: listening, awareness, discernment. Students even learn to provoke the AI, to work with small datasets, force errors, and play with hallucinations. In short, they learn to think with the machine without thinking like it. That’s the kind of pedagogy we need: a living practice where technical understanding, ethical sensitivity, and creativity are intertwined.
Second, if AI automates our work, it shouldn't reduce humans to validators or proofreaders. It should free us, not alienate us. If we gain time, how do we use it? To rebuild meaning? To reconnect with ourselves, with others, and maybe, finally, to resonate with the world again?
Third, I believe in AI as a tool for detection, not duplication; an intelligence that reveals unseen patterns and resonances our human perception might miss.
In other words, AI shouldn’t amplify our production, but expand our perception.
The Armenian Pavilion at the 2025 Venice Biennale shows this beautifully. Microarchitecture Through AI trained a generative model on historical archives to imagine new forms inspired by ancient monuments, then carved them in stone. Here, AI doesn't imitate the past; it extends it. It interprets and projects rather than reproduces. Imagine applying that approach to other fields - health, education, ecology. We'd design AI not as a factory of automatisms, but as an instrument of exploration: intelligence that reveals connections, opens new possibilities. An AI that enlightens rather than replaces.
Fourth, I believe in an ecology of intelligences. Beneath the generative-AI hype, history will probably see this moment as a transitional phase. Yes, it’s a rupture, but not an endpoint. The future lies in complementary approaches: hybrid architectures, world models, neurosymbolic systems - forms of intelligence that engage with reality instead of just predicting the next word. And above all, in their interaction with other living forms of intelligence - human, biological, collective. Thinking the future means thinking in networks: an ecosystem of intelligences, each with its role, its responsibility, its share of meaning to pass on.
Which brings us to the harder question: what do we do now?
French sociologist Gérald Bronner would say we need critical thinking. Of course he’s right. But in a post-truth, maybe even post-reality world, I often wonder how far that can still go. We can’t stay on high alert forever, not when everything around us already exhausts our attention and frays our mental health.
Maybe instead of scrutinizing everything, we need to look differently. To cultivate what astrophysicist and philosopher Aurélien Barrau calls the poetic spirit. Not “poetic” as decoration, but as a stance, a way of seeing that opens cracks in the predictable.
To live poetically in the world is also to let the world live in you, to stay porous to the subtle, open to what escapes measure. Barrau calls it a “de-enclaving of perception”: finding again the interstices, the edges, the unmapped zones of the real.
From that attention, something new might emerge: a renewed sense of connection, of the common good. In the end, to deslop is not a moral stance, but a practice of awareness. For the common good, yes - but above all, to rediscover the sense of the common itself.
MD