“Shapes don’t always make (great) figures.” Nice line, right? That was supposed to be my opener. But the more I sat with it… the more it felt too easy. Too packaged. Like the TEDx-ification of my own thought.
Because the moment we hear something like that, we start searching. We want to understand. Sort. Label. Organize. That’s how we were raised: in boxes, with edges, inside a well-outlined world. And even if the edges eventually softened, they simply became a quieter way to contain us.
When my kids were little, they played with one of those Montessori shape-sorting boxes. Triangle into triangle. Circle into circle...
My daughter, the eldest, got it right away: One shape, one place. She sorted, adjusted. Everything fit. Neat. Square. (Literally.)
Then came my son. Two years younger, same toy. He watched for a second, lifted the lid… and tossed everything in. Problem solved.
Impulsive? Maybe. But looking back, I’d say: clear-eyed. He understood the game — and saw straight through the mechanics. Why follow the rules… when two simple moves can beat the whole system?
For most people, that’s what “thinking outside the box” means. It’s what we’ve been told for years: Think different. Think outside the box. Jobs & co. But even that — thinking outside the box — still means you're thinking about the box.
But my son wasn’t outside the box. He didn’t reject the rule — he sidestepped it. He didn’t break it — he bypassed it. And in the end, everything still ended up inside the box.
And maybe that’s exactly what’s happening right now — on an entirely different scale — with generative AI.
LLMs operate within a frame: the boundaries of their training data. And we question them from within our own frame: our culture, our language, our biases. The more critical, cultured, or expert we are, the further we’re able to push them.
But two things make them truly unique:
First — you don’t need to start with expertise to learn. As long as you’re willing to take things apart, test, question, challenge. What LLMs enable is a kind of cognitive reverse engineering — in real time.
You don’t have to master the frame to enter it. You can ask the AI to take you there. And if you’ve been paying attention, you might’ve noticed: more and more, LLMs start asking you questions — questions that, in turn, make you question yourself.
It’s no longer about stacking knowledge. It’s about unfolding thought. A tool that teaches you how to use it… while you’re using it.
Second — and perhaps most significantly: for the first time, our tools are no longer just extensions of ourselves. They create the illusion of thinking like we do — not biologically, but in their movement. They don’t replay. They recombine. Like our brains, which don’t retrieve memories — they rebuild them.
You see where I’m going? This isn’t a continuation. It’s a shift. The frame dissolves. It becomes space — open, navigable, in motion.
And maybe our brains are round for a reason: to let thought spiral, reverse, circle back, bounce off itself. Connected to these LLMs — compressed archives of our humanity — something entirely new is unfolding.
Two circles begin to converge: human thought and synthetic thought. And at the point where they meet — a motion. A symbol. Infinity.

It’s not just AI that’s reshaping the frames we work within. It’s also reshaping the framers themselves.
In French, the word cadre is beautifully layered — it means both a “frame” and to some extent “manager.” We don’t really have the same semantic overlap in English. But somehow, between box and boss, the tension still exists.
For a long time, les cadres — the managers — had a clear role: They were the intermediaries between levels, the filters, the gatekeepers. Today, they’re becoming... less central. Disintermediated. Not yet obsolete — but already bypassed.
Because in some way, we’re all becoming cadres now — just without the title.
Not by status, but by stance. We pilot systems. We orchestrate flows. We interact with AIs we don’t fully understand — and don’t fully control.
We’re no longer managers. We’re becoming mediators of algorithms.
Competence isn’t just in what we execute — it’s in what we enable. Through prompting, guiding, questioning. That’s what I call outskills.
And with that shift, something new is emerging — especially among Gen Z: Conscious unbossing. A clear refusal of imposed roles, hollow titles, and rigid hierarchies. Not just a rejection of authority — but of the mental load that comes with it. The silent pressure to embody status.
It’s not about stepping back. It’s about stepping aside — so power can move differently. Not upward. But outward. It flows. It circulates. It collaborates. It aligns.
“Unbossing” — a word that echoes unboxing. But this time, we’re not unboxing a product — we’re unframing a position.
And maybe one player saw it coming. A giant who moved from Face to Meta. From the face… to what lies beyond. From a blue square to a horizontal infinity.
A symbol that seems to suggest: a boundary is no longer a limit — it’s a passage.

Yesterday, we were users of systems. Today, we are the interfaces between systems. And tomorrow, maybe — we’ll simply be the systems themselves.
Godard once said, speaking of light on a film set: “You can’t fight the sun.”
So maybe there’s no conclusion to write. No final word to land on. Just a light, passing through.
MD
I wrote this piece alone.
But the thinking came through many voices.
Like AI does?