I was on vacation this week, paddleboarding on the sea. The weather was beautiful. Everything felt peaceful.
Then, a large white feather landed at my feet. And it stopped me. Truly. I didn’t know why. I just stood there, looking at it for a long time.
I could have moved on. I often do. These quiet stirrings come and go. They hover at the edge of awareness, close enough to touch me, never clear enough to grasp. Trying to make sense of them often feels complicated. Too much weight for too little clarity.
But this time, back on solid ground, I opened ChatGPT. I typed, “A feather fell on me. It stopped me. Do you know what that could mean?” It responded with stories of feathers in spiritual traditions, signs, messages from loved ones. And then it struck me: that day was my mother’s birthday. She’s been gone for a long time.
Was it chance? A coincidence? Maybe. But that isn’t really the point. What matters is not the event itself, but the connection we choose to create with it. The meaning we allow it to hold.
I believe this deeply: absolute clarity can unravel us. What saves us are the quiet adjustments we make to reality. The gentle agreements that bend the world just enough to make it livable.
I call these moments the break in the flow.
You walk, caught in the ordinary flow of life. And suddenly, something stops you. A word. A gesture. A detail. Almost nothing. But it lingers. A vibration that whispers, “Wait.”
It’s in moments like these that something begins to shift. Because thinking isn’t always about solving. Sometimes, it’s simply welcoming a sensation. Letting it rise. Listening to it until it turns into a question.
It may be no coincidence that in French, penser (to think) echoes panser (to tend, to heal). To tend to reality is to offer it a little care. A little presence.
And that’s precisely what seems to be missing in many of today’s conversations about AI. We talk about it a lot, but often without nuance, without that subtle attention to what’s actually there.
You may have come across that MIT study: 54 participants, divided into three groups, asked to write essays with or without help from ChatGPT. The result? Those who used AI showed lower brain activity, weaker memory, and less original output. And immediately, the headlines followed: “ChatGPT can rot your brain.”
But if you really read the study, things start to make sense. Small sample. School-style task. Data not yet peer-reviewed. And above all: the study doesn’t say ChatGPT makes you dumb. It shows that if you delegate, you think less. As if that needed a study…
There’s more. In the final session, some participants rewrote an essay on a topic they had already explored without AI. Their thoughts were already formed. Meanwhile, others, who had written their first version with ChatGPT, were now writing without it, in a sense reflecting on the topic for the first time. Comparing their brain activity is comparing a second thought to a first. It’s not a test of intelligence. It’s a methodological bias.
Another blind spot: the study relies on EEG, which picks up surface-level brain activity, and it interprets any drop in that activity as disengagement. But the brain isn’t a gym. Less activity doesn’t mean less thinking. What matters is the quality of the connections, not just their intensity. And that is something EEG doesn’t show.
All in all, it’s striking how often these conversations lack nuance.
Yes, if you hand over an entire essay to a tool, the result is bound to be thinner. Something gets lost. But if you use it to give voice to a sensation, to explore a vague intuition, to shape something that was still unspoken, then no, it’s not a loss. It’s the opening of a different way of thinking.
And that way of thinking is well documented.
Cognitive science shows us that the brain perceives long before it understands. It senses, filters, and responds beneath the surface of conscious awareness. This is what we call salience, bottom-up attention, or unconscious adaptive processing.
So maybe, in trying to control, classify, and measure everything, we forget something simple. Thinking often begins with feeling.
And some tools are beginning to explore that space: the invisible, the unspoken, the felt.
A friend recently mentioned something called Mind Reasoner. I haven’t tried it yet, but the idea stayed with me. A tool designed to listen between the lines. To analyze what’s present in a message but never quite said. What circulates just beneath the surface.
And maybe that’s one of AI’s deeper roles. Not to explain away what we feel, but to give it a language.
Which is why I don’t believe we’ll think less. I believe we’ll think differently. Maybe we’re beginning to unlock ways of thinking that are more instinctive, more sensitive, more fragmented too, but no less meaningful. No less alive.
Maybe we’re beginning to reconnect two languages we’ve kept apart for too long: intuition and reason.
And no prompt can trace that kind of path for us. It begins quietly. With something small. A feather. A pause. And a voice within that simply says: “Wait. Listen. Feel.”
MD