Not Thinking. Calculating. Reframing How We Talk About AI Before It Redefines Us
Let’s start here.
When it says “thinking,” it means calculating.
Calculating gets closer to what these models are actually doing. They’re not “thinking” in the way humans do, but to their credit, they are performing complex mathematical calculations, statistical pattern matching, and probability estimation to generate outputs.
Recent Apple papers sparked fresh waves of debate, with headlines declaring that these findings shattered the belief that AI systems like large language models are actually thinking.
To many of us who work inside this space and who pay attention to the architecture behind the interface, this wasn’t new information.
What was shattered wasn’t reality. It was a public illusion that has quietly but powerfully been built over the past decade through the words we keep using to describe these systems.
Why Is This Still Surprising to So Many?
Language has been doing heavy narrative work since the beginning.
We have casually described AI models using words that signal human cognition:
Neural networks feel brain-like.
Learning feels like acquiring knowledge.
Reasoning feels intentional.
Hallucinating feels creative.
Understanding feels relational.
Each of these terms maps psychologically onto how humans operate. But they are metaphors, not mechanics.
In reality:
Neural networks are mathematical weight adjustments.
Learning is parameter optimization over vast datasets.
Reasoning is sophisticated token prediction.
Hallucination is probabilistic error, not creative expansion.
Understanding is simply next-token prediction shaped by context, not comprehension.
The reason so many people still feel jarred when the technical reality surfaces is simple. They have been responding to the metaphor, not the math.
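To make the math concrete, here is a deliberately tiny sketch, in Python, of the shape of that loop: a toy bigram model that counts which word tends to follow which, turns the counts into probabilities, and samples. The corpus, names, and numbers below are illustrative assumptions, not anyone’s production system. Real models replace the counting with billions of learned weights and attention layers, but the loop has the same shape: score the candidate next tokens, pick one, repeat.

import random
from collections import Counter, defaultdict

# Illustrative toy corpus; any text would do.
corpus = "the model predicts the next token the model predicts the next word".split()

# "Learning": tally which token follows which. In a real model this is
# parameter optimization over vast datasets; here it is literal counting.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_token(current):
    # "Reasoning": convert counts to probabilities and sample the next token.
    counts = follows[current]
    if not counts:
        return None  # nothing ever followed this token in the data
    tokens = list(counts)
    total = sum(counts.values())
    weights = [counts[t] / total for t in tokens]
    return random.choices(tokens, weights=weights)[0]

# "Generating": repeat the prediction step. No goal, no comprehension,
# just a chain of probability-weighted lookups.
token = "the"
output = [token]
for _ in range(8):
    token = next_token(token)
    if token is None:
        break
    output.append(token)
print(" ".join(output))

Nothing in that loop deliberates or understands. And when the toy model reaches a word that nothing ever followed in its data, it simply stops, which is the miniature version of failing passively on novelty.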
The Emotional Attachment Runs Deeper Than Technical Curiosity
Large language models produce highly coherent, natural, even emotionally attuned responses.
They answer like us. They mirror our cadence.
They simulate conversational flow. They produce linguistic fluency that feels relational.
Because language is our primary interface for sensing intelligence, it is easy, almost automatic, to project intention where there is none.
That projection creates a powerful cognitive mirage. If it sounds this good, surely it must be thinking.
Even when people intellectually understand that these systems are predictive models, their emotional experience often tells a different story.
What the Apple Papers Actually Shook Loose
What the Apple research revealed was not some secret betrayal of capability. It simply underscored something the technical community has long known: these models often don’t try unfamiliar tasks.
They don’t experiment or adapt when uncertain. They fail passively when pushed into true novelty.
This struck a nerve because it quietly punctures a popular hope many carry, whether consciously or not, that as we scale these systems, they will naturally evolve into something more general, flexible, and intelligent in a human sense.
Instead, what these findings remind us is that:
There is no aha moment inside these models.
There is no intentional trial-and-error.
There is no “struggle” to understand.
There is no cognitive leap.
There is only highly complex statistical pattern loading, driven by prior data distributions.
And that is both breathtakingly powerful and sharply limited at the same time.
The Real Conflict Isn’t Technical. It’s Emotional
Here is the tension we keep cycling through:
People don’t want AI to replace human consciousness.
But people are disappointed when AI turns out not to be conscious.
We crave models that feel just human enough, but not so human that they unsettle us.
We want machines that dazzle, not disturb. We want magic without mystery. We want intelligence that serves, not agency that competes.
The more fluently these systems generate language, the harder it becomes for people to separate interface from architecture. And that confusion keeps creating waves of misplaced awe followed by misplaced disillusionment.
Why Words Still Matter This Much
The problem isn’t that people are stupid. The problem is that our language doesn’t give them durable mental models.
Without better language, the public will continue to swing between two unstable poles.
At one pole: “It’s amazing, a new digital mind.” At the other: “Wait, it’s not what we thought. We’ve been tricked.”
Both extremes miss the actual reality. These models are neither sentient nor trivial. They are synthetic prediction engines, performing language generation at scales that reshape how knowledge, labor, creativity, and trust will operate for years to come.
Where We Go From Here
We likely won’t stop companies from using anthropomorphic marketing. They have every incentive to keep selling “understanding” and “reasoning” because those words flatter both the technology and the people building it.
But there is a better path forward. We can build a new public literacy layer that gives people language precise enough to anchor them, but accessible enough to stick.
Phrases like:
It’s not thinking. It’s calculating.
It’s not understanding. It’s matching.
It’s not reasoning. It’s predicting.
It’s not conscious. It’s statistical fluency.
It’s not human-like. It’s pattern-deep.
These phrases are not meant to diminish awe. They are meant to contain it, so awe does not turn into destabilization every time new findings surface.
The Middle We Must Build
AI doesn’t need to be human to be revolutionary. But if we keep projecting humanity into these systems through sloppy language, we will continue confusing the public, confusing policymakers, and confusing ourselves.
What’s required now is not flattening the conversation but elevating the nuance.
Precise awe. Respect for scale. Clarity on mechanism. Accountability for impact. And language that can hold both the awe and the clarity.
This is where the real work lives. This is where Graylight Lab lives.
Because until we get the language right, we will keep swinging wildly between being amazed and being afraid. And both reactions leave us vulnerable to the wrong risks.
It’s not thinking. It’s loading.
And the language we build around that truth will determine who leads, who profits, who protects, and who gets protected.
Read the Apple AI paper: https://machinelearning.apple.com/research/illusion-of-thinking
MM1: https://arxiv.org/abs/2403.05530v1
ReALM: https://arxiv.org/abs/2404.07143v1