Apple recently published a quietly devastating critique of the AI hype cycle. In a new technical report titled The Illusion of Thinking, its researchers show that even the most advanced Large Reasoning Models (LRMs), like Claude 3.7, DeepSeek-R1 and OpenAI’s o3-mini, aren’t really “thinking.” They’re pattern machines. And beyond a certain threshold of problem complexity, they break.
The study put AI models through a series of structured logic puzzles, such as the Tower of Hanoi and River Crossing, where complexity could be precisely scaled and performance tracked in detail. The results showed three distinct regimes: at low complexity, simpler language models outperformed reasoning-enabled ones; at medium complexity, “thinking” models briefly pulled ahead; and at high complexity, both types collapsed entirely, often failing to produce a single correct solution.
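To see why these puzzles make such clean complexity dials, consider Tower of Hanoi: the minimum solution for n disks is 2^n - 1 moves, so difficulty can be scaled one disk at a time while the correct answer stays mechanically checkable. The short Python sketch below only illustrates that scaling; it is not Apple’s evaluation harness, and the function name and structure are hypothetical.

```python
def hanoi_moves(n, source="A", target="C", spare="B", moves=None):
    """Return the optimal move list for an n-disk Tower of Hanoi instance."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi_moves(n - 1, source, spare, target, moves)  # park n-1 disks on the spare peg
    moves.append((source, target))                    # move the largest disk to the target
    hanoi_moves(n - 1, spare, target, source, moves)  # stack the n-1 disks back on top
    return moves

# Each added disk doubles the minimum solution length: 2**n - 1 moves.
for n in range(1, 11):
    assert len(hanoi_moves(n)) == 2**n - 1
    print(f"{n} disks -> {2**n - 1} optimal moves")
```

A ten-disk instance already requires 1,023 perfectly ordered moves, which gives a sense of how quickly these puzzles outgrow anything that can simply be pattern-matched, and why performance can be tracked so precisely as the dial turns.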
Even more surprisingly, past a certain difficulty the models actually reduced their reasoning effort, spending fewer tokens on harder problems despite having plenty of token budget left. That counterintuitive drop suggests a fundamental limitation: LRMs aren’t reasoning deeper under pressure. They’re just stopping sooner when the pattern doesn’t fit. The illusion, Apple suggests, lies in how we interpret long, plausible outputs as evidence of intelligence.
Good with patterns
What’s really happening, according to the study, is advanced pattern recognition, nothing more. But that insight cuts both ways, since human cognition, too, is largely pattern-based, as Daniel Szabo, an AI investor and systems architect, argued in a LinkedIn post. According to Szabo, decades of research, from Gestalt psychology to modern neuroscience, support the idea that our own “thinking” is, in many ways, also a hierarchy of learned patterns.
“Cognitive psychology, neuroscience, and decades of research all point to one thing: human thinking is largely pattern recognition. We don’t see an object, we recognize a pattern. We don’t solve problems, we recall and apply patterned solutions. Even language, it’s patterns, all the way down”, Szabo wrote.
So why does AI still feel so artificial? Because while humans can abstract, infer, reflect, and adapt when patterns fail, today’s AI cannot. Apple’s study doesn’t just highlight the limits of machines; it subtly reveals the limits of how we understand intelligence itself. The real illusion of thinking isn’t in the model. It’s in us, projecting understanding onto statistical fluency. Closing that gap won’t come from more tokens or larger models, but from rethinking what “intelligence” truly means.
