
THE ILLUSION OF THINKING AND ALTERNATE FACTS

Apple caused a tidal wave of “I told you so!” LinkedIn articles when it released the study “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity”. It became widely cited, rightly or wrongly, as proof that today’s LLMs fundamentally lack scalable reasoning ability.

I must admit I was going to write a commentary on it too, but a while back I made the arbitrary decision to only write one of these articles a week, so I was too late. A rebuttal had already been released, named “The Illusion of the Illusion of Thinking.”

I love this! I’m looking forward to the rebuttal to the rebuttal with the unwieldy name “The Illusion of the Illusion of the Illusion of Thinking.”

So what is the point they are making? Apple’s article suggested severe limitations in the reasoning engines of current LLMs, claiming that they collapse when faced with lengthy puzzles such as the Tower of Hanoi and river crossing. The rebuttal points out that some of the conclusions were not a result of a lack of reasoning ability, but of resource constraints and poorly framed metrics.
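
To make the resource-constraint point concrete, here is a minimal sketch (mine, not from either paper) of how quickly a fully written-out Tower of Hanoi solution outgrows a model’s output budget. The figures of roughly seven tokens per printed move and a 64,000-token output limit are illustrative assumptions, not numbers from the studies.

```python
# A rough illustration: the optimal Tower of Hanoi solution needs 2**n - 1 moves,
# so simply printing every move quickly exceeds a typical output-token budget.
# Both constants below are assumptions for the sake of the example.

TOKENS_PER_MOVE = 7        # rough guess for text like "move disk 3 from A to C"
OUTPUT_BUDGET = 64_000     # assumed maximum output tokens for a reasoning model

for n in range(5, 21, 5):
    moves = 2**n - 1
    tokens = moves * TOKENS_PER_MOVE
    fits = "fits" if tokens <= OUTPUT_BUDGET else "exceeds budget"
    print(f"{n:>2} disks: {moves:>9,} moves ~ {tokens:>9,} tokens ({fits})")
```

At 10 disks the full solution still fits comfortably, but by 15 disks it is already several times larger than the assumed budget, so a model can “fail” the puzzle without its reasoning ever being tested.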

This invariably leads me to one of my favourite topics. If we don’t properly define what we are measuring, then anything can be proven or disproven. Anything can be true or false. Sam Altman wrote in his blog in January 2025 that “We are now confident we know how to build AGI as we have traditionally understood it.”

Really? As we “traditionally understood it”? What does that even mean?

In Sam Altman’s mind, AGI is an already solved problem, and of course it is. Without a definition, anything is true and everything is false. In his most recent blog post, he states: “OpenAI is a lot of things now, but before anything else, we are a superintelligence research company. We have a lot of work in front of us, but most of the path in front of us is now lit, and the dark areas are receding fast.”

So now ASI is also solved and pretty much around the corner. But again, without an actual definition…

And when these leaders attempt to define a word or a concept, it becomes very self-serving. I’m reminded of Anthropic CEO Dario Amodei and his recent statements about LLMs and hallucinations. He said in an interview: “If you define hallucination as confidently saying something that's wrong, humans do that a lot.”

No one defines hallucinations as “confidently saying something that’s wrong”. No one. Generally we just call that lying, and that can easily be done with no hallucinations at all.

And we are back to my original point. When we redefine words to suit our own purpose, we are creating a form of alternate facts that mean nothing.

