The big names in artificial intelligence—leaders at OpenAI, Anthropic, Google and others—still confidently predict that AI attaining human-level smarts is right around the corner. But the naysayers are growing in number and volume. AI, they say, just doesn’t think like us.
The work of these researchers suggests there’s something fundamentally limiting about the underlying architecture of today’s AI models. Today’s AIs are able to simulate intelligence by, in essence, learning an enormous number of rules of thumb, which they selectively apply to all the information they encounter.
This contrasts with the many ways that humans and even animals are able to reason about the world and predict the future. We biological beings build “world models” of how things work, which include cause and effect.
For years there was no visibility into how these models produced the results they did, because they were trained rather than programmed, and the vast number of parameters that made up their artificial “brains” encoded information and logic in ways that were inscrutable to their creators. But researchers are developing new tools that allow them to look inside these models. The results leave many doubting that these models are anywhere close to AGI.
‘Bag of heuristics’
Vafa’s own research was an effort to see what kind of mental map an AI builds when it’s trained on millions of turn-by-turn directions like what you would see on Google Maps. Vafa and his colleagues used as source material Manhattan’s dense network of streets and avenues.
The result did not look anything like a street map of Manhattan. Close inspection revealed the AI had inferred all kinds of impossible maneuvers—routes that leapt over Central Park, or traveled diagonally for many blocks. Yet the resulting model managed to give usable turn-by-turn directions between any two points in the borough with 99% accuracy.
Even though its topsy-turvy map would drive any motorist mad, the model had essentially learned separate rules for navigating in a multitude of situations, from every possible starting point, Vafa says.
The vast “brains” of AIs, paired with unprecedented processing power, allow them to learn how to solve problems in a messy way that would be impossible for a person.
Thinking or memorizing?
Other research looks at the peculiarities that arise when large language models try to do math, something they have historically been bad at, though they’re getting better. Some studies show that models learn one set of rules for multiplying numbers in a certain range—say, from 200 to 210—and a different set for numbers in another range. If you think that’s a less than ideal way to do math, you’re right.
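The range-by-range rule-learning those studies describe can be caricatured in a few lines of Python. Everything here is invented for illustration—the function names and the nearest-neighbor fallback are stand-ins, not anyone’s actual model internals: a “multiplier” that memorizes products only for pairs it saw in training answers perfectly inside that range, and confidently wrongly outside it.

```python
def train_lookup_multiplier(ranges):
    """'Training': memorize the product of every pair of numbers
    seen in the training ranges, instead of learning one general
    multiplication algorithm."""
    table = {}
    for lo, hi in ranges:
        for a in range(lo, hi + 1):
            for b in range(lo, hi + 1):
                table[(a, b)] = a * b
    return table

def heuristic_multiply(table, a, b):
    """Inside a trained range: flawless recall. Outside it: fall back
    on a crude rule of thumb (here, reuse the answer for the nearest
    memorized pair), which drifts badly off."""
    if (a, b) in table:
        return table[(a, b)]
    nearest = min(table, key=lambda k: abs(k[0] - a) + abs(k[1] - b))
    return table[nearest]

table = train_lookup_multiplier([(200, 210)])
exact = heuristic_multiply(table, 205, 207)   # trained range: correct
wrong = heuristic_multiply(table, 305, 307)   # outside it: wrong product
```

A person who knows long multiplication handles 305 × 307 as easily as 205 × 207; the lookup-table “multiplier” cannot, because it never learned the algorithm, only the answers.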
All of this work suggests that under the hood, today’s AIs are overly complicated, patched-together Rube Goldberg machines full of ad-hoc solutions for answering our prompts. Understanding that these systems are long lists of cobbled-together rules of thumb could go a long way to explaining why they struggle when they’re asked to do things even a little bit outside their training, says Vafa. When his team blocked just 1% of the virtual Manhattan’s roads, forcing the AI to navigate around detours, its performance plummeted.
This illustrates a big difference between today’s AIs and people, he adds. A person might not be able to recite turn-by-turn directions around New York City with 99% accuracy, but they’d be mentally flexible enough to avoid a bit of roadwork.
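That contrast can be sketched concretely. Everything below is invented for illustration—a tiny grid stands in for Manhattan, and a lookup table stands in for the model’s memorized rules: a navigator that searches an explicit map of the streets reroutes around a closure automatically, while one that blindly follows stored turn-by-turn rules drives straight into it.

```python
from collections import deque

def bfs_route(grid_size, blocked, start, goal):
    """World-model navigation: breadth-first search over the actual
    street grid, so detours around closures are found automatically."""
    def neighbors(node):
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < grid_size and 0 <= nxt[1] < grid_size:
                if frozenset([node, nxt]) not in blocked:
                    yield nxt
    prev = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nb in neighbors(node):
            if nb not in prev:
                prev[nb] = node
                queue.append(nb)
    return None  # goal unreachable

def build_route_table(grid_size, goal):
    """'Training': memorize one next-step rule per intersection,
    computed back when no streets were blocked."""
    table = {}
    for x in range(grid_size):
        for y in range(grid_size):
            if (x, y) != goal:
                table[(x, y)] = bfs_route(grid_size, set(), (x, y), goal)[1]
    return table

def memorized_route(table, start, goal):
    """Bag-of-heuristics navigation: follow the stored rules,
    with no map to consult when conditions change."""
    path = [start]
    while path[-1] != goal:
        path.append(table[path[-1]])
    return path

goal = (3, 3)
blocked = {frozenset([(3, 2), (3, 3)])}   # one closed street segment
table = build_route_table(4, goal)
detour = bfs_route(4, blocked, (0, 0), goal)       # reroutes around it
memorized = memorized_route(table, (0, 0), goal)   # drives into it
```

Closing a single street segment—the analogue of blocking 1% of the virtual Manhattan—leaves the search-based navigator unbothered and the memorized rules confidently wrong.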
This research also suggests why many models are so massive: They have to memorize an endless list of rules of thumb, rather than compressing that knowledge into a mental model the way a person can. It might also help explain why they must be trained on such enormous amounts of data, whereas a person can pick something up after just a few trials: To derive all those individual rules of thumb, they have to see every possible combination of words, images, game-board positions and the like. And to really train them well, they need to see those combinations over and over.
AI researchers have gotten ahead of themselves before. In 1970, Massachusetts Institute of Technology professor Marvin Minsky told Life magazine that a computer would have the intelligence of an average human being in “three to eight years.”
Copyright ©2025 Dow Jones & Company, Inc. All Rights Reserved.
Appeared in the April 26, 2025, print edition as 'We Now Know How AI ‘Thinks.’ It Isn’t Thinking at All.'.