Science

‘Not how you build a digital mind’: How reasoning failures are preventing AI models from achieving human-level intelligence

By News Room | April 2, 2026

Architectural constraints in today’s most popular artificial intelligence (AI) tools may limit how much more intelligent they can get, new research suggests.

A study published Feb. 5 on the preprint arXiv server argues that modern large language models (LLMs) are inherently prone to breakdowns in their problem-solving logic, known as “reasoning failures.”

Reasoning failures occur when an LLM loses track of key information needed to reliably solve a task, resulting in incorrect answers to seemingly straightforward problems. The paper, which was presented as a review of existing research, looked specifically at transformer models, a type of neural network architecture that underpins popular AI chatbots including ChatGPT, Claude and Google Gemini.


Based on LLMs’ performance on evaluations such as Humanity’s Last Exam, some scientists say the underlying neural network architecture can one day lead to a model capable of reaching human-level cognition. While transformer architecture makes LLMs extremely capable at tasks like language generation, the researchers argue that it also inhibits the kind of reliable logical processes needed to achieve true human-level reasoning.

“LLMs have exhibited remarkable reasoning capabilities, achieving impressive results across a wide range of tasks,” the researchers said in the study. “Despite these advances, significant reasoning failures persist, occurring even in seemingly simple scenarios … This failure is attributed to an inability of holistic planning and in-depth thinking.”

Limitations with LLMs

LLMs are trained on huge amounts of text data and generate responses to user prompts by predicting, word by word, a plausible answer. They do this by stringing together units of text, called “tokens,” based on statistical patterns learned from their training data.
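This word-by-word prediction can be illustrated with a deliberately tiny sketch: a bigram model that counts which token follows which in its training text and always emits the most frequent continuation. The corpus here is invented for illustration; real LLMs learn from billions of tokens with vastly richer statistics, but the principle of choosing continuations by learned frequency is the same.

```python
from collections import Counter, defaultdict

# Toy training corpus; real LLMs learn from billions of tokens.
corpus = "the cat sat on the mat because the cat was tired".split()

# Count which token follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    """Return the statistically most likely next token, or None if unseen."""
    counts = following.get(token)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" (follows "the" twice, vs. "mat" once)
```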

Transformers also use a mechanism called “self-attention” to keep track of relationships between words and concepts over long strings of text. Self-attention, combined with their massive training databases, is what makes modern chatbots so good at generating convincing answers to user prompts.
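A minimal sketch of that self-attention computation, assuming for simplicity that queries, keys, and values are the raw token vectors (a real transformer applies learned projection matrices to each):

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a sequence of token vectors.

    x: (seq_len, d) array. Queries, keys, and values are the raw inputs
    here for clarity; a real transformer uses learned projections.
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                    # pairwise token relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ x                               # each token blends in the others

# Three toy 2-dimensional "token embeddings"
tokens = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
out = self_attention(tokens)
print(out.shape)  # (3, 2)
```

Each output row is a weighted mix of every input token, which is how the model tracks relationships across a long string of text.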


However, LLMs don’t do any actual “thinking” in the conventional sense; their responses are the output of a statistical prediction process. For long tasks, particularly those that require genuine problem-solving across multiple steps, transformers can lose track of key information and default to the patterns learned from their training data. This results in reasoning failures.


“This fundamental weakness extends beyond basic tasks, to compositions of math problems, multi-fact claim verification, and other inherently compositional tasks,” the researchers said in the study.

Reasoning failures are also why an LLM often circles back to the same response to a user query even after being told it is incorrect, or produces a different answer when the same question is phrased slightly differently, even when the model is prompted to explain its reasoning step by step.



Federico Nanni, a senior research data scientist at the U.K.’s Alan Turing Institute, argues that what LLMs typically present as reasoning is mostly window dressing.

“People figured out that if you tell an LLM, instead of answering directly, to ‘think step by step’ and write out a reasoning process first, it often gets the right answer,” Nanni told Live Science. “But that’s a trick. It’s not real reasoning in the human sense — it’s still just next‑token prediction dressed up as a chain of thought,” he said. “When we say these models ‘reason,’ what we actually mean is that they write out a reasoning process — something that sounds like a plausible chain of reasoning.”
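The “think step by step” trick Nanni describes is, mechanically, just extra prompt text. A sketch of how the two prompt styles differ — the wording and the `ask_llm` call are illustrative assumptions, not any particular vendor’s API:

```python
def make_prompt(question, chain_of_thought=False):
    """Wrap a question as a direct prompt or a chain-of-thought prompt."""
    if chain_of_thought:
        # Asking the model to narrate its reasoning often improves accuracy,
        # but the narration is still next-token prediction, not true reasoning.
        return (question +
                "\nThink step by step and write out your reasoning "
                "before giving a final answer.")
    return question

question = "A train leaves at 3:40 p.m. and the trip takes 95 minutes. When does it arrive?"
direct = make_prompt(question)
stepwise = make_prompt(question, chain_of_thought=True)
# answer = ask_llm(stepwise)  # hypothetical call to a chat-model API
```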

Gaps in existing AI benchmarks

Current ways to assess LLM performance fall short in three key areas, the researchers found. First, results can be affected by rewording a prompt. Second, benchmarks degrade and become contaminated the more they’re used. And finally, they only assess the outcome, rather than the reasoning process a model used to reach its conclusion.

This means current benchmarks may significantly overstate how capable LLMs are and understate how often they fail in real-world use.
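One way to probe the first gap, sensitivity to rewording, is to score how consistently a model answers paraphrases of the same question. This is a sketch of such a metric, not the study’s methodology; the toy “model” below is a stand-in for a real LLM call:

```python
from collections import Counter

def consistency_score(model, paraphrases):
    """Fraction of paraphrased prompts that yield the modal answer.

    `model` is any callable mapping a prompt string to an answer string;
    in practice it would wrap a real LLM API. A benchmark that checks only
    one phrasing per question would miss the variation this measures.
    """
    answers = [model(p) for p in paraphrases]
    modal_count = Counter(answers).most_common(1)[0][1]
    return modal_count / len(answers)

# A toy "model" that is sensitive to phrasing, to exercise the metric:
toy = lambda p: "4" if "2 + 2" in p else "four"
prompts = ["What is 2 + 2?",
           "What do you get adding two and two?",
           "Compute 2 + 2."]
print(consistency_score(toy, prompts))  # 2 of 3 paraphrases agree -> 0.666...
```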

LLMs’ reasoning failures may limit their real-world applications. (Image credit: da-kuk/Getty Images)

“Our position is not that benchmarks are flawed, but that they need to evolve,” study co-author Peiyang Song, a computer science and robotics student at Caltech, told Live Science via email. Likewise, benchmarks tend to leak into LLM training data, Nanni said, meaning later models can effectively game the tests rather than genuinely solve them.

“On top of that, now that models are deployed in production, usage itself becomes a kind of benchmark,” Nanni said. “You put the system in front of users and see what goes wrong — that’s the new test. So yes, we need better benchmarks, and we need to rely less on AI to check AI. But that’s very hard in practice, because these tools are now woven into how we work, and it’s extremely convenient to just use them.”

A new architecture for AGI?

Unlike other recent research, the new study doesn’t argue that neural-network approaches to AI are a dead end in the quest to achieve artificial general intelligence (AGI). Rather, the researchers liken it to the early days of computing, noting that understanding why LLMs fail is key to improving them.

However, they do argue that simply training models on more data or scaling them up are unlikely to resolve the issue on their own. This means developing AGI may require a fundamentally different approach to how models are built.

“Neural networks, and LLMs in particular, are clearly part of the AGI picture. Their progress has been extraordinary,” Song said. “However, our survey suggests that scaling alone is unlikely to resolve all reasoning failures … [meaning] reaching human-level reasoning may require architectural innovations, stronger world models, improved robustness training, and deeper integration with structured reasoning and embodied interaction.”

Nanni agreed. “From a philosophy-of-mind point of view, I’d say we’ve basically found the limits of transformers. They’re not how you build a digital mind,” he said. “They model text extremely well, to the point that it’s almost impossible to tell if a passage was written by a human or a machine. But that’s what they are: language models … There’s only so far you can push this architecture.”

© 2026 USA Times. All Rights Reserved.