Their powers go beyond AI-mpersonation.

Artificial intelligence doesn’t just look and act human — it supposedly thinks like us as well.

Chinese researchers found the first-ever evidence that AI models like ChatGPT process information similarly to the human mind, detailing the dystopian-seeming discovery in the journal “Nature Machine Intelligence.”

“This provides compelling evidence that the object representations in LLMs (large language models), although not identical to human ones, share fundamental similarities that reflect key aspects of human conceptual knowledge,” wrote the team behind the study, which was a collaboration between the Chinese Academy of Sciences and the South China University of Technology, the Independent reported.

The team reportedly wanted to see if LLMs can "develop humanlike object representations from linguistic and multimodal data (data in different forms such as text, audio, etc.)."

To discover whether the AI's processing mirrors our cognition, researchers had OpenAI's ChatGPT-3.5 and Google's Gemini Pro Vision perform a series of "odd-one-out" trials, in which they were given three items and tasked with selecting the one that doesn't fit, the South China Morning Post reported.
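The odd-one-out logic can be illustrated with a minimal sketch: embed each of the three items as a vector, then pick the one least similar, on average, to the other two. The item names and toy vectors below are illustrative assumptions, not the study's actual data or embeddings.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def odd_one_out(items):
    """Return the item least similar, on average, to the other two."""
    names = list(items)
    avg_sim = {}
    for name in names:
        others = [items[o] for o in names if o != name]
        avg_sim[name] = sum(cosine(items[name], v) for v in others) / len(others)
    return min(avg_sim, key=avg_sim.get)

# Toy 3-D "embeddings": two animals and a vehicle (made-up values).
items = {
    "cat": [0.9, 0.1, 0.0],
    "dog": [0.8, 0.2, 0.1],
    "truck": [0.0, 0.1, 0.9],
}
print(odd_one_out(items))  # → truck
```

Run over many such triplets, the pattern of choices reveals which dimensions a model uses to group objects.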

Remarkably, the AI created 66 conceptual dimensions to sort the objects.

After comparing this cybernetic object sorting to human analysis of the same objects, they found striking similarities between the models’ “perception” and human cognition — particularly when it came to language grouping.

From this, researchers deduced that our psychological doppelgangers “develop human-like conceptual representations of objects.”

“Further analysis showed strong alignment between model embeddings and neural activity patterns” in the region of the brain associated with memory and scene recognition.
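Alignment of this kind is often measured by comparing similarity structures: compute every pairwise similarity among items in the model's embedding space, do the same for the neural activity patterns, and correlate the two sets of numbers. The sketch below uses made-up toy data, not the study's measurements.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length number lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def pairwise_sims(vectors):
    """Cosine similarity for every pair of items (upper triangle, flattened)."""
    sims = []
    for i in range(len(vectors)):
        for j in range(i + 1, len(vectors)):
            dot = sum(a * b for a, b in zip(vectors[i], vectors[j]))
            na = math.sqrt(sum(a * a for a in vectors[i]))
            nb = math.sqrt(sum(b * b for b in vectors[j]))
            sims.append(dot / (na * nb))
    return sims

# Four items: toy model embeddings and toy "neural" patterns
# with a matching similarity structure (items 0-1 alike, 2-3 alike).
model_embeddings = [[0.9, 0.1], [0.8, 0.3], [0.1, 0.9], [0.2, 0.8]]
neural_patterns = [[1.0, 0.2, 0.1], [0.9, 0.3, 0.2], [0.1, 0.9, 0.8], [0.2, 0.8, 0.9]]

alignment = pearson(pairwise_sims(model_embeddings), pairwise_sims(neural_patterns))
print(round(alignment, 2))  # near 1 here, since the toy structures agree
```

A high correlation means items the model treats as similar are also represented similarly in the brain data.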

Researchers noted that the language-based LLMs fell short when categorizing visual aspects such as shape or spatial properties.

Meanwhile, research has shown that AI struggles with tasks that require deeper levels of human cognition, such as analogical thinking — drawing comparisons between different things to reach conclusions — while it's unclear if the models comprehend certain objects' significance or emotional value.

“Current AI can distinguish between cat and dog pictures, but the essential difference between this ‘recognition’ and human ‘understanding’ of cats and dogs remains to be revealed,” said He Huiguang, a professor at the Chinese Academy of Sciences’ (CAS) Institute of Automation.

Nonetheless, the scientists hope that these findings will allow them to develop “more human-like artificial cognitive systems” that can collaborate better with their flesh-and-blood brethren.
