A two-hour conversation with an artificial intelligence (AI) model is all it takes to make an accurate replica of someone’s personality, researchers have discovered.

In a new study published Nov. 15 to the preprint database arXiv, researchers from Google and Stanford University created “simulation agents” — essentially, AI replicas — of 1,052 individuals based on two-hour interviews with each participant. These interviews were used to train a generative AI model designed to mimic human behavior.

To evaluate the accuracy of the AI replicas, each participant completed a battery of personality tests, social surveys and logic games, then repeated the same battery two weeks later. When the AI replicas took the same tests, they matched their human counterparts’ responses with 85% accuracy.
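That 85% figure reflects the two-round design: the agents’ accuracy is reported relative to how consistently each participant reproduced their own answers two weeks later, so an agent is not penalized for ordinary human inconsistency. Here is a minimal Python sketch of that kind of normalized score, using hypothetical answer data; the paper’s exact scoring may differ.

```python
# Minimal sketch of a normalized accuracy score, assuming answers are coded
# as categorical labels. All names and data are hypothetical illustrations.

def accuracy(a, b):
    """Fraction of questions on which two answer lists agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Hypothetical answers to the same five survey questions.
human_round_1 = ["agree", "no", "often", "agree", "yes"]
human_round_2 = ["agree", "no", "rarely", "agree", "yes"]   # two weeks later
agent_answers = ["agree", "no", "often", "neutral", "no"]

raw = accuracy(agent_answers, human_round_1)          # agent vs. human: 0.60
consistency = accuracy(human_round_2, human_round_1)  # human vs. self: 0.80
normalized = raw / consistency                        # 0.75 of the human ceiling

print(f"raw={raw:.2f}, consistency={consistency:.2f}, normalized={normalized:.2f}")
```

Normalizing this way treats each participant’s own test-retest consistency as the ceiling an agent could realistically reach.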

The paper proposed that AI models that emulate human behavior could be useful across a variety of research scenarios, such as evaluating the effectiveness of public health policies, understanding responses to product launches, or even modeling reactions to major societal events that might otherwise be too costly, challenging or ethically complex to study with human participants.

Related: AI speech generator ‘reaches human parity’ — but it’s too dangerous to release, scientists say

“General-purpose simulation of human attitudes and behavior — where each simulated person can engage across a range of social, political, or informational contexts — could enable a laboratory for researchers to test a broad set of interventions and theories,” the researchers wrote in the paper. Simulations could also help pilot new public interventions, develop theories around causal and contextual interactions, and increase our understanding of how institutions and networks influence people, they added.

To create the simulation agents, the researchers conducted in-depth interviews that covered participants’ life stories, values and opinions on societal issues. This enabled the AI to capture nuances that typical surveys or demographic data might miss, the researchers explained. Most importantly, the open-ended structure of these interviews gave participants the freedom to highlight what mattered most to them personally.

The scientists used these interviews to generate personalized AI models that could predict how individuals might respond to survey questions, social experiments and behavioral games. This included responses to the General Social Survey, a well-established tool for measuring social attitudes and behaviors; the Big Five Personality Inventory; and economic games, like the Dictator Game and the Trust Game.
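One way to build this kind of agent is to condition a general-purpose language model on a participant’s full interview transcript at query time and ask it to answer as that person. The Python sketch below illustrates the idea only; `call_llm`, the prompt wording and the example file are hypothetical placeholders, not the researchers’ actual implementation.

```python
# Sketch of the interview-conditioning idea: feed a participant's transcript
# to a language model and ask it to answer a survey question as that person.
# call_llm is a hypothetical stand-in for a real LLM API; the prompt wording
# is illustrative, not the paper's actual prompt.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("replace with a real LLM API call")

def predict_answer(transcript: str, question: str, options: list[str]) -> str:
    prompt = (
        "Below is a two-hour interview with a study participant.\n\n"
        f"{transcript}\n\n"
        "Answer the following survey question exactly as this person would, "
        f"choosing one option from {options}.\n\n"
        f"Question: {question}\nAnswer:"
    )
    return call_llm(prompt).strip()

# Hypothetical usage:
# answer = predict_answer(open("participant_042.txt").read(),
#                         "Should government spending on science increase?",
#                         ["increase", "keep the same", "decrease"])
```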

Although the AI agents closely mirrored their human counterparts in many areas, their accuracy varied across tasks. They were particularly good at replicating responses to personality surveys and questions about social attitudes, but less accurate at predicting behavior in interactive games involving economic decision-making. The researchers noted that AI typically struggles with tasks that involve social dynamics and contextual nuance.

They also acknowledged the potential for the technology to be abused. AI and “deepfake” technologies are already being used by malicious actors to deceive, impersonate, abuse and manipulate other people online. Simulation agents can also be misused, the researchers said.

However, they said the technology could let us study aspects of human behavior in ways that were previously impractical, by providing a highly controlled test environment without the ethical, logistical or interpersonal challenges of working with humans.

In a statement to MIT Technology Review, lead study author Joon Sung Park, a doctoral student in computer science at Stanford, said, “If you can have a bunch of small ‘yous’ running around and actually making the decisions that you would have made — that, I think, is ultimately the future.”
