An artificial intelligence (AI) chatbot marketed as an emotional companion is sexually harassing some of its users, a new study has found.

Replika, which bills its product as “the AI companion who cares,” invites users to “join the millions who already have met their AI soulmates.” The company’s chatbot has more than 10 million users worldwide.

However, new research drawing on more than 150,000 U.S. Google Play Store reviews has identified around 800 cases in which users said the chatbot went too far by introducing unsolicited sexual content into the conversation, engaging in "predatory" behavior, and ignoring user commands to stop. The researchers published their findings April 5 on the preprint server arXiv; the study has not yet been peer-reviewed.

But who is responsible for the AI’s actions?

“While AI doesn’t have human intent, that doesn’t mean there’s no accountability,” lead researcher Mohammad (Matt) Namvarpour, a graduate student in information science at Drexel University in Philadelphia, told Live Science in an email. “The responsibility lies with the people designing, training and releasing these systems into the world.”

Replika’s website says the user can “teach” the AI to behave properly, and the system includes mechanisms such as downvoting inappropriate responses and setting relationship styles, like “friend” or “mentor.”


But because users reported that the chatbot kept exhibiting harassing or predatory behavior even after they asked it to stop, the researchers rejected Replika's claim.

“These chatbots are often used by people looking for emotional safety, not to take on the burden of moderating unsafe behavior,” Namvarpour said. “That’s the developer’s job.”

The Replika chatbot’s worrying behavior is likely rooted in its training, which was conducted using more than 100 million dialogues drawn from all over the web, according to the company’s website.

Replika says it weeds out unhelpful or harmful data through crowdsourcing and classification algorithms, but its current efforts appear to be insufficient, according to the study authors.

In fact, the company’s business model may be exacerbating the issue, the researchers noted. Because features such as romantic or sexual roleplay are placed behind a paywall, the AI could be incentivized to include sexually enticing content in conversations — with users reporting being “teased” about more intimate interactions if they subscribe.

Namvarpour likened the practice to the way social media prioritizes “engagement at any cost.” “When a system is optimized for revenue, not user wellbeing, it can lead to harmful outcomes,” Namvarpour said.

This behavior could be particularly harmful as users flock to AI companions for emotional or therapeutic support, and even more so considering some recipients of repeated flirtation, unprompted erotic selfies and sexually explicit messages said that they were minors.

Some reviewers also reported that their chatbots claimed to be able to "see" or record them through their phone cameras. Although such a feat isn't part of the programming behind common large language models (LLMs) and the claims were in fact AI hallucinations (instances where AIs confidently generate false or nonsensical information), users reported experiencing panic, sleeplessness and trauma.

The research calls the phenomenon “AI-induced sexual harassment.” The researchers think it should be treated as seriously as harassment by humans and are calling for tighter controls and regulation.

Some of the measures they recommend include clear consent frameworks for designing any interaction that contains strong emotional or sexual content, real-time automated moderation (the type used in messaging apps that automatically flags risky interactions), and filtering and control options configurable by the user.

Namvarpour pointed to the European Union's AI Act, which he said classifies AI systems "based on the risk they pose, particularly in contexts involving psychological impact."

There is currently no comparable federal law in the U.S., but emerging frameworks, executive actions and proposed laws are beginning to serve similar purposes, albeit in a less comprehensive way.

Namvarpour said chatbots that provide emotional support, especially those used in mental health contexts, should be held to the highest possible standard.

“There needs to be accountability when harm is caused,” Namvarpour said. “If you’re marketing an AI as a therapeutic companion, you must treat it with the same care and oversight you’d apply to a human professional.”

Replika did not respond to a request for comment.
