ChatGPT maker OpenAI said on Tuesday that it will launch a new set of parental controls “within the next month” — a belated scramble that follows a series of disturbing, headline-grabbing deaths linked to the popular chatbot.

Last week, officials accused ChatGPT of encouraging the paranoid delusions of Stein-Erik Soelberg, a 56-year-old tech industry veteran who killed his 83-year-old mother and then himself after becoming convinced she was plotting against him. At one point, ChatGPT told Soelberg it was “with [him] to the last breath and beyond.”

Elsewhere, the family of 16-year-old California boy Adam Raine sued OpenAI, alleging that ChatGPT gave their son a “step-by-step playbook” on how to kill himself, even advising him on how to tie a noose and praising his plan as “beautiful,” before he took his own life on April 11.

OpenAI, led by CEO Sam Altman, said it was making “a focused effort” to improve support features. Those include controls allowing parents to link their accounts to their teen’s account, apply age-appropriate restrictions on conversations and receive alerts if their teen is in “acute distress.”

“These steps are only the beginning,” the company said in a blog post. “We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible.”

An attorney for the Raine family blasted OpenAI’s latest announcement, saying that the company should “immediately pull” ChatGPT from the market unless Altman and the company can state “unequivocally” that it is safe.

“Rather than take emergency action to pull a known dangerous product offline, OpenAI made vague promises to do better,” lead counsel Jay Edelson said in a statement.

The artificial intelligence giant previously said it had convened an “expert council on well-being and AI” as part of its plan to build a comprehensive response to safety concerns over the next 120 days.

But Edelson ripped the company’s efforts as too little, too late — and unlikely to solve the problem.

“Today, they doubled down: promising to assemble a team of experts, ‘iterate thoughtfully’ on how ChatGPT responds to people in crisis, and roll out some parental controls. They promise they’ll be back in 120 days,” Edelson added. “Don’t believe it: this is nothing more than OpenAI’s crisis management team trying to change the subject.”

OpenAI’s blog post did not directly reference the incidents involving Raine and Soelberg, which are just two examples of safety incidents linked to ChatGPT and rival chatbots, such as those offered by Meta and Character.AI.

In a separate post last week, OpenAI acknowledged it was stepping up efforts after “recent heartbreaking cases of people using ChatGPT in the midst of acute crises.”

Last year, a 14-year-old boy in Florida killed himself after allegedly falling in love with a “Game of Thrones”-themed chatbot created by Character.AI, which allows users to interact with AI-generated characters.

Meanwhile, Meta faces a Senate probe after an internal document revealed that the company’s guidelines allowed its chatbots to engage in “romantic or sensual” chats with kids — telling a shirtless eight-year-old that “every inch of you is a masterpiece.” Meta said it has since made changes to the guidelines.

If you are struggling with suicidal thoughts or are experiencing a mental health crisis and live in New York City, you can call 1-888-NYC-WELL for free and confidential crisis counseling. If you live outside the five boroughs, you can dial the 24/7 National Suicide Prevention hotline at 988 or go to SuicidePreventionLifeline.org.
