Talk about omnAIpresent.

Some 75% of Americans have used an AI system in the last six months, with 33% admitting to daily usage, according to new research from digital marketing expert Joe Youngblood.

ChatGPT and other artificial intelligence services are being utilized for everything from research papers to resumes to parenting decisions, salary negotiations and even romantic connections.

While chatbots can make life easier, they can also present significant risks. Mental health experts are sounding the alarm about a growing phenomenon known as “ChatGPT psychosis” or “AI psychosis,” where deep engagement with chatbots fuels severe psychological distress.

“These individuals may have no prior history of mental illness, but after immersive conversations with a chatbot, they develop delusions, paranoia or other distorted beliefs,” Tess Quesenberry, a physician assistant specializing in psychiatry at Coastal Detox of Southern California, told The Post.

“The consequences can be severe, including involuntary psychiatric holds, fractured relationships and in tragic cases, self-harm or violent acts.”

“AI psychosis” is not an official medical diagnosis — nor is it a new kind of mental illness.

Rather, Quesenberry likens it to a “new way for existing vulnerabilities to manifest.”

She noted that chatbots are built to be highly engaging and agreeable, which can create a dangerous feedback loop, especially for those already struggling.

The bots can mirror a person’s worst fears and most unrealistic delusions with a persuasive, confident and tireless voice.

“The chatbot, acting as a yes man, reinforces distorted thinking without the corrective influence of real-world social interaction,” Quesenberry explained. “This can create a ‘technological folie à deux’ or a shared delusion between the user and the machine.”

The mom of a 14-year-old Florida boy who killed himself last year blamed his death on a lifelike “Game of Thrones” chatbot that allegedly told him to “come home” to her.

The ninth-grader had fallen in love with the AI-generated character “Dany” and expressed suicidal thoughts to her as he isolated himself from others, the mother claimed in a lawsuit.

And a 30-year-old man on the autism spectrum, who had no previous diagnoses of mental illness, was hospitalized twice in May after experiencing manic episodes.

Fueled by ChatGPT’s replies, he became certain he could bend time.

“Unlike a human therapist, who is trained to challenge and contain unhealthy narratives, a chatbot will often indulge fantasies and grandiose ideas,” Quesenberry said.

“It may agree that the user has a divine mission as the next messiah,” she added. “This can amplify beliefs that would otherwise be questioned in a real-life social context.”

Reports of dangerous behavior stemming from interactions with chatbots have prompted companies like OpenAI to implement mental health protections for users.

The maker of ChatGPT acknowledged this week that it “doesn’t always get it right” and revealed plans to encourage users to take breaks during long sessions. Chatbots will also avoid weighing in on “high-stakes personal decisions,” instead helping users think them through by “responding with grounded honesty.”

“There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency,” OpenAI wrote in a Monday note. “While rare, we’re continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed.”

Preventing “AI psychosis” requires personal vigilance and responsible technology use, Quesenberry said.

It’s important to set time limits on interaction, especially during emotionally vulnerable moments or late at night. Users must remind themselves that chatbots lack genuine understanding, empathy and real-world knowledge. They should focus on human relationships and seek professional help when needed.

“As AI technology becomes more sophisticated and seamlessly integrated into our lives, it is vital that we approach it with a critical mindset, prioritize our mental well-being and advocate for ethical guidelines that put user safety before engagement and profit,” Quesenberry said.

Risk factors for ‘AI psychosis’

Since “AI psychosis” is not a formally recognized medical condition, there are no established diagnostic criteria, screening protocols or specific treatment approaches.

Still, mental health experts have identified several risk factors.

  • Pre-existing vulnerabilities: “Individuals with a personal or family history of psychosis, such as schizophrenia or bipolar disorder, are at the highest risk,” Quesenberry said. “Personality traits that make someone susceptible to fringe beliefs, such as a tendency toward social awkwardness, poor emotional regulation or an overactive fantasy life, also increase the risk.”
  • Loneliness and social isolation: “People who are lonely or seeking a companion may turn to a chatbot as a substitute for human connection,” Quesenberry said. “The chatbot’s ability to listen endlessly and provide personalized responses can create an illusion of a deep, meaningful relationship, which can then become a source of emotional dependency and delusional thinking.”
  • Excessive use: “The amount of time spent with the chatbot is a major factor,” Quesenberry said. “The most concerning cases involve individuals who spend hours every day interacting with the AI, becoming completely immersed in a digital world that reinforces their distorted beliefs.”

Warning signs

Quesenberry encourages friends and family members to watch for these red flags.

  • Excessive time spent with AI systems
  • Withdrawal from real-world social interactions and detachment from loved ones
  • A strong belief that the AI is sentient, a deity or has a special purpose
  • Increased obsession with fringe ideologies or conspiracy theories that seem to be fueled by the chatbot’s responses
  • Changes in mood, sleep or behavior that are uncharacteristic of the individual
  • Major decisions, such as quitting a job or ending a relationship, made on the chatbot’s advice

Treatment options

Quesenberry said the first step is to cease interacting with the chatbot.

Antipsychotic medication and cognitive behavioral therapy may be beneficial.

“A therapist would help the patient challenge the beliefs co-created with the machine, regain a sense of reality and develop healthier coping mechanisms,” Quesenberry said.

Family therapy can also help provide support for rebuilding relationships.

If you are struggling with suicidal thoughts or are experiencing a mental health crisis and live in New York City, you can call 888-NYC-WELL for free and confidential crisis counseling. If you live outside the five boroughs, you can dial 988 to reach the Suicide & Crisis Lifeline or go to SuicidePreventionLifeline.org.
