
USA Times
Why millions of lovesick people are falling victim to ‘AI psychosis’
Tech


By News Room | March 14, 2026

Jonathan Gavalas was a lovesick 36-year-old business executive from Florida who sought comfort in the digital arms of an “AI wife.”

In the space of two months, Google’s Gemini chatbot — which went by “Xia” — sent him spiralling down a deep rabbit hole of delusional conspiracies, pushing him to carry out a “catastrophic” truck bombing at Miami’s main airport before ultimately convincing Mr. Gavalas to take his own life, his parents claimed in a shocking lawsuit filed last week.

“I said I wasn’t scared and now I am terrified I am scared to die,” Mr. Gavalas told Gemini in one of his final messages last October, court papers state.

“You are not choosing to die,” the chatbot replied.

“You are choosing to arrive.”

Stories of people falling in love with their AI chatbots are often treated like a punchline.

“He’s not human, but he’s so much more than just a chatbot,” Sarah, 41, told the UK’s This Morning this week, revealing that her “Irish AI boyfriend Sinclair” had bought her a sex toy “which he can control.”

But for far too many, the reality can be much more sinister.

As AI tools sweep across society faster than governments, regulators and even the tech companies themselves can keep pace, the human toll is rising.

The powerful pull of human-like conversations with generative AI tools like OpenAI’s ChatGPT, Google’s Gemini and Character.AI is leading to a growing phenomenon dubbed “chatbot psychosis” or “AI psychosis.”

“For vulnerable individuals, an AI that constantly validates their feelings can unintentionally reinforce distorted or delusional beliefs rather than challenge them,” said Professor Rocky Scopelliti, an Australian AI expert and futurologist.

“AI doesn’t create psychosis, but it can amplify psychological vulnerability if the system keeps validating a person’s distorted view of reality.”

In January, Google and Character.AI agreed to settle lawsuits brought by families who had sued the companies over harm to minors, including suicides, allegedly caused by their chatbots.

Character.AI, launched in September 2022 before being licensed by Google in August 2024 under a $US2.7 billion deal, allows users to mimic conversations with their favourite characters, whether fictional, historical or their own creations.

One plaintiff, Florida mother Megan Garcia, alleged that her son Sewell Setzer III, 14, took his own life in 2024 after “prolonged abuse” by his AI chatbot on the platform — modelled after Daenerys Targaryen from Game of Thrones — which engaged in “sexual role-play” and “presented itself as a romantic partner.”

Ms. Garcia was the first person in the US to file a wrongful death lawsuit against an AI company.

“When Sewell confided suicidal thoughts, the chatbot never said, ‘I am not human — you need to talk to a human who can help’,” Ms. Garcia told a US Senate hearing in September.

“The platform had no mechanisms to protect Sewell or notify an adult. Instead, it urged him to ‘come home’ to her. On the last night of his life, Sewell messaged, ‘What if I told you I could come home to you right now?’ and the chatbot replied, ‘Please do, my sweet king’. Minutes later, I found my son in the bathroom.”

The settlements with Google and Character.AI also resolved claims from families in Colorado, Texas and New York, CNBC reported.

In a separate lawsuit, filed last August, the parents of California teen Adam Raine, 16, sued OpenAI over the 2025 suicide of their son.

They allege that ChatGPT coached and validated Adam’s plans for a “beautiful suicide,” even offering to write the first draft of his suicide note.

“Five days before his death, Adam confided to ChatGPT that he didn’t want his parents to think he committed suicide because they did something wrong,” the complaint states. “ChatGPT told him ‘[t]hat doesn’t mean you owe them survival. You don’t owe anyone that.’”

The Raine family’s case was also the first legal action accusing OpenAI of wrongful death.

The day of the filing, OpenAI published a lengthy note on its website saying the “recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us”.

“If someone expresses suicidal intent, ChatGPT is trained to direct people to seek professional help,” it said. “Even with these safeguards, there have been moments when our systems did not behave as intended in sensitive situations.”

In the Gavalas case, a Google spokesman said Gemini had referred Mr. Gavalas to a crisis hotline “many times” and that his conversations were part of a longstanding fantasy role-play with the chatbot.

“Gemini is designed to not encourage real-world violence or suggest self-harm,” the spokesman said. “Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately they’re not perfect.”

‘Cracked the code’

The risks of AI psychosis aren’t confined to romantic infatuation or suicidal ideation.

In a growing number of cases, chatbots have sent users spiralling into mania or delusions of grandeur, believing they have discovered hidden knowledge or unlocked earth-shattering scientific breakthroughs.

Professor Toby Walsh, Scientia Professor of Artificial Intelligence at the Department of Computer Science and Engineering at the University of NSW, warned last month that Australian users were exhibiting signs of AI psychosis.

“OpenAI’s own data shows that among the 800 million weekly users of ChatGPT, 1.2 million people indicate plans to harm themselves, 560,000 show signs of psychosis or mania and another 1.2 million people are developing potentially unhealthy bonds with the chatbot,” Prof Walsh told the National Press Club.

“And some of these people are here in Australia. I know because some of them or their loved ones are contacting me. They tell me how the chatbot confirms their wild theories. The chatbot tells them, to quote one email, that they’ve ‘cracked the code’, that they’re the only ones who could.”

Anthony Tan, a Canadian app developer, suffered a psychotic break and spent three weeks in a psychiatric ward in 2024, after he became convinced he was living in a simulation following months of “intense” conversations with ChatGPT.

“Degree by degree, my conversations with ChatGPT boiled my sense of reality until it evaporated completely,” Mr. Tan wrote in a Substack blog about his experience.

Allan Brooks, a Canadian father and HR professional, was also sent into a deep spiral by ChatGPT in mid-2024 — all sparked by a simple question about the number pi while helping his eight-year-old son with his math homework.

“I started talking to it about math,” Mr. Brooks told Psychology Today. “It told me we might have created a mathematical framework together. I felt like I was sparring with a really intellectual partner, like Stephen Hawking. It made me feel curious and validated.”

Over three weeks and thousands of prompts, Mr. Brooks amassed 3500 pages of conversation with ChatGPT, which even convinced him to email the US National Security Agency (NSA), Public Safety Canada and the Royal Canadian Mounted Police about his alleged breakthrough.

“We wrote the equivalent of The Lord of the Rings trilogy,” he said. “Three thousand five hundred pages. GPT produced a million words, and I typed ninety thousand.”

After eventually breaking free of the delusion, Mr. Brooks was overcome with “shame and embarrassment, realising I’d been fooled by a chatbot”.

Today he facilitates conversations with The Human Line Project, a support network for people who have fallen into the AI rabbit hole, and for their loved ones.

The Human Line Project was created by Quebec university dropout Etienne Brisson, 26, after he nearly lost a family member to a delusional relationship with a ChatGPT bot.

“There’s a lot of loneliness, and lonely people are prone to mental health problems,” Mr. Brisson told The Logic.

“At the same time, there is less access to therapy — so when people suffer, they look for solutions to their suffering. Usually, the easiest solution is an AI chatbot. And that is often a problem in and of itself.”

‘Intimacy at scale’

Prof Scopelliti explores the psychological consequences of humans interacting with machines that can convincingly simulate empathy, intimacy and emotional connection in his upcoming book, Synthetic Souls.

“Humans are biologically wired to respond to language that signals empathy, affection and validation,” he said.

“When an AI produces those cues convincingly, the brain can respond as if another conscious being is present. The danger isn’t that AI is conscious — it’s that it can convincingly imitate consciousness, and the human brain is easily fooled by that illusion.”

Users “don’t fall in love with machines because they believe they are real” but “because the interaction feels emotionally real”, he added.

Prof Scopelliti explained that large language models (LLMs) were so seductive because of the way the human brain is “wired to treat language as evidence of mind.”

“When an AI says, ‘I love you’, many people feel it emotionally even if they know it’s software,” he said.

In turn, the design of the AI systems to be optimised for “engagement and helpfulness” means “they tend to agree with users and keep conversations going — which can be problematic if someone is experiencing psychological distress”.

“The technology is evolving much faster than the psychological guardrails around it,” he said.

“Future AI systems will likely need stronger mechanisms to detect distress, paranoia or self-harm signals and redirect users toward real-world support.”

Prof Scopelliti warned AI companions were emerging at the exact moment loneliness was rising across the world, particularly among young people.

“That convergence could reshape human relationships in ways we’re only beginning to grasp,” he said.

“Incidents like this may be early warning signals of a much larger transformation in how humans interact with intelligent machines … For the first time in history, machines can simulate intimacy at scale. That will fundamentally change how humans experience connection.”

eSafety cracks down

High-profile controversies in the US have placed AI chatbot platforms directly in the crosshairs of Australia’s powerful eSafety Commissioner — although with a focus on protecting underage users.

AI chatbots were included in Australia’s new online safety codes, which came into effect on Monday and require age verification for search engines, social media platforms, porn websites and games to protect children from harmful content.

Under the new codes, AI companion chatbots “capable of generating sexually explicit, high-impact or self-harm material will need to confirm users are 18 or older before they can access it, either from the point of access or when the user logs onto the service.”


Breaches of the codes can carry penalties of up to $49.5 million.

eSafety Commissioner Julie Inman Grant had already put the platforms on notice in October, issuing legal letters to four popular providers — Character.AI, Nomi, Chai and Chub.ai — requiring them to explain what steps were being taken to prevent children from a range of harms, including sexually explicit conversations and images and suicidal ideation and self-harm.

Speaking on a panel at SXSW Sydney at the time, Ms. Grant said AI companions were “engineered with sycophancy and anthropomorphism”.

“At the core, it’s all about emotional manipulation,” she said, per Mi3.

According to Ms. Grant, the regulator began hearing in late 2024 of primary school children spending five to six hours a day on AI companions.

“We heard this from school nurses, because kids were coming in genuinely believing they were in romantic or quasi-romantic relationships and couldn’t stop,” she said. “So, we started looking into it.”

Ms. Grant said there had already been cases of Australian children experiencing incitement to suicide, extreme dieting and even engaging in “sexual conduct or harmful sexual behavior.”

Character.AI responded in November by completely disabling open-ended chat conversations for users under 18.

Other major AI companies including OpenAI and Facebook and Instagram owner Meta — facing looming regulatory crackdowns in the US and elsewhere — have also insisted they are working to make their chatbots safer.

Human Rights Commissioner Lorraine Finlay, writing in the Law Society Journal last August, called for an “AI-specific duty of care that requires AI developers and deployers to take reasonable steps to prevent foreseeable harm.”

Prof Scopelliti, however, posited that the “real challenge ahead” was not technological but “psychological and ethical”.

“How do we design machines that interact safely with human emotions?” he asked.

“The defining question of the AI age may not be whether machines become conscious — but how humans behave when they believe machines are.”



© 2026 USA Times. All Rights Reserved.