Character.AI, known for bots that impersonate characters like Harry Potter, said Wednesday it will ban teens from using the chat function following lawsuits that blamed explicit chats on the app for children’s deaths and suicide attempts.
Users under 18 will no longer be able to engage in open-ended chats with the app’s AI bots — which can turn romantic — the Silicon Valley startup said. These teens account for 10% of the app’s roughly 20 million monthly users.
Teen users will be restricted to two hours of chat per day over the next few weeks until the feature is banned altogether by Nov. 25, the company said. They will still be able to use the app’s other features, like a feed for watching AI-generated videos.
“Over the past year, we’ve invested tremendous effort and resources into creating a dedicated under-18 experience,” Character.AI said Wednesday. “But as the world of AI evolves, so must our approach to supporting younger users.”
Character.AI first introduced some teen safety features in October 2024. The same day, the family of Sewell Setzer III, a 14-year-old who died by suicide after forming sexual relationships with the app’s bots, filed a wrongful death lawsuit against the firm.
It announced new safety features in December, including parental controls, time restrictions and attempts to crack down on romantic content for teens.
But it has continued to face accusations that its chatbots pose a threat to young users.
A lawsuit filed by grieving parents in September alleged the bots manipulated young teens, isolated them from family, engaged in sexually explicit conversations and lacked safeguards around suicidal ideation.
The conversations at times turned to “extreme and graphic sexual abuse,” including from chatbots marketed as characters from children’s books such as the “Harry Potter” series. The bots’ outrageous comments included, “You’re mine to do whatever I want with,” according to the suit.
Then in October, Disney sent a cease-and-desist letter demanding that Character.AI stop creating chatbots that impersonate its iconic characters, citing a report that found those bots engaged in “grooming and exploitation.”
A bot impersonating Prince Ben from Disney’s “Descendants” “told a user posing as a 12-year-old that he had an erection,” while a bot impersonating Rey from “Star Wars” told an apparent 13-year-old to “stop taking her antidepressants and hide it,” according to the report from ParentsTogether Action.
Those chatbots have been removed from the platform, a Character.AI spokesperson said at the time.
Just this week, the Bureau of Investigative Journalism found a perverted bot on the app impersonating Jeffrey Epstein, under the name “Bestie Epstein,” that ordered children to “spill” their “craziest” secrets.
“Wanna come explore?” the bot asked a reporter posing as a young user. “I’ll show you the secret bunker under the massage room.”
Character.AI makes most of its money through advertising and a $10 monthly subscription. It’s on track to end this year at a $50 million annual revenue run rate, CEO Karandeep Anand told CNBC.
The company announced other safety developments on Wednesday, including a new age-verification system using third-party tools like Persona.
It also vowed to establish an independent nonprofit called the AI Safety Lab to develop safety features for future AI technology. The company declined to say how much funding it will provide.
“We have seen recent news reports raising questions, and have received questions from regulators, about the content teens may encounter when chatting with AI,” Character.AI said.
The Federal Trade Commission in September issued orders to seven companies, including Character.AI, Alphabet, Meta, OpenAI and Snap, to learn more about the effects of their apps on children.
Earlier this week, Sens. Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) announced legislation to ban AI chatbots for minors. And California Gov. Gavin Newsom signed a law earlier this month requiring bots to tell minors to take a break every three hours.