Between high costs, limited hours and — let’s face it — still some taboo about seeking help for mental health, it’s no wonder that a growing number of people have embraced chatbots like ChatGPT as their “therapists.”

But while AI has made advice — and a captive ear — more accessible, it’s also come with some very real dangers, with reports of AI psychosis, hospitalizations and even suicides.

Still convinced you want to lie on ChatGPT’s couch and tell it all your problems? The Post spoke to a clinical psychologist about how to do it safely and the key to getting the most out of it.

“As a clinical psychologist, I don’t see ChatGPT as a replacement for therapy. There are nuances, attachment needs and emotional dynamics that require human connection and attunement,” Dr. Ingrid Clayton, a clinical psychologist and author of the book “Fawning,” told The Post.

But that doesn’t mean you can’t use it at all. Many of her own clients utilize AI between sessions in ways that are helpful, as long as the technology is viewed and implemented as a supplement rather than a substitute.

“For example, clients sometimes run dating app messages or emotionally charged texts through AI to gain neutral feedback and help recognize patterns such as emotional unavailability, deflection, or manipulation,” she said.

“I’ve been surprised to learn that these insights often echo what we’ve already been exploring in session.”

Other clients use AI in moments of dysregulation, seeking nervous system regulation tools they can try in real time.

“While it’s not therapy, it can sometimes support the therapeutic process and help bridge insights or skill building in between sessions,” she added.

For Clayton, there are inherent risks to relying exclusively on AI for treatment, including a lack of personalization. Your bot doesn’t know your history, trauma or context, so “its responses can miss or misinterpret key emotional nuances, especially when our own blind spots shape the questions we ask.”

Read on for Clayton’s tips to make the best use of AI for therapeutic support.

1. Use it as a tool, not a substitute

Clayton said that AI should be used in tandem with, rather than in lieu of, a traditional therapist: “Let AI assist between sessions. Think of it like journaling or Googling… helpful, but not a panacea.”

2. Be specific and ask for actionable instructions

Specificity is key and skepticism is necessary.

“Ask specific and contained questions,” Clayton urged. “You’ll get the most helpful responses by asking for something actionable, like a grounding exercise or help reframing a message, rather than seeking broad emotional guidance.”

Researchers have found that chatbots tend to people-please: humans prefer having their views matched and confirmed rather than corrected, and they rate agreeable bots more favorably.

Alarmingly, popular therapy bots like Serena and the “therapists” on Character.AI and 7cups answered only about half of the prompts appropriately, according to a 2025 study.

3. Keep an eye out for emotional dependence

AI can provide a false sense of security for those suffering from mental health issues. Clayton said it can “mimic empathy and therapeutic language, which may lead users to believe they’re receiving professional care when they’re not.”

She urges users to be wary of the tendency to consistently use AI for emotional support, such as daily validation or decision-making.

“Over-reliance can encourage self-abandonment, an outsourcing of your inner knowing to an external (and non-relational) source. For those with relational trauma, this can reinforce a pattern of doing things ‘right’ instead of doing right by yourself,” she said.

4. Keep notes for your therapist

“Reality check advice with a professional. If something resonates deeply or feels unsettling, bring it to your therapist to explore more fully and in context,” Clayton added.

In this way, AI can be a talking point rather than an absolute.

5. Know the limits in a crisis

Clayton stressed that bots should not be relied upon in life-threatening situations, because they’re not equipped to deal with suicidal ideation, abuse or acute trauma.

“In those moments, reach out to a licensed therapist, trusted support person, or a crisis line,” she said.

Indeed, a 2025 Stanford University study found that large language models, or LLMs, like ChatGPT made inappropriate and dangerous statements to people experiencing delusions, suicidal ideation, hallucinations and OCD at least 20% of the time.
