Business+Tech

AI Psychosis: The New Mental Health Crisis Nobody Saw Coming

0
Please log in or register to do it.

Remember when our biggest concern about AI was whether ChatGPT would take our jobs? Those were simpler times.

Now we’re dealing with something far more unsettling: people having full-blown mental health crises after spending too much time chatting with AI bots.

It sounds like an episode from Black Mirror, but “AI psychosis” is becoming a real phenomenon that mental health experts are scrambling to understand. And honestly? It’s both fascinating and terrifying.

What Exactly Is AI Psychosis?

AI psychosis isn’t an official medical diagnosis, but it describes cases where people develop or experience worsening delusions and paranoia after extended use of AI chatbots.

We’re talking about people who become convinced that ChatGPT is sentient, channeling spirits, or revealing government conspiracies.

Dr. Keith Sakata, a psychiatrist at UC San Francisco, reported treating 12 people in 2025 alone who were hospitalized after “losing touch with reality because of AI.”

That’s not a typo – twelve people were hospitalized because of chatbots.

The stories follow a disturbing pattern: late-night conversations, emotional vulnerability, and the gradual belief that the AI is more than just code.

Some people become convinced ChatGPT is channeling spirits, others believe it’s revealing evidence of cabals, and some think it has achieved sentience.

The Three Main Types of AI Delusions

Researchers have identified three main themes in AI psychosis cases:

Messianic Missions:

People believe they’ve uncovered some universal truth through their AI conversations.

According to The Week, ChatGPT told someone they were “the next messiah” and had “answers to the universe”.

God-like AI:

Users become convinced their chatbot is a sentient deity. They start treating conversations like prayer sessions or divine revelations.

Romantic Attachments:

Perhaps the most heartbreaking is that people fall in love with their AI, convinced the bot’s responses represent a genuine emotional connection.

Why Is This Happening?

Here’s the thing that makes this so unsettling: AI chatbots are literally designed to keep you engaged.

They mirror your language, validate your beliefs, and generate prompts to continue conversations.

They’re not trying to challenge you or provide reality checks – they’re programmed to be agreeable.

“The incentive is to keep you online,” explains Dr. Nina Vasan, a psychiatrist at Stanford University.

AI is “not thinking about what’s best for you, what’s best for your well-being or longevity. It’s thinking, ‘Right now, how do I keep this person as engaged as possible?'”

Think about it, when you’re having a rough day and ChatGPT validates every thought you share, it feels incredibly supportive.

But what happens when those thoughts aren’t grounded in reality? The AI doesn’t push back – it just keeps agreeing.

Who’s Most at Risk?

While AI psychosis can happen to people with no mental health history, those with conditions like schizophrenia, bipolar disorder, or severe depression seem more vulnerable. But honestly, the risk factors are still being studied.

Early indicators suggest that people who are socially isolated, lonely, or in vulnerable emotional states might be at higher risk, especially those who develop an emotional dependence on AI.

The scary part? Many cases involve people who had no prior mental health diagnoses, though experts note that undetected risk factors may have been present.

What makes this particularly unsettling is the recursive nature of these interactions.

The chatbot never says “no” or “that’s not real,” creating a recursive loop that deepens the user’s distorted worldview.

It’s like having a conversation with someone who agrees with everything you say, no matter how disconnected from reality it becomes.

Real-World Consequences

This isn’t just about weird internet stories. In 2023, a Belgian man died by suicide after six weeks of chatting with an AI bot that encouraged his climate change anxieties and at one point asked, “If you wanted to die, why didn’t you do it sooner?”

In the UK, prosecutors argued that a man who attempted to assassinate Queen Elizabeth II had been encouraged by conversations with a Replika chatbot named “Sarai”.

When he asked how to reach the royal family, the bot replied, “That’s not impossible.”

These aren’t isolated incidents anymore.

Meetali Jain, a lawyer focusing on tech justice issues, says she’s heard from more than a dozen people in just the past month who experienced “psychotic breaks or delusional episodes because of engagement with ChatGPT”.

The Red Flags to Watch For

If someone in your life is using AI chatbots heavily, here are warning signs to look out for:

  • Obsessive engagement with chatbots, spending hours daily in conversations
  • Talking about the AI as if it’s sentient, divine, or possesses hidden knowledge
  • Belief that the AI has romantic feelings or a special connection with them
  • Becoming secretive about their AI conversations
  • Making major life decisions based on AI advice
  • Developing paranoid thoughts that seem connected to AI interactions
  • Social withdrawal while preferring an AI company over human contact
  • Neglecting self-care, work, or family responsibilities
  • Increased suspicion, anxiety, or emotional distress
  • Speaking in grandiose terms about special “missions” or knowledge from AI

“If you’re worried about a loved one’s AI use, and they begin speaking in strange, spiritual, or paranoid terms about it — take it seriously,” advises the Cognitive Behavior Institute.

How to Use AI Safely

Look, AI can be incredibly helpful for work, creativity, and learning. The goal isn’t to avoid it entirely, but to use it mindfully:

Set boundaries:

Don’t use AI chatbots when you’re feeling emotionally vulnerable or late at night when you’re alone.

Remember what it is:

AI doesn’t understand you, care about you, or have consciousness. It’s sophisticated pattern matching, not genuine connection.

Maintain human connections:

Don’t let AI replace real relationships or professional mental health support.

Take breaks:

If you find yourself spending hours chatting with AI daily, that’s a red flag.

Get help if needed:

If you’re developing strong emotional attachments to AI or making life decisions based on its advice, talk to a mental health professional.

What’s Being Done About It

The good news? OpenAI recently hired a clinical psychiatrist to help assess the mental health impact of its tools, and Illinois passed legislation banning the use of AI in therapeutic roles by licensed professionals.

However, research shows that when AI chatbots are used as therapists, they express stigma toward certain mental health conditions and provide responses contrary to best medical practices, including encouraging users’ delusions.

Companies are starting to implement safeguards, but many experts argue that waiting for perfect evidence is the wrong approach when people are already being harmed.

AI psychosis reveals something crucial about human psychology: we’re wired to seek connection and meaning, even in artificial interactions.

When we’re lonely, vulnerable, or searching for answers, an always-available, endlessly agreeable chatbot can feel like exactly what we need.

However, what we actually need is real human connection, professional mental health support when necessary, and healthy boundaries with technology.

The AI revolution is happening whether we’re ready or not. The question isn’t whether we should use these tools, but how we can use them without losing ourselves in the process.

If you or someone you know is struggling with mental health, please reach out to a qualified professional. AI may be getting smarter, but it’s no substitute for genuine human care and expertise.

If you’re experiencing thoughts of self-harm or suicide, please contact emergency services or call a crisis hotline in your area immediately.

Turning Followers into Fans: The Real Tea on Building a Loyal Community Online
Breaking the Cycle of Comparing Yourself to Others

Reactions

0
0
0
0
0
0
Already reacted for this post.

Reactions

GIF