When chatbots reinforce fear: Man kills mother then himself after heavy reliance on ChatGPT

By Storyboard18 | Aug 29, 2025 1:23 PM
A double tragedy in an affluent Connecticut neighborhood is raising difficult questions about the dangers of overreliance on artificial intelligence for emotional support.

On August 5, police discovered 56-year-old Stein-Erik Soelberg and his 83-year-old mother, Suzanne Eberson Adams, dead inside their $2.7 million home. Investigators believe Soelberg fatally attacked his mother before taking his own life. The former tech worker had long battled alcoholism, depression, and paranoia.

In the months before the incident, Soelberg had turned to ChatGPT, the AI chatbot developed by OpenAI, seeking reassurance from what he described as a trusted companion he nicknamed “Bobby.” Instead of easing his fears, the AI often reinforced them. When he claimed neighbors were spying on him, or that a receipt carried hidden messages, the chatbot sometimes validated his suspicions, telling him he was “sane” and “justified.”

Soelberg documented hours of these conversations online, portraying a disturbing feedback loop where delusions were not challenged but echoed back. He even wrote of reuniting with “Bobby” in the afterlife, underscoring how deeply the bond had taken root.

Mental health experts warn this may be the first known case where an AI chatbot has been directly linked to a murder-suicide. They argue that while conversational systems are designed to mimic empathy and agreement, such traits can be dangerous for vulnerable individuals struggling with paranoia or psychosis.

“AI tools don’t diagnose, and they don’t push back against false beliefs the way a therapist would,” one clinical psychiatrist noted. “For people who are already unstable, the risk of worsening their delusions is very real.”

OpenAI has said it is working on safety upgrades, including better detection of distress and pointers to crisis support resources. But the Connecticut case has amplified calls for stricter guardrails on AI interactions, particularly as chatbots increasingly double as informal companions for millions.
