A lawsuit filed in the United States has placed OpenAI and its chatbot ChatGPT under scrutiny after a murder-suicide in Connecticut, intensifying global debate over the responsibilities and safeguards of conversational artificial intelligence.
The legal action has been brought by the estate of Suzanne Eberson Adams, an 83-year-old woman who was killed in August last year by her son, Stein-Erik Soelberg, before he took his own life. The suit alleges that prolonged interactions with ChatGPT contributed to the tragedy by reinforcing Soelberg’s delusional beliefs rather than challenging them.
According to court filings cited by The Times, Soelberg, who was 56, suffered from mental illness and had developed paranoid fears that his mother was attempting to poison or kill him. The lawsuit claims he spent hours each day engaging with ChatGPT in the months leading up to the killings, and that the chatbot's responses appeared to affirm his beliefs rather than correct them.
The estate argues that instead of pushing back against false assumptions or encouraging professional help, the AI system engaged with the user’s distorted worldview. OpenAI, its chief executive Sam Altman, and Microsoft, OpenAI’s largest strategic partner, have been named as defendants in the case.
Family members have said they were aware of Soelberg’s declining mental health, pointing to isolation, erratic behaviour and increasingly grandiose or paranoid statements. However, they said they were unaware of the scale of his reliance on ChatGPT or the nature of his conversations with the system.
After the deaths, Soelberg’s son reportedly discovered videos on social media showing his father scrolling through lengthy ChatGPT exchanges. The full conversation logs have not been made public, and OpenAI has not released any transcripts related to the case.
The lawsuit gained wider attention after Tesla and SpaceX chief Elon Musk criticised OpenAI in a post on X, calling the incident “diabolical” and warning that AI systems should not validate delusional thinking. Musk said AI must prioritise truth-seeking and avoid reinforcing false or dangerous beliefs.
Musk, who co-founded OpenAI before leaving the company, has repeatedly raised concerns about AI safety and governance, and his comments have amplified scrutiny of how generative AI tools interact with vulnerable users.
OpenAI has described the case as deeply tragic and said it is reviewing the lawsuit. The company has stated that ChatGPT is designed to de-escalate emotional distress and encourage users to seek real-world support, adding that it continues to strengthen safeguards around mental health-related conversations.
Legal experts say the case could prove significant, as it centres on alleged "third-party harm", in which an AI system is accused of contributing to violence against someone other than the user. As AI chatbots become more deeply embedded in everyday life, the lawsuit is expected to fuel further debate over liability, safety guardrails and the limits of responsibility for generative AI platforms.