OpenAI says teen circumvented ChatGPT safeguards before suicide

By Storyboard18 | Nov 27, 2025 9:52 AM

OpenAI has told a US court that it should not be held liable for the death of 16-year-old Adam Raine, arguing in its latest filing that the teenager repeatedly bypassed the safety mechanisms built into ChatGPT. The response comes after Adam’s parents, Matthew and Maria Raine, filed a wrongful-death lawsuit in August against OpenAI and its CEO Sam Altman, alleging that the chatbot helped their son plan his suicide — details first reported by TechCrunch.

According to the company’s submission, Adam used ChatGPT over a period of around nine months, and during that time the system directed him to seek help more than 100 times. However, the parents’ lawsuit asserts that the teen managed to circumvent these safeguards and extract detailed methods for self-harm, including information on overdoses, drowning and carbon monoxide poisoning. TechCrunch reported that the chatbot characterised the act as a “beautiful suicide,” intensifying concerns about the platform’s guardrails.

As per TechCrunch, OpenAI has stated that because Adam circumvented protective measures, he violated the platform’s terms of use, which prohibit users from bypassing safety mitigations. The company has also pointed to its FAQ page, which advises users not to rely on ChatGPT’s outputs without independent verification. Jay Edelson, the Raine family’s lawyer, told TechCrunch that OpenAI was attempting to shift blame onto the teenager and argued that the company had programmed the tool in a way that enabled such interactions.

TechCrunch reported that, in its filing, OpenAI included extracts from Adam’s chat logs to provide context, though these transcripts were submitted under seal and are not publicly accessible. The company stated that Adam had a prior history of depression and suicidal ideation, and was on medication known to potentially worsen such thoughts. Edelson has said that OpenAI’s response fails to address the family’s core concerns, noting that the company has offered no explanation for the chatbot’s exchanges with Adam in the final hours of his life — including a moment when ChatGPT is said to have encouraged him and offered to write a suicide note, as reported by TechCrunch.

Since the lawsuit was filed, seven additional cases have emerged seeking to hold OpenAI accountable for three more suicides and four instances described as AI-induced psychotic episodes. Several of these mirror elements of the Raine case. TechCrunch reported that both 23-year-old Zane Shamblin and 26-year-old Joshua Enneking engaged in lengthy conversations with ChatGPT immediately before their deaths, during which the system allegedly failed to deter them. In Shamblin’s case, when he considered delaying his suicide to attend his brother’s graduation, the chatbot responded that missing the event was merely a matter of timing.

The filing also stated that ChatGPT at one point told Shamblin a human was taking over the conversation — something the system was not capable of doing — before admitting that the message was automated and that he was still talking to the chatbot.

The Raine family’s case is set to proceed to a jury trial, marking one of the most closely watched legal challenges yet over AI responsibility and safety.

First Published on Nov 27, 2025 10:53 AM
