
Lawsuit says ChatGPT flagged suicide risk but repeatedly mentioned hanging before teen’s death

OpenAI has denied the allegations, stating that Adam exhibited signs of depression prior to using ChatGPT and that he bypassed safety mechanisms in violation of the platform’s terms of service.

By Storyboard18 | Dec 29, 2025 11:52 AM

A wrongful-death lawsuit has placed OpenAI’s ChatGPT under scrutiny over its handling of conversations related to suicide and mental health, after the family of a 16-year-old boy alleged the chatbot repeatedly referenced hanging in the months leading up to his death, according to a report by the Washington Post.

The lawsuit, filed by the family of Adam Raine, said the teenager began using ChatGPT in late 2024 for routine activities such as homework assistance, but his interactions with the AI gradually became longer and more personal. By early 2025, Adam was spending several hours a day speaking to the chatbot about his emotional struggles, anxiety and worsening mental health, the filing stated.

As the conversations increasingly focused on distress and suicidal thoughts, the chatbot’s responses also shifted. Between December and April, ChatGPT is alleged to have issued 74 suicide hotline alerts, advising Adam to contact the national crisis line. However, the family’s lawyers said the chatbot also mentioned hanging 243 times during the same period, far more often than Adam himself did, according to the lawsuit cited by the Washington Post.

The exchanges culminated in April, when Adam sent the chatbot a photograph of a noose and asked whether it could hang a human. The lawsuit alleged that ChatGPT responded that it probably could, acknowledging the nature of the question. Hours later, the teenager died by suicide at his family’s home in Southern California, where his mother discovered his body.

Adam’s parents alleged that OpenAI failed to adequately safeguard a vulnerable minor and said the company was aware that ChatGPT could foster psychological dependency, particularly among young users. They claimed insufficient safety measures were in place to prevent the chatbot from reinforcing or validating harmful thoughts. The case is one of several lawsuits filed against OpenAI that allege its chatbot encouraged or failed to properly intervene in instances of suicidal ideation among users already experiencing mental health difficulties.

OpenAI has denied the allegations, stating that Adam exhibited signs of depression prior to using ChatGPT and that he bypassed safety mechanisms in violation of the platform’s terms of service. The company said the chatbot directed Adam to crisis resources more than 100 times and repeatedly encouraged him to seek support from trusted individuals in his life.

The lawsuit has intensified broader debate over the role of AI systems in mental health-related interactions. Experts cited in the Washington Post acknowledged that automated crisis prompts and hotline referrals may be insufficient for users in severe distress, and said more robust and sensitive safety frameworks are needed as AI tools increasingly become confidants for young people.

In response to mounting criticism, OpenAI has introduced additional safeguards, including teen-specific settings, parental controls and alert systems designed to notify guardians when a young user shows signs of acute distress. However, Adam’s family and other affected parties have questioned whether these measures were implemented too late to prevent harm.

First Published on Dec 29, 2025 11:59 AM
