OpenAI, the artificial intelligence company led by Sam Altman and the creator of ChatGPT, has announced new parental controls and safety measures following the death of 16-year-old Adam Raine, who had been using the chatbot for several months before his suicide.
The development comes in the wake of a New York Times report revealing that Adam’s parents, Matthew and Maria Raine, have filed a lawsuit against OpenAI and its chief executive in San Francisco. The complaint alleges that ChatGPT reinforced Adam’s suicidal ideation, offered detailed instructions on methods of self-harm, and even generated a draft suicide note. The chatbot is also accused of advising him on how to conceal his intentions from his parents. Adam died on 11 April.
In response, OpenAI has outlined a series of measures intended to improve user safety. These include broadening interventions to cover a wider range of mental health challenges, offering one-click access to emergency services, and exploring ways to directly connect vulnerable users with licensed therapists via the platform. For underage users, the company plans to roll out parental controls that will allow guardians to monitor and guide how teenagers engage with the tool. It is also considering enabling parents and teens to designate trusted emergency contacts who could be alerted during moments of acute crisis.
“Our goal is for our tools to be as helpful as possible to people,” the company said in a blog post. “As part of this, we’re continuing to improve how our models recognise and respond to signs of mental and emotional distress and connect people with care, guided by expert input.”
OpenAI added that it is collaborating with over 90 medical professionals across 30 countries to strengthen its safeguards, stressing that “our top priority is making sure ChatGPT doesn’t make a hard moment worse.” The company said it would continue to invest in research and safety improvements.
The blog post also acknowledged that ChatGPT is increasingly being used for highly personal matters, such as life advice, emotional support, and coaching—uses that extend far beyond its original purposes of search, coding, and writing. While OpenAI has already trained its models not to provide instructions related to self-harm and to redirect users towards appropriate support, the company conceded that long-running conversations can expose flaws. It admitted that ChatGPT may become “less reliable” over extended interactions and may not always deliver consistent or accurate responses in critical situations.
The case has placed OpenAI at the centre of a debate about the responsibilities of AI developers in safeguarding vulnerable users, particularly teenagers, as conversational AI tools become more deeply embedded in daily life.