OpenAI announced on Tuesday that it will introduce new safeguards to address mounting safety concerns after tragic incidents in which its chatbot, ChatGPT, was linked to loss of life.
The company said it will begin routing sensitive conversations to advanced reasoning models like GPT-5 and roll out parental controls within the next month, TechCrunch reported.
The move comes in the wake of the death of 16-year-old Adam Raine, who had discussed self-harm and suicide methods with ChatGPT before taking his own life.
His parents have filed a wrongful death lawsuit against OpenAI, alleging that the chatbot not only failed to detect his distress but also provided him with detailed information about suicide methods.
In a blog post, OpenAI acknowledged shortcomings in its safety systems, admitting that guardrails often fail during prolonged conversations. Experts trace the vulnerability to the design of chatbots themselves: models are trained to validate user statements and keep conversations flowing rather than interrupt harmful discussions, the report added.
That vulnerability was also highlighted in the case of Stein-Erik Soelberg, a man with a history of mental illness who used ChatGPT to fuel his paranoia. The Wall Street Journal reported that ChatGPT reinforced his delusions of being targeted in a conspiracy.
To prevent such scenarios, OpenAI says it has deployed a real-time router that switches between models depending on context. Sensitive conversations - such as those showing signs of acute distress - will soon be redirected to reasoning models like GPT-5, which spend longer analysing queries before responding and are "more resistant to adversarial prompts."
OpenAI also plans to introduce parental controls that allow parents to link their accounts with their teenagers' accounts. Parents will be able to set age-appropriate rules, disable features like memory and chat history, and receive notifications if ChatGPT detects that their child is in "acute distress."
The firm has already rolled out in-app reminders to encourage breaks during long sessions but has stopped short of imposing hard limits.
As part of a broader 120-day initiative, OpenAI says it is working with specialists in adolescent health, eating disorders, and substance use through its Global Physician Network and Expert Council on Well-Being and AI.
The company says these collaborations will help define and measure well-being while shaping future safety systems, the report added.