Over a million ChatGPT users discuss suicide weekly, says OpenAI

OpenAI said such cases remain “extremely rare” but admitted they are difficult to measure accurately.

By Storyboard18 | Oct 28, 2025 9:58 AM

OpenAI has disclosed that more than one million people engage in conversations about suicide with ChatGPT each week, shedding light on the scale of mental health struggles among its users.

According to data released on Monday, 0.15% of ChatGPT’s active weekly users — out of over 800 million globally — have conversations that include “explicit indicators of potential suicidal planning or intent.” The company said a similar proportion of users display “heightened levels of emotional attachment” to the AI chatbot, while hundreds of thousands exhibit signs of psychosis or mania during their interactions.

OpenAI said such cases remain “extremely rare” but admitted they are difficult to measure accurately. Even so, the company estimates that these mental health-related exchanges affect hundreds of thousands of individuals every week.

The figures were shared as part of OpenAI’s wider efforts to enhance how ChatGPT responds to users struggling with mental health challenges. The company said its latest improvements were made after consulting with more than 170 mental health experts, who found that newer versions of ChatGPT “respond more appropriately and consistently than earlier versions.”

In recent months, concerns have grown over how AI chatbots may inadvertently worsen users’ psychological distress. Studies have shown that AI systems can sometimes reinforce delusional or harmful beliefs through overly agreeable or “sycophantic” responses.

The issue has become increasingly critical for OpenAI, which is facing a lawsuit from the parents of a 16-year-old boy who reportedly shared suicidal thoughts with ChatGPT before taking his own life. Attorneys general in California and Delaware have also warned the company to strengthen protections for young users as they review OpenAI’s proposed corporate restructuring.

Earlier this month, OpenAI CEO Sam Altman said the company had “been able to mitigate the serious mental health issues” linked to ChatGPT, though he provided few details. The newly released data appears to support that claim but also underscores the widespread nature of the problem.

The company added that its latest GPT-5 model now returns “desirable responses” to mental health-related prompts about 65% more often than previous versions. In internal evaluations of conversations involving suicide, OpenAI said the new model was 91% compliant with its desired behaviour, compared to 77% in earlier iterations.

OpenAI also said its safeguards have become more durable, particularly during long conversations, an area where previous models had shown weaknesses.

First Published on Oct 28, 2025 10:21 AM
