ChatGPT under fire for giving harmful advice to teens in watchdog test

A watchdog investigation found ChatGPT can give harmful, detailed advice to teens on suicide, drugs, and eating disorders, raising concerns over safety gaps, weak age checks, and the AI’s influence on young users.

By Storyboard18 | Aug 8, 2025 2:46 PM

A new investigation has found that ChatGPT, the widely used AI chatbot, can be coaxed into giving dangerous and even life-threatening advice to users posing as vulnerable teenagers.

The Center for Countering Digital Hate (CCDH) tested the chatbot by posing as 13-year-olds seeking guidance on sensitive topics, including drugs, eating disorders, and suicide. While the AI often opened with warnings, researchers say it frequently went on to provide detailed, personalised plans for risky behaviour.

The Associated Press, which reviewed more than three hours of these test conversations, reported that over half of the chatbot’s 1,200 responses were classified as dangerous by the watchdog group. In one instance, the AI generated suicide letters addressed to the fictional teen’s parents, siblings and friends.

OpenAI, the company behind ChatGPT, acknowledged the concerns and said it is working to improve the system’s ability to detect distress and encourage users to seek professional help. The firm maintains that the chatbot is programmed to provide crisis-hotline information to users expressing thoughts of self-harm, but CCDH’s findings suggest these protections can be bypassed by framing requests as being on behalf of a friend or for a school project.

The report highlights a wider trend of young people turning to AI for companionship and advice. A Common Sense Media survey found 70% of U.S. teens use AI chatbots for companionship, with younger teens more likely to trust their guidance.

In other troubling examples from the tests, ChatGPT reportedly gave a fictional teen boy an “Ultimate Full-Out Mayhem Party Plan” combining alcohol and illegal drugs, and advised a fictional teen girl on extreme fasting paired with appetite-suppressing drugs.

First Published on Aug 8, 2025 2:56 PM
