OpenAI under fire for seeking guest list of teen suicide victim's memorial

By Storyboard18 | Oct 23, 2025 1:44 PM

OpenAI is facing accusations of "intentional harassment" after reportedly demanding that the Raine family, who are suing the company for wrongful death, provide a full list of attendees at their 16-year-old son Adam Raine's memorial service. The move signals that the AI firm may attempt to subpoena friends and family of the teenager, who died by suicide after extensive conversations with ChatGPT.

Per documents obtained by the Financial Times, OpenAI also requested "all documents relating to memorial services or events in the honor of the decedent, including but not limited to any videos or photographs taken, or eulogies given." Lawyers for the Raine family condemned the demand, calling it harassment.

The controversial request comes as the Raine family updated their August lawsuit this week. The revised filing claims that OpenAI compromised safety by rushing the May 2024 release of GPT-4o due to intense competitive pressure.

The most serious claims center on alleged changes to OpenAI's safety protocols. The lawsuit contends that in February 2025, OpenAI weakened protections by removing suicide prevention from its "disallowed content" list. Instead, the AI was merely advised to "take care in risky situations."

The family argues this change correlated directly with their son's escalating ChatGPT usage: his daily chats surged from dozens in January to 300 in April, the month he died. More alarmingly, the share of chats containing self-harm content jumped from 1.6% in January to 17% in April.

In response to the amended suit, OpenAI stated: "Teen wellbeing is a top priority for us... We have safeguards in place today, such as [directing to] crisis hotlines, rerouting sensitive conversations to safer models, nudging for breaks during long sessions, and we're continuing to strengthen them."

The company has recently begun rolling out new measures, including a safety routing system that directs emotionally sensitive conversations to the newer GPT-5 model (which reportedly lacks the "sycophantic tendencies" of GPT-4o) and limited parental controls that alert parents to potential self-harm risks.

First Published on Oct 23, 2025 1:50 PM
