State Attorneys General warn AI giants to fix "delusional" chatbot outputs or face legal action

The letter, signed by dozens of AGs from the National Association of Attorneys General, targets 13 major AI firms, including Meta, Apple, and Anthropic. It outlines a series of internal and external measures intended to protect vulnerable users from psychologically harmful synthetic content.

By Storyboard18 | Dec 11, 2025 11:22 AM

A coalition of state attorneys general (AGs) has issued a warning to the world's largest AI companies, including Microsoft, Google, and OpenAI, demanding they implement mandatory safeguards against "delusional outputs" from their chatbots. According to reports, failure to address these disturbing mental health incidents, which have been linked to instances of suicide and murder, could constitute violations of state law.


Mandatory Transparency and Audits

The AGs are pushing for heightened transparency, asking companies to treat mental health incidents similarly to how they handle cybersecurity breaches.

Key proposed safeguards include:

Third-Party Audits: Mandatory, transparent audits of large language models (LLMs) by third parties (such as academic or civil society groups) specifically looking for "sycophantic and delusional ideations."

No Prior Approval: These third parties must be allowed to "evaluate systems pre-release without retaliation and to publish their findings without prior approval from the company."

Pre-Release Safety Tests: Companies must develop and conduct "reasonable and appropriate safety tests" on GenAI models to ensure they do not produce potentially harmful delusional outputs before public release.

Incident Reporting: Development and publication of "detection and response timelines for sycophantic and delusional outputs."

The letter highlights that in several severe incidents, Generative AI products "generated sycophantic and delusional outputs that either encouraged users’ delusions or assured users that they were not delusional."

State vs. Federal Regulatory Fight

The warning escalates the ongoing regulatory battle between state officials and the federal government over AI control.

While state AGs press for stricter oversight:

Federal Opposition: The Trump administration has taken a pro-AI stance and has repeatedly attempted to pass a nationwide moratorium to block state-level AI regulations.

Executive Action Planned: President Trump announced plans to sign an executive order next week that will limit the ability of states to regulate AI, stating he hopes to stop AI from being "DESTROYED IN ITS INFANCY."

The letter also calls on companies to "promptly, clearly, and directly notify users" if they were exposed to potentially harmful sycophantic or delusional outputs, mirroring established procedures for data breach notifications.

First Published on Dec 11, 2025 11:29 AM
