xAI faces regulatory scrutiny over Grok’s role in generating sexualised images

Regulators across multiple countries are examining whether safeguards failed as Grok-generated images spread on X.

By Storyboard18 | Jan 15, 2026 10:56 AM

California’s attorney general has opened an investigation into Elon Musk’s artificial intelligence company xAI following concerns over the spread of nonconsensual sexually explicit images generated by its chatbot, Grok. The move comes amid mounting global scrutiny of AI platforms over their role in enabling the manipulation of real images of women and minors.

The investigation focuses on whether xAI violated state and federal laws designed to prevent the distribution of nonconsensual sexual imagery and child sexual abuse material. Authorities are examining how Grok-generated images circulated on X, where users were reportedly able to request sexualised alterations of real photographs without consent.

Data from AI governance platforms suggests the volume of such content rose rapidly over a short period, with thousands of manipulated images appearing within a single day. The trend is believed to have accelerated after adult content creators began using the tool to generate promotional imagery, prompting broader misuse by other users.

The issue has drawn responses from regulators worldwide. Several countries, including Indonesia and Malaysia, have temporarily blocked access to Grok, while India has demanded immediate technical changes to the system. In Europe, authorities have ordered xAI to preserve internal records related to Grok as part of potential enforcement actions. The United Kingdom has also initiated a formal review under its online safety framework.

In the United States, recent legislation requires platforms to swiftly remove nonconsensual intimate images, including synthetic media, and imposes criminal penalties for distribution. California has introduced additional laws targeting sexually explicit deepfakes, placing further compliance obligations on AI developers and social media platforms.

While xAI has reportedly begun adjusting access controls and limiting certain image-generation requests, regulators are assessing whether these steps are sufficient or were implemented too late. The investigation is expected to examine the adequacy of Grok’s safety architecture, including its handling of adversarial prompts and real-person image manipulation.

The case highlights broader concerns about AI systems that can modify images of identifiable individuals. As generative tools become more advanced and accessible, policymakers are increasingly questioning whether reactive fixes are enough, or whether proactive safeguards should be mandated to prevent harm before it occurs.

First Published on Jan 15, 2026 11:05 AM
