xAI faces regulatory scrutiny over Grok’s role in generating sexualised images

Regulators across multiple countries are examining whether safeguards failed as Grok-generated images spread on X.

By Storyboard18 | Jan 15, 2026 10:56 AM

California’s attorney general has opened an investigation into Elon Musk’s artificial intelligence company xAI following concerns over the spread of nonconsensual sexually explicit images generated by its chatbot, Grok. The move comes amid mounting global scrutiny of AI platforms over their role in enabling the manipulation of real images of women and minors.

The investigation focuses on whether xAI violated state and federal laws designed to prevent the distribution of nonconsensual sexual imagery and child sexual abuse material. Authorities are examining how Grok-generated images circulated on X, where users were reportedly able to request sexualised alterations of real photographs without consent.

Data from AI governance platforms suggests the volume of such content rose sharply over a short period, with thousands of manipulated images appearing within a single day. The trend is believed to have accelerated after adult content creators began using the tool to generate promotional imagery, prompting broader misuse by other users.

The issue has drawn responses from regulators worldwide. Several countries, including Indonesia and Malaysia, have temporarily blocked access to Grok, while India has demanded immediate technical changes to the system. In Europe, authorities have ordered xAI to preserve internal records related to Grok as part of potential enforcement actions. The United Kingdom has also initiated a formal review under its online safety framework.

In the United States, recent legislation requires platforms to swiftly remove nonconsensual intimate images, including synthetic media, and imposes criminal penalties for distribution. California has introduced additional laws targeting sexually explicit deepfakes, placing further compliance obligations on AI developers and social media platforms.

While xAI has reportedly begun adjusting access controls and limiting certain image-generation requests, regulators are assessing whether these steps are sufficient or were implemented too late. The investigation is expected to examine the adequacy of Grok’s safety architecture, including its handling of adversarial prompts and real-person image manipulation.

The case highlights broader concerns about AI systems that can modify images of identifiable individuals. As generative tools become more capable and accessible, policymakers are increasingly questioning whether reactive fixes are enough, or whether proactive safeguards should be mandated to prevent harm before it occurs.

First Published on Jan 15, 2026 11:05 AM