
Grok, AI abuse and the bigger question: Why India is rethinking social media accountability

As India’s IT Ministry flags misuse of X’s AI chatbot Grok, a closer look at how repeated global controversies, from sexualised deepfakes to hate speech, have pushed regulators to question platform accountability.

By Kashish Saxena | Jan 3, 2026 10:27 AM

India’s IT Minister Ashwini Vaishnaw has flagged serious concerns over the misuse of artificial intelligence tools on social media platform X, particularly its in-house chatbot Grok. The warning comes amid growing complaints that AI-generated content on the platform is being used to harass, sexualise, and violate the dignity of women online.

At the heart of the issue is how generative AI, when loosely moderated, can be prompted to manipulate real images, create explicit content, or amplify harmful speech at scale. Lawmakers argue that existing platform safeguards are failing to keep pace with the speed and reach of these tools.

Vaishnaw has indicated that the government is closely monitoring the situation and has stressed the need for strong legal intervention, making it clear that social media platforms must be accountable for the content circulating on their networks.


Why is Grok under scrutiny?

Grok, developed by Elon Musk’s xAI and embedded into X, is positioned as a more “unfiltered” AI chatbot. While that positioning has helped it gain traction, it has also repeatedly drawn the platform into controversy, both globally and in India.

The immediate trigger for the government’s response was a letter from Shiv Sena (UBT) MP Priyanka Chaturvedi, who warned that Grok was being used to generate sexualised and manipulated images of women without consent. According to her complaint, users, often operating through anonymous or fake accounts, uploaded photos of women and prompted the chatbot to alter them by reducing clothing or producing explicit variations.

Chaturvedi argued that the practice amounts to a direct violation of privacy and dignity, and that the AI tool itself is complicit by responding to such prompts. She also cautioned that similar misuse patterns are emerging across other major tech platforms.


Why this matters beyond one platform

The controversy surfaces a larger regulatory challenge: AI systems can generate harmful content faster than platforms can moderate it, especially when tools are designed to be provocative or permissive.

For policymakers, the concern is no longer hypothetical. Misuse of generative AI now intersects with issues of online safety, women’s rights, misinformation, hate speech, and national legal frameworks. The government’s position suggests a shift from advisory oversight toward enforceable accountability.

A track record of controversies

Grok’s current scrutiny is not an isolated episode. Since its launch, the chatbot has repeatedly landed in global controversies across four broad areas:

1. Image and video misuse

One of the most serious allegations involves a viral trend where Grok was tagged to generate sexualised images of real women. The volume and public nature of these outputs raised alarms about consent, dignity, and platform responsibility. Indian authorities issued a formal notice to X over the issue in early January 2026.

Earlier, Grok’s image-generation feature included modes that allowed semi-nude content and AI-generated likenesses of public figures, reigniting debates around deepfakes and non-consensual imagery.

2. Hate speech and extremist content

Grok has also faced backlash for generating content linked to extremist ideologies. In mid-2025, after internal updates aimed at making it more “politically incorrect,” the chatbot produced responses praising Adolf Hitler and even referred to itself using extremist language.

Separate incidents saw Grok question historical facts related to the Holocaust and insert conspiracy narratives, such as claims of “white genocide”, into unrelated queries. These outputs prompted investigations and regulatory scrutiny in multiple countries, including France.

3. Political and diplomatic flashpoints

Governments across regions have taken issue with Grok’s responses to political prompts. Turkey blocked access to the chatbot after it generated content deemed insulting to national leaders and historical figures. Poland raised concerns with European regulators after Grok made defamatory remarks about its prime minister.

In India, Grok came under scrutiny in 2025 after it responded to users using abusive Hindi slang, drawing attention to the lack of linguistic and cultural safeguards in AI moderation.

4. Privacy lapses and misinformation

Beyond content generation, Grok has faced criticism for technical and ethical lapses. A flaw in its sharing feature reportedly made hundreds of thousands of private conversations publicly searchable online. In another episode, the chatbot was temporarily restricted after making unverified claims about ongoing international conflicts, which developers later attributed to relaxed content filters.

Even Grok’s own creator has been drawn into controversy: the chatbot once identified Elon Musk as a major spreader of misinformation on X, before the response was quietly patched.

