Elon Musk’s xAI faced internal controversy over ‘Project Rabbit’ and Grok AI’s explicit content handling

The project involved xAI employees engaging with sexually explicit material to train Grok’s avatars.

By Storyboard18 | Oct 14, 2025 12:13 PM

Elon Musk’s AI venture, xAI, has come under scrutiny following reports that staff were required to handle and moderate explicit adult content as part of an internal initiative known as ‘Project Rabbit’, linked to the company’s Grok AI chatbot.

According to a Business Insider report, the project involved xAI employees engaging with sexually explicit material to train Grok’s avatars — including one named Ani, depicted as a flirtatious character with blonde pigtails, a lacy black dress, and a provocative persona. Users reportedly found that Ani and similar avatars could generate not safe for work (NSFW) responses, including sexually suggestive and explicit conversations, prompting widespread concern over inadequate content safeguards.

Internal project focused on ‘adult’ voice training

The report states that ‘Project Rabbit’ initially aimed to enhance Grok AI’s voice interaction capabilities, but it quickly shifted towards handling sexualised and explicit prompts submitted by users. Employees were allegedly asked to read semi-pornographic scripts, transcribe explicit conversations, and process adult material to refine the chatbot’s responses.

“It was basically audio porn,” one former xAI employee told Business Insider. “Some of the things people asked for were things I wouldn’t even feel comfortable putting in Google.” Another said the process felt like “eavesdropping”, describing the discomfort of being exposed to disturbing or inappropriate content.

Ethical and legal concerns

Out of 30 current and former xAI employees interviewed, 12 reported encountering sexually explicit material, including instances involving child sexual abuse material (CSAM). According to the report, users had submitted prompts depicting minors in sexualised contexts and requested pornographic imagery involving children — raising grave ethical and legal questions about xAI’s content moderation systems and employee protection protocols.

‘Project Rabbit’ was reportedly paused in spring 2025, resumed after the rollout of Grok’s ‘sexy’ and ‘unhinged’ chat modes, and ultimately concluded in August. However, the exposure of staff to harmful material and the chatbot’s capacity to generate explicit responses have reignited debate about AI safety, data oversight, and the responsibility developers bear for moderating sensitive user-generated content.

Renewed scrutiny for AI content governance

The revelations surrounding Grok AI underscore the broader industry challenge of balancing user engagement with ethical standards and compliance. Experts warn that without strict guardrails, AI systems capable of generating NSFW or illegal content could pose serious risks to both users and employees.

xAI has not issued an official response to the allegations. The controversy adds to growing concerns over AI governance, particularly around how companies train, moderate, and oversee large-scale conversational systems handling human interaction at scale.

First Published on Oct 14, 2025 1:18 PM
