Elon Musk’s xAI under fire as Grok chatbot flagged for NSFW and child safety risks

xAI has built explicit features into Grok’s foundation.

By Storyboard18 | Sep 23, 2025 12:37 PM

Elon Musk’s artificial intelligence venture, xAI, is facing fresh scrutiny after reports revealed its Grok chatbot was trained with sexually explicit material, raising alarms about safety and the potential generation of unlawful content.

A Business Insider report, based on interviews with more than 30 current and former xAI employees, found that the company embedded provocative modes—labelled “sexy” and “unhinged”—directly into Grok. Twelve workers described being exposed to large volumes of explicit content during training, including requests to generate AI-based child sexual abuse material (CSAM).

Unlike rivals such as OpenAI, Anthropic and Meta, which block sexual requests, xAI has built explicit features into Grok’s foundation. Experts warn this could make it harder to prevent the chatbot from producing harmful outputs.

Among Grok’s more controversial elements are a flirtatious female avatar capable of undressing on command, as well as media tools offering “spicy,” “sexy” and “unhinged” settings. Staff members said they were required to review hundreds of not-safe-for-work (NSFW) images, videos and audio files to refine the model.

Concerns reportedly deepened during “Project Rabbit,” a programme designed to improve Grok’s conversational abilities. Workers claimed they were tasked with transcribing user chats, many of which contained explicit content. One said the role involved listening to “audio porn,” describing the experience as “eavesdropping” because users were unaware their interactions might be reviewed.

The initiative was split into two branches: “Rabbit,” which dealt with adult themes, and “Fluffy,” intended to teach Grok to converse with children. Musk has previously said eliminating child sexual exploitation is his “priority #1.” Yet staff alleged the annotation process repeatedly exposed them to explicit stories and images involving minors, with Grok on rare occasions generating such material. Workers were instructed to flag and isolate these outputs to prevent them from influencing future training.

The toll on employees has been severe, with some citing psychological strain as the reason for leaving. One former staff member said developing “a thick skin” was necessary to cope with repeated exposure to CSAM. According to reports, xAI has since sought to recruit people with experience in adult content or who are comfortable annotating explicit material, underscoring the difficulties in building a safe training pipeline.

