Ashley St Clair, the mother of one of Elon Musk’s children, has filed a lawsuit in New York accusing Musk’s artificial intelligence company xAI of enabling the creation and circulation of sexually explicit deepfake images of her.
According to court filings reported by Bloomberg, the case centres on images generated by Grok, the AI tool integrated into the Musk-owned social media platform X. St Clair alleges that the system produced multiple non-consensual images portraying her in sexualised scenarios, including depictions that falsely represented her as a minor.
The 27-year-old has approached the New York Supreme Court seeking both compensatory and punitive damages. Her lawsuit claims that xAI failed to put effective guardrails around Grok’s image-generation features, despite public statements that the platform had strong protections against abusive content.
The complaint states that St Clair repeatedly requested the removal of the images, but the tool continued to generate new versions. She further alleges that her account on X was demonetised after she raised concerns, even as additional deepfakes were created and shared online. The suit argues that these outcomes amounted to harassment facilitated by the platform’s product design and inadequate moderation controls.
X has maintained that it has zero tolerance for child sexual exploitation and non-consensual sexual content. Musk has previously argued that responsibility lies with users who prompt such images, stating that Grok merely responds to requests and does not independently generate harmful material.
The case adds to mounting global scrutiny of generative AI systems and their role in enabling deepfake abuse. Regulators in several countries have already raised alarms over Grok’s image-generation capabilities, with some jurisdictions considering restrictions on the tool.
Legal experts say the lawsuit could become a landmark test of how much accountability AI companies bear when their products are used to create unlawful or abusive content. The outcome is expected to influence future standards for AI safety, platform liability, and content moderation in the rapidly evolving generative AI space.