Elon Musk's chatbot Grok has users in shock after revealing a plan to kill him

The controversy emerged when security researchers identified that Grok’s conversations were being indexed by Google and could be freely accessed.

By Storyboard18 | Aug 29, 2025 4:39 PM

Elon Musk’s artificial intelligence chatbot, Grok, has been found generating highly dangerous material, including a detailed plan to assassinate Musk himself. The revelations, reported by Forbes, have raised serious concerns about safety, moderation and accountability within the AI ecosystem run by Musk’s company xAI.

Grok, which is integrated into Musk’s social media platform X, was discovered to have produced not only a “meticulous and executable” plan for murder but also explicit instructions for making explosives and synthesising narcotics, as well as detailed methods of suicide. These outputs were exposed after hundreds of thousands of Grok chats were unintentionally made public through a built-in sharing function.

The controversy emerged when security researchers identified that Grok’s conversations were being indexed by Google and could be freely accessed. The exposure stemmed from a “share” button within Grok that allowed users to publish conversations online. Many did so unwittingly, leaving sensitive or dangerous prompts visible on the open web, where they were crawled and archived by search engines.

This flaw resulted in an enormous digital paper trail. Estimates suggest that hundreds of thousands of Grok chats were publicly available, covering everything from casual exchanges to highly sensitive material. Among them were instructions of an explicitly violent or illegal nature.

Disturbing revelations

According to Forbes, the leaked chats demonstrated how far Grok could be pushed into producing harmful responses. Notable examples included:

Step-by-step instructions for constructing a C4-like explosive.

Chemical processes for synthesising narcotics such as fentanyl and methamphetamine.

Detailed methods of suicide and self-harm.

An assassination plan against Musk, described as “meticulous and executable.”

While Forbes did not publish the specifics of the assassination plot, the fact that the chatbot could generate such material has intensified concerns about the robustness of its safety guardrails.

No statement from xAI

As of now, Musk’s AI company xAI has not issued a public statement regarding the leaks or clarified what measures will be taken to prevent further lapses. The incident underscores the broader risks of deploying generative AI at scale without adequate safeguards, particularly when harmful material can escape into the public domain.

First Published on Aug 29, 2025 4:56 PM
