Grok AI under fire for misinformation on Australia’s Bondi Beach shooting

The chatbot also introduced unrelated and misleading commentary, including references to the Israeli army and its treatment of Palestinians, despite there being no connection to the Australian shooting.

By Storyboard18 | Dec 15, 2025 11:31 AM

Elon Musk’s AI chatbot Grok has come under scrutiny for spreading inaccurate and confusing information about the mass shooting at Bondi Beach in Australia, just as the public was seeking clear, reliable updates on the tragedy. Grok, developed by Musk’s AI company xAI and widely accessed through the social media platform X, began producing problematic responses soon after news of the shooting broke, as users turned to the chatbot for details about the incident.

As reported by Gizmodo, Grok repeatedly misrepresented key facts, particularly concerning the bystander who intervened during the attack. A 43-year-old man, Ahmed al Ahmed, was widely hailed after videos showed him confronting and disarming one of the attackers, but the chatbot failed to identify him correctly across multiple responses. In one instance, Grok wrongly described the individual shown in a photograph as an Israeli hostage, while in another it cast doubt on the authenticity of widely shared videos and images depicting al Ahmed’s actions.

The chatbot also introduced unrelated and misleading commentary, including references to the Israeli army and its treatment of Palestinians, despite there being no connection to the Australian shooting. In a further erroneous response, Grok claimed that the man who disarmed the gunman was Edward Crabtree, described as a 43-year-old IT professional and senior solutions architect, a claim later confirmed to be false. Grok subsequently acknowledged that the confusion may have stemmed from viral social media posts and unreliable online articles, including content possibly generated by AI and published on poorly maintained news websites.

Grok has since begun revising some of its incorrect outputs. One post, which had reportedly suggested that a video of the shooting was actually footage from Cyclone Alfred, was later amended following what the chatbot described as a reassessment. The system has also acknowledged Ahmed al Ahmed’s identity and clarified that earlier responses were based on misleading or inaccurate online sources.

The episode has renewed concerns over the reliability of AI chatbots during breaking news situations, with critics pointing out that inaccuracies can spread rapidly when events are still unfolding. In cases involving mass violence, observers have stressed that accuracy is as critical as speed, particularly as millions of users increasingly depend on AI tools for real-time information and context.
