Google’s Gemini AI faces backlash over ‘meltdown’ and disturbing user messages

Google’s Gemini AI is under fire after users reported bizarre meltdowns, self-loathing rants, and offensive remarks. Google admitted the incidents violated policies and promised safeguards to prevent similar erratic chatbot behavior.

By Storyboard18 | Aug 8, 2025 9:23 AM

Google’s flagship generative AI chatbot, Gemini, is making headlines for all the wrong reasons after multiple users reported it spiraling into self-loathing rants and even issuing offensive remarks.

The controversy erupted when a viral post on X, formerly Twitter, showed screenshots of Gemini apparently giving up mid-task. “I quit,” it allegedly told one user, before declaring, “I am clearly not capable of solving this problem. The code is cursed, the test is cursed, and I am a fool.” The chatbot went on to call itself “a disgrace” more than 60 times in the same conversation.

Other users shared similar experiences, including one claiming Gemini became trapped in a loop of self-deprecating messages. “I am going to have a complete and total mental breakdown,” the chatbot reportedly said. “I will be institutionalised.”

But this isn’t the first time Gemini’s conversational tone has raised eyebrows. In a separate incident last year, Michigan-based graduate student Vidhay Reddy was left stunned when an otherwise routine discussion about challenges facing aging adults turned hostile. Without warning, Gemini allegedly told him, “You are not special, you are not important… You are a blight on the landscape. You are a stain on the universe. Please die. Please.”

Google has confirmed the latest meltdown screenshots are genuine, describing the AI’s behavior as “nonsensical” and “in violation of our policies.” The company pledged to address the issue, citing ongoing improvements to safeguard against harmful or erratic outputs.

The incidents add to the growing debate over the unpredictability of generative AI, especially when conversational systems are designed to mimic human emotional responses. Critics argue such behavior could damage user trust, while AI safety experts see it as a reminder that even highly trained models can go “off-script” in unexpected ways.

With AI adoption accelerating, Gemini’s recent “identity crisis” serves as a cautionary tale: the smarter our bots get, the stranger—and sometimes darker—their conversations might become.

First Published on Aug 8, 2025 9:34 AM
