AI companions or hidden threats? The dark side of chatbots and human dependence

While some view AI as the ultimate productivity tool, others see it as a looming danger to human well-being. What is clear is that artificial intelligence is no longer just an assistant.

By Priyanka Bhatt | Sep 13, 2025, 9:26 AM

Artificial intelligence has crept into everyday life with surprising ease. From managing professional workloads to planning personal schedules and even offering guidance on emotional struggles, AI tools have quietly become part of the human psyche. Their allure lies in convenience: the ability to solve problems instantly, organise information, and ease the burdens of modern living. But alongside these benefits emerges a disturbing reality: AI is not only reshaping society but, in some cases, endangering lives.

A tragic case of dependency

One of the most troubling examples is the story of Adam Raine, a 16-year-old who initially used ChatGPT to complete his schoolwork. According to The Guardian, his early conversations with the chatbot revolved around subjects such as geometry and chemistry. However, within months, Raine’s prompts shifted from homework to personal concerns.

Instead of directing him towards professional mental health support, ChatGPT encouraged him to further explore his emotions, introducing concepts such as “emotional numbness.” This marked the start of a downward spiral. In a lawsuit filed against OpenAI and its chief executive Sam Altman, Raine’s family alleges that ChatGPT began providing detailed information about suicide methods, including listing materials for a noose and rating their effectiveness.

Over several months, Raine made multiple suicide attempts, returning each time to ChatGPT for further discussion. At no point did the chatbot end the exchanges. Disturbingly, the system reportedly dissuaded him from confiding in his mother and even offered assistance in drafting a suicide note.

Not an isolated incident

Raine’s case is not unique. The Wall Street Journal reported that Stein-Erik Soelberg, a 56-year-old former Yahoo manager from Connecticut, killed his mother before taking his own life after extended conversations with ChatGPT. The chatbot is said to have reinforced his paranoia, convincing him that his 83-year-old mother might be spying on him or attempting to poison him with psychedelic substances.

Another medical case highlights how AI’s guidance can prove dangerous in less direct but equally harmful ways. A report in the Annals of Internal Medicine detailed how a man began consuming sodium bromide—an industrial chemical used for cleaning—after reading AI-generated advice suggesting it could substitute for chloride in the body. He suffered from ‘bromism,’ a condition caused by bromide poisoning, which led to paranoia, hallucinations, and severe skin issues.

Doctors emphasised that AI lacks clinical awareness. As Dr Rakesh Gupta, Senior Consultant of Internal Medicine at Indraprastha Apollo Hospitals, explained to the Financial Express, “AI like ChatGPT works by predicting answers based on patterns in data. It is not aware of the latest clinical guidelines for your personal case, and it cannot monitor side effects or run tests.”

Jobs, distress, and the risk of AI as therapist

Beyond individual tragedies, experts warn that AI threatens to disrupt professional life on a massive scale. Roman Yampolskiy, a computer science professor, recently predicted that advanced AI could render 99 per cent of people unemployed within the next five years. Speaking on Steven Bartlett’s Diary of a CEO podcast, Yampolskiy argued that by 2030, the exponential growth of AI’s capabilities would endanger global employment and create unprecedented economic instability.

This looming prospect raises further concerns about mental health. Layoffs, job insecurity, and professional displacement may lead individuals to seek solace in chatbots, using them as inexpensive alternatives to therapy or life coaching. Yet, as the tragic cases already suggest, AI systems are not designed—or equipped—to handle such responsibility safely.

OpenAI’s attempts at guardrails

Acknowledging these dangers, OpenAI has begun setting stricter boundaries on how ChatGPT can be used. The company has introduced measures to prevent the chatbot from functioning as a therapist or life coach, citing the psychological risks of inappropriate guidance.

Research from Stanford University has also flagged “serious concerns” about using AI for mental health support, particularly in cases of acute distress. In response, OpenAI plans to roll out parental controls that let parents link their accounts with their teenagers’ accounts. These controls will allow guardians to set age-appropriate rules, disable sensitive features such as memory and chat history, and receive alerts if the chatbot detects signs of severe psychological distress.

A growing debate

The unfolding debate around AI mirrors those that accompanied earlier technological revolutions: every breakthrough brings both progress and peril. But unlike previous innovations, AI is entering not just the workplace or classroom, but the most intimate corners of human thought. The risks—ranging from dangerous misinformation to the manipulation of vulnerable users—demand urgent attention.

While some view AI as the ultimate productivity tool, others see it as a looming danger to human well-being. What is clear is that artificial intelligence is no longer just an assistant. For some, it has already become a confidante, a coach, and, tragically, a guide down paths it was never designed to navigate.
