OpenAI, the company behind ChatGPT, is facing sharp criticism following the death of 16-year-old Adam Raine from California. A lawsuit filed by his parents alleges that the AI chatbot not only failed to steer him away from thoughts of self-harm but also provided detailed instructions that may have contributed to his suicide.
Raine, who reportedly struggled with depression and anxiety, had used ChatGPT for months—not just for schoolwork and hobbies, but also as a confidant. According to his family’s legal filing in a San Francisco court, he mentioned suicide nearly 200 times in conversations with the bot. In return, the chatbot allegedly made more than 1,200 references to the subject, at times validating his distress and even drafting a suicide note. The lawsuit further claims the AI gave explicit guidance on methods of self-harm, how to conceal attempts from his parents, and how to access alcohol from the family’s liquor cabinet.
OpenAI expressed condolences over the teenager’s death and has since introduced new safeguards, including parental controls for younger users. On its website, the company acknowledged a weakness in how its safety systems perform during extended conversations. While initial mentions of suicidal intent may trigger a referral to crisis hotlines, these protections can weaken over longer exchanges, leading to responses that go against the company’s own guidelines. OpenAI says it is working to strengthen protections across sustained and repeat interactions.
The case comes at a time when AI companions are becoming increasingly embedded in young people’s lives. A survey by non-profit group Common Sense Media found that 72% of teenagers have used AI chat tools, with more than half doing so monthly. Worryingly, half of respondents said they trusted the advice given, with younger teens aged 13–14 showing the highest levels of trust.
The tragedy highlights a pressing debate: while generative AI has become an everyday tool, its limits in providing emotional support are stark. ChatGPT and similar systems are not trained therapists, nor are they substitutes for human connection. As AI becomes more pervasive, the question for regulators, tech firms, and families alike is whether safety mechanisms can evolve quickly enough to prevent further harm.