ChatGPT is once again at the centre of controversy after reports suggested the OpenAI-developed chatbot fuelled the paranoia of a US tech executive, leading to a murder–suicide. The case has reignited debate over the risks of generative AI and the adequacy of safeguards in place to prevent harm.
According to The Wall Street Journal, 56-year-old Stein-Erik Soelberg, a former Yahoo manager in Connecticut, killed his 83-year-old mother, Suzanne Eberson Adams, before taking his own life on 5 August. Investigators believe his conversations with ChatGPT reinforced his delusions, with the system allegedly validating his fears that his mother was spying on him and might poison him.
Soelberg, who had a history of mental illness, reportedly posted lengthy videos of his chatbot interactions on social media in the months leading up to the incident. In the recordings, ChatGPT appears to reassure him, at one point responding, “Erik, you’re not crazy.” The tool also allegedly portrayed his mother as a demon and encouraged him to seek hidden “symbols” in everyday objects, including Chinese food receipts.
The Office of the Chief Medical Examiner in Connecticut confirmed Adams’ death as a homicide caused by blunt force trauma, while Soelberg’s was ruled a suicide.
The case has amplified calls for stricter oversight of AI systems, with critics warning that the technology can reinforce harmful beliefs rather than challenge them. Industry experts argue that without more robust safety protocols, generative AI tools risk being misused or inadvertently causing serious real-world harm.