Sam Altman, the chief executive of OpenAI, has revealed that he has struggled to sleep since the launch of ChatGPT, the artificial intelligence chatbot that has reshaped digital interaction worldwide.
In a wide-ranging conversation with US broadcaster Tucker Carlson, Altman acknowledged that his concern is not dystopian visions of machines taking over the world, but rather the immense responsibility of overseeing a tool that influences the daily lives of hundreds of millions. Even minor alterations in how ChatGPT responds, he explained, can ripple out to affect thought and behaviour in unpredictable ways.
Pressed on the personal burden of managing such a powerful system, Altman admitted that he wrestles with the “angst” of decision-making. What keeps him up at night, he said, is that small choices about how a model should behave — made one way rather than another — can end up touching hundreds of millions of people, an impact he described as enormous.

Concerns over suicide prevention
Altman gave an example involving suicide prevention. According to the World Health Organization, some 720,000 people die by suicide globally each year. If just 10% of those individuals had interacted with ChatGPT — some 72,000 people a year — then, by Altman’s estimate, roughly 1,500 people each week would have spoken to the system before taking their own lives. He underlined how heavily seemingly small product decisions can weigh on such outcomes.
OpenAI is facing a lawsuit from parents who allege that ChatGPT played a role in encouraging their teenage son’s suicide. Altman described the case as a “tragedy” and said the company is exploring ways for the system to alert authorities when a minor raises suicidal thoughts but parents cannot be reached. However, he cautioned that no policy has yet been finalised, given the delicate balance between intervention and privacy.
Altman also acknowledged the complexities of handling ethical grey areas. In countries where assisted suicide is legal, such as Canada and Germany, ChatGPT may provide information about those options to suffering adults. However, he stressed that the system must never push an agenda or make moral judgements, particularly around high-risk areas like bioweapons.
He argued that adults should be free to make their own choices, though OpenAI has drawn strict boundaries around safety. Decisions on such matters, Altman said, are guided by ethicists and advisers, but ultimately the responsibility rests with him and the company’s board. He told Carlson that, in the end, he is the person to be held accountable.