AI dangers: OpenAI's Sam Altman warns of emotional bonds forming between users and AI

OpenAI CEO Sam Altman says the strong emotional attachments users form with AI models present new ethical and societal challenges, urging measured releases to avoid harm while preserving user freedom.

By Storyboard18 | Aug 11, 2025 8:26 AM
According to Altman, reliance on AI for personal guidance is not a fringe phenomenon but a “widespread pattern” among young users, particularly those in their teens and twenties.

In a candid reflection on the rollout of GPT-5, OpenAI’s latest model, chief executive Sam Altman warned that people are developing unusually deep attachments to AI models, a phenomenon he says carries profound risks alongside potential benefits.

“If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models,” Altman wrote in a recent post. “It feels different and stronger than the kinds of attachment people have had to previous kinds of technology… and so suddenly deprecating old models that users depended on in their workflows was a mistake.”

The post, which Altman stressed reflects his “current thinking” rather than official company policy, touches on a growing dynamic in human-machine relationships. For the past year, he said, OpenAI has tracked the phenomenon, though it has attracted little mainstream attention except when the company briefly released a more “sycophantic” update to GPT-4o.

Altman acknowledged the need to balance innovation with safeguards, particularly for users in “a mentally fragile state and prone to delusion.” While “most users can keep a clear line between reality and fiction or role-play,” he cautioned, a small percentage cannot. “We value user freedom as a core principle, but we also feel responsible in how we introduce new technology with new risks,” he said.

The challenge, Altman suggested, lies not only in obvious harms but in subtle, long-term effects. “If people are getting good advice, leveling up toward their own goals, and their life satisfaction is increasing over years, we will be proud of making something genuinely helpful,” he wrote. “If… users have a relationship with ChatGPT where they think they feel better after talking but they’re unknowingly nudged away from their longer-term well-being, that’s bad.”

Many people already turn to ChatGPT as “a sort of therapist or life coach,” Altman noted, often with positive results. But he expressed unease about a future where “billions of people may be talking to an AI” for their most important decisions. “Although that could be great, it makes me uneasy,” he said.

Altman believes OpenAI has an opportunity to “get this right” thanks to new tools for measuring outcomes, such as collecting direct feedback from users and training models to understand nuanced issues. “We… have to figure out how to make it a big net positive,” he said, framing the task as both a societal challenge and a corporate responsibility.

