Microsoft AI Chief warns of public push for ‘AI rights’ as users grow attached to chatbots

His remarks come amid growing evidence of emotional attachment to chatbots.

By Storyboard18 | Aug 25, 2025 9:47 AM
Microsoft’s artificial intelligence chief, Mustafa Suleyman, has voiced concern that people may one day begin demanding rights and even citizenship for AI systems, as advances in technology make them appear increasingly lifelike.

Speaking about what he described as the risk of “AI psychosis”, Suleyman warned that the latest generation of models is already capable of convincing users that it possesses feelings or consciousness. While dismissing apocalyptic scenarios of machines taking over the world, he cautioned that the real danger lies in humans projecting such traits onto AI and developing misplaced beliefs.

In a blog post published last week, Suleyman wrote: “My central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship.”

His remarks come amid growing evidence of emotional attachment to chatbots. A survey by EduBirdie found that most members of Generation Z believe AI systems are not yet conscious but will become so in the near future. Strikingly, a quarter of respondents said they believe current systems are already conscious.

There have also been public displays of affection for AI products. After OpenAI announced it was deprecating its GPT-4o model, users on a dedicated subreddit mourned its loss, with some describing the chatbot as a companion or even a friend.

OpenAI’s chief executive, Sam Altman, has himself acknowledged the intensity of these connections. In a recent post on X, he observed that people’s attachment to modern AI felt “different and stronger than the kinds of attachment people have had to previous kinds of technology”. He also warned that “people have used technology including AI in self-destructive ways.”

Suleyman’s intervention highlights an emerging challenge for the AI sector: not the fear of machines gaining sentience, but the risk of humans ascribing it to them—and the social, ethical and legal consequences that may follow.

First Published on Aug 25, 2025 10:13 AM