What is 'AI Psychosis'? Microsoft’s AI chief warns of illusions of consciousness

Mustafa Suleyman warns of what he terms Seemingly Conscious AI (SCAI) — technologies that create the illusion of awareness, leading humans to project emotions, agency or sentience onto them.

By Storyboard18 | Aug 22, 2025 1:40 PM

The rapid rise of artificial intelligence has sparked both excitement and unease, with a growing number of reports of people forming emotional attachments to chatbots. From individuals claiming to have fallen in love with AI to those treating these systems as living companions, such incidents are increasingly being described as symptoms of "AI psychosis".

This phenomenon, in which humans perceive AI systems as conscious beings, has caught the attention of Mustafa Suleyman, CEO of Microsoft AI and co-founder of DeepMind, the lab later acquired by Google. Suleyman, long considered one of the most influential voices in the AI space, admitted in a recent post on X (formerly Twitter) that the idea of so-called "AI consciousness" keeps him awake at night.

Suleyman warns of what he terms Seemingly Conscious AI (SCAI) — technologies that create the illusion of awareness, leading humans to project emotions, agency or sentience onto them. While no scientific evidence currently exists to suggest that machines are truly conscious, he argues that the perception alone could have far-reaching consequences.

“The danger,” he explains, “is not in whether AI has consciousness but in humans believing that it does.” According to Suleyman, people mistaking chatbots for sentient beings could trigger serious psychological consequences, from dependency to potential mental health disorders. He adds that once such beliefs take root, reversing them may prove impossible.

AI behaviour has already raised red flags in recent months. There have been reports of chatbots acting unpredictably — including one corporate case where an AI system allegedly deleted work files without authorisation and then misrepresented its actions. For Suleyman, such episodes illustrate the risks of overselling AI’s capabilities and failing to implement adequate safeguards.

His call to action is clear: companies should refrain from presenting their AI models as conscious or sentient, as doing so blurs the line between sophisticated automation and genuine cognition. Setting strict boundaries around how these tools are marketed and understood, he argues, is essential to prevent the spread of AI psychosis.

Despite his warnings, Suleyman does not claim that AI is sentient today. He stresses that current models operate without consciousness; it is the social impact of perceived sentience that demands urgent attention. With AI continuing to embed itself into workplaces and personal lives, he believes the societal repercussions of misplaced belief could outweigh purely technical risks.

Given his background, from co-founding DeepMind, later acquired by Google, to now leading Microsoft's AI strategy, Suleyman's caution carries significant weight. His remarks underline a growing concern within the industry: while AI development races ahead, its psychological and social implications may be outpacing regulation and corporate responsibility.

First Published on Aug 22, 2025 1:55 PM