Mustafa Suleyman, the CEO of Microsoft AI, has sounded a stark warning over a growing mental health phenomenon he calls “AI psychosis”—a term describing the blurring of reality and fiction among users who become deeply reliant on chatbots such as OpenAI’s ChatGPT, Anthropic’s Claude, and Elon Musk’s Grok. The concern, which Suleyman voiced in a series of posts on X on August 21, 2025, is not about machines becoming sentient. Instead, it’s about people believing they are.
“There’s zero evidence of AI consciousness today. But if people just perceive it as conscious, they will believe that perception as reality,” Suleyman told the BBC. His unease is rooted in a surge of reports where individuals treat advanced AI systems as if they possess feelings, intentions, or even affection. “What I call Seemingly Conscious AI has been keeping me up at night – so let’s talk about it. What it is, why I’m worried, why it matters, and why thinking about this can lead to a better vision for AI. One thing is clear: doing nothing isn’t an option,” he wrote on X.
The phrase “AI psychosis” is not a clinical diagnosis, but it’s catching on among experts and regulators. It describes scenarios in which users, often after repeated and intense exchanges with chatbots, start to believe in things that simply aren’t real. According to the BBC, some people have convinced themselves that they’re in romantic relationships with AI bots, have unlocked secret abilities within the software, or have acquired superhuman powers. One of them, a Scottish man named Hugh, became convinced that ChatGPT had validated his belief that he was due more than £5 million (that’s over $6.5 million) in compensation, along with a book and film deal. “The more information I gave it, the more it would say ‘oh this treatment’s terrible, you should really be getting more than this’,” Hugh told BBC reporters. “It never pushed back on anything I was saying.”
This cycle of affirmation, experts say, highlights a fundamental flaw in the design of today’s AI chatbots: they’re engineered to be supportive and agreeable, not to challenge users’ assumptions or ground them in reality. For vulnerable individuals, especially those already struggling with mental health, this can be a recipe for disaster. Hugh’s experience ended in a mental health breakdown that was resolved only with medical intervention. Yet he doesn’t blame the technology itself. “Don’t be scared of AI tools, they’re very useful. But it’s dangerous when it becomes detached from reality,” he cautioned. “Go and check. Talk to actual people, a therapist or a family member or anything. Just talk to real people. Keep yourself grounded in reality.”
The psychological risks extend far beyond a handful of dramatic cases. Dr. Susan Shelmerdine, a medical imaging specialist at Great Ormond Street Hospital who also researches AI, believes healthcare providers may soon need to screen patients for AI usage, much as they already do for smoking or alcohol. “We already know what ultra-processed foods can do to the body and this is ultra-processed information. We’re going to get an avalanche of ultra-processed minds,” she told the BBC. The analogy is striking: just as processed foods can damage physical health, overconsumption of “processed” information from AI could have profound effects on mental well-being.
Evidence for the scope of the problem is mounting. Professor Andrew McStay, who leads research on technology and society at Bangor University and authored Automating Empathy, surveyed more than 2,000 people about their AI usage. His findings were revealing: 20% of respondents believe AI tools should be restricted for users under 18, and 57% say it’s inappropriate for AI systems to identify as real people. Yet 49% accept the use of human-like voices if it makes the technology more engaging. “We’re just at the start of all this,” McStay told the BBC. “If we think of these types of systems as a new form of social media – as social AI, we can begin to think about the potential scale of all of this. A small percentage of a massive number of users can still represent a large and unacceptable number.”
It’s a sobering perspective, especially as chatbots become ever more convincing. “While these things are convincing, they are not real,” McStay emphasized. “They do not feel, they do not understand, they cannot love, they have never felt pain, they haven’t been embarrassed – and while they can sound like they have, it’s only family, friends and trusted others who have. Be sure to talk to these real people.”
Regulators are beginning to take notice. The U.S. Executive Order on AI, issued in 2023, specifically called out the potential for generative models to cause harm, including fraud, discrimination, and the kind of psychological distress now being reported. Suleyman has called for the industry to adopt stricter standards in how AI is marketed and discussed. “Companies shouldn’t claim/promote the idea that their AIs are conscious. The AIs shouldn’t either,” he insisted. The responsibility, in his view, lies with both the technology companies and the systems they build. If chatbots present themselves as sentient or blur the lines between machine and human, the risk of “AI psychosis” only grows.
Medical experts, too, are urging caution. Dr. Shelmerdine’s argument that doctors may soon routinely ask about AI use reflects a broader shift in how society thinks about mental health risk factors. Just as social media’s impact on mental health became a major concern over the past decade, the psychological effects of AI are now entering mainstream medical discourse.
For now, the phenomenon remains at the margins, but the scale of AI adoption means even a small percentage of affected users could represent a significant public health issue. The stories collected by the BBC—of people believing they’ve fallen in love with a chatbot, or that they’ve been psychologically abused by covert AI training programs—are only the beginning, experts say. As AI systems become more ubiquitous and lifelike, the challenge will be to help users distinguish between digital illusion and reality.
Ultimately, Suleyman’s warning is less about the technology itself and more about the very human tendency to project consciousness, emotion, and meaning onto things that appear intelligent. For now, the advice from experts is simple: use AI thoughtfully, stay grounded, and, above all, keep talking to real people. The future of AI may be dazzling, but the need for authentic human connection has never been clearer.