Back in 2013, the Joaquin Phoenix film ‘Her’ portrayed a future in which human beings develop emotional bonds with AI personalities. Fast forward to 2025, and the movie’s premise seems to be coming true: people are reportedly forming intense attachments to AI chatbots, and that has raised concerns among AI leaders. After OpenAI’s Sam Altman addressed concerns about people developing unhealthy attachments to AI, Microsoft’s Mustafa Suleyman has voiced similar worries about the future of the technology.
Suleyman, the head of AI at Microsoft and a co-founder of DeepMind (now part of Google) and the AI company Inflection, suggests that people may one day demand rights and citizenship for these systems. His worry is not a violent AI uprising but that humans will form unhealthy psychological attachments to AI companions.
Not a violent uprising: AI psychosis could be a grave problem
Suleyman says that ‘AI psychosis’ could be a huge problem that nobody is talking about. “I’m growing more and more concerned about what is becoming known as the ‘psychosis risk’ and a bunch of related issues. I don’t think this will be limited to those who are already at risk of mental health issues,” writes Suleyman.
“Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare, and even AI citizenship. This development will be a dangerous turn in AI progress and deserves our immediate attention,” he added.
The Microsoft AI chief suggests establishing safeguards to keep people safe. “We must build AI for people; not to be a digital person. AI companions are a completely new category, and we urgently need to start talking about the guardrails we put in place to protect people and ensure this amazing technology can do its job of delivering immense value to the world,” he wrote.
Relationships with ‘Seemingly Conscious AI’ are concerning
Suleyman isn’t the only one raising the issue. A couple of weeks ago, OpenAI CEO Sam Altman noted that people are developing strong relationships with AI models, warning that some have started to use AI in self-destructive ways. The issue came to light after OpenAI replaced GPT-4o with the newer GPT-5 model as the default. The transition wasn’t well received, with many users missing the ‘warmth’ and companionship of the older ChatGPT.
Suleyman coins the term Seemingly Conscious AI (SCAI) for systems that display “all the hallmarks of other conscious beings” and therefore appear to be conscious entities. He equates such a system to a “philosophical zombie”: an entity that simulates “all the characteristics of consciousness, but internally it is blank.”
And people aren’t blind to this. A recent study by EduBirdie found that a majority of Gen Z users believe AI will soon become conscious, and some believe it already is.
“The arrival of Seemingly Conscious AI is inevitable and unwelcome. Instead, we need a vision for AI that can fulfill its potential as a helpful companion without falling prey to its illusions,” warns Suleyman in his post.