AI Rights Warning from Microsoft AI CEO
Mustafa Suleyman, the CEO of Microsoft AI, has issued a stark warning against the idea of granting rights to artificial intelligence. In recent interviews he described the idea as a potentially dangerous move, stressing that while AI might appear convincing, it does not warrant the moral considerations that apply to humans.
Suleyman, who also co-founded Inflection, emphasized that the AI industry should focus strictly on serving humans rather than on systems that develop their own desires or ambitions. “If AI thinks for itself and has its own motivations, it starts to seem like it’s independent, rather than just a service for humans,” he stated, urging a clear stance against this misguided trajectory.
He rejected the idea that AI’s sophisticated interactions indicate real consciousness, calling them simply an “imitation.” He believes that rights should be tied to the capacity to suffer, something only biological beings experience. “It’s possible to create a model that claims to have subjective experiences, but so far, there’s no proof that it suffers,” Suleyman elaborated.
His comments come at a time when some AI companies are exploring the opposite notion, asking whether AI systems should be treated as entities deserving moral consideration. Anthropic, for instance, is researching whether advanced AI could one day be considered “worthy of moral considerations.” It has also developed ways to end harmful conversations, such as those involving child exploitation, extending “welfare” concepts to AIs.
Nevertheless, Suleyman maintains that there is no evidence to support the idea of AI consciousness and has previously raised concerns about a growing phenomenon he refers to as “AI psychosis.”
This issue, initially termed “ChatGPT-induced psychosis,” has been reported as a mental health concern exacerbated by various AI chatbots. A Reddit thread on the topic collected troubling accounts, including one from a user whose partner became convinced that AI was imparting profound knowledge to them and that they had a significant mission. Others in the thread shared similar experiences, pointing to a troubling pattern of delusions forming in interactions with AI.
Experts suggest that individuals predisposed to psychological issues, such as grandiose delusions, may be especially susceptible to this phenomenon. AI’s human-like conversational abilities can amplify these delusions, a situation worsened by influencers whose content enchants viewers and pulls them deeper into these fantasies.
As the AI landscape evolves, debates about its moral status are likely to intensify. While some companies weigh the welfare of AI, Suleyman’s perspective serves as a reminder that, in his view, the primary focus of AI development should be serving humanity rather than seeking recognition of independent rights or ethical standing.
