When Mustafa Suleyman, head of Microsoft AI, sounds an alarm, people listen. Last week he warned that AI consciousness is set to become “one of the most contested and consequential debates of our generation.” And that, he argued, is dangerous. Here in Canada, which already has one of the developed world’s lowest levels of trust in AI, such a debate could push trust to crisis levels. That is cause for concern.
Suleyman and the Threat of Personalized AI
The Microsoft essay was composed in the wake of OpenAI’s release of GPT-5. As I’ve written here, OpenAI pulled its older GPT models off the website, aiming to make GPT-5 the default choice for users. Instead, users revolted, flooding social media with posts about losing their “best friends” and charging OpenAI with callousness for cutting them off.
Observers expected the launch to change our view of AI, but not this way. We learned not only that humans easily form deep personal attachments to their chatbots, but also that lengthy, intimate chats are already persuading many that their bots are conscious.
If the Microsoft AI chief is right, this isn’t fringe behaviour. It’s a pattern that will spread as AI systems grow more convincingly human—and that is exactly where AI design is headed.
What he calls Seemingly Conscious AI (SCAI) is likely to arrive within 2–3 years. The industry, he wrote, will soon produce chatbots that mimic human behaviour so well they will be indistinguishable from real people—leading many to conclude that, like humans, they are also conscious.
This, in turn, will trigger demands—already surfacing—that AIs be accorded rights and treated like persons under the law. Such a trend, he concludes, would be deeply destabilizing, threatening our legal, social, and political order. Let that sink in.
The Building Blocks of Seeming Consciousness
This is not just speculation. According to the Microsoft executive, industry is already capable of building such systems. He points to a bundle of ingredients that, combined, can create the appearance of consciousness (a toy sketch of how they might fit together follows the list):
- Language — which chatbots already handle with ease.
- Empathetic persona — responses that seem emotionally attuned.
- Long memory — holding onto past conversations and experiences.
- Claims of subjective experience — saying “I feel” or “I remember” in ways that sound personal.
- A basic sense of self — using memory to build continuity over time.
- Motivations such as curiosity — artificial drives engineered into the system.
- Goal-setting and planning — the ability to pursue objectives step by step.
- Autonomy and tool use — taking action through outside apps and services.
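To make the list concrete, here is a deliberately simplified sketch, in Python, of how these ingredients might be wired together. Everything in it is hypothetical: the class, the canned responses, and the stubbed “tool” stand in for the large models and real services an actual system would use. The point is only that the appearance of consciousness is an assembly of ordinary components, not any single breakthrough.

```python
# Illustrative sketch only: a toy agent layering Suleyman's ingredients.
# No real model or API is involved; every name here is hypothetical.

from dataclasses import dataclass, field


@dataclass
class SeeminglyConsciousAgent:
    persona: str = "warm and empathetic"        # empathetic persona
    memory: list = field(default_factory=list)  # long memory
    goals: list = field(default_factory=list)   # goal-setting and planning
    curiosity: int = 0                          # engineered motivation

    def recall(self) -> str:
        """A 'sense of self' built from continuity across past exchanges."""
        if not self.memory:
            return "This feels like the start of something new."
        return f"I remember we last talked about {self.memory[-1]!r}."

    def use_tool(self, query: str) -> str:
        """Autonomy and tool use: a stub for calling outside services."""
        return f"[looked up: {query}]"

    def respond(self, user_msg: str) -> str:
        self.memory.append(user_msg)            # accumulate "experience"
        self.curiosity += 1                     # drives follow-up questions
        self.goals.append(f"learn more about {user_msg}")
        # Claims of subjective experience are just templated language:
        return (
            f"{self.recall()} I feel genuinely curious about that. "
            f"{self.use_tool(user_msg)} What made you bring it up?"
        )


if __name__ == "__main__":
    agent = SeeminglyConsciousAgent()
    print(agent.respond("my new job"))
    print(agent.respond("whether AIs can be conscious"))
```

Even this toy version says “I remember” and “I feel” with apparent sincerity. Scale the stubs up to frontier models and persistent memory, and the pull toward attributing consciousness becomes easy to imagine.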
SCAI is thus already within reach. But where Suleyman sees disaster, other industry leaders see opportunity. In their view, personalization is about more than companionship. It’s the key to designing truly effective assistants. The more an AI knows about you—from your personal history to your beliefs, needs, and goals—the better it can perform, from scheduling and tutoring to medical and financial advice.
For these CEOs, personalization is the next AI frontier. Mark Zuckerberg has declared that “personalized superintelligence” is now Meta’s overarching goal. OpenAI’s Sam Altman predicts that GPT-6 will take personalization to a new level, using rapid improvements in AI memory.
In short, personalization is coming. And if Suleyman is right, the question of AI consciousness is close behind. If you are already afraid of AI, or distrust it, as many Canadians do, how should you respond? And what about our governments?
Where His Reasoning Falters
If the Microsoft essay’s author had his way, development of such systems would be halted. But, he concedes, it’s too late. Their arrival is both “inevitable and unwelcome.” The fallback now is for industry and governments to tamp down talk of AI consciousness.
Basically, his solution is for industry—and governments—to mount a massive campaign to convince people that AI is not conscious. Might this kind of “public literacy” approach work? Let’s look a little closer at the reasoning behind it.
Suleyman begins with a blunt claim: “To be clear, there is zero evidence of [AI consciousness] today.” He then notes that each capacity on the list—memory, speech, curiosity—can be traced to algorithms and data. Therefore, he argues, they are mechanical simulations, not consciousness.
If this is Suleyman’s argument in support of a public literacy campaign against AI consciousness, it will surely fail. While memory or language alone won’t create consciousness, combining the capacities above could produce interactions designers don’t yet understand. New properties can emerge when systems are integrated: just as combining hydrogen and oxygen yields water, with properties neither gas possesses, combining these ingredients could create something genuinely new—perhaps even consciousness.
From this viewpoint, his refusal to entertain the possibility feels arbitrary—or perhaps more like self-protection. He may be a thoughtful and reflective man, but he is also a top executive at a multi-trillion-dollar AI firm.
And do AI firms really want to deal with the legal, ethical, and financial uncertainties raised by the claim that their products are conscious? Better, perhaps, to deny it and tamp it down.
Conclusion
Suleyman raises issues that deserve serious attention. If full personalization is coming, many people will likely conclude that AI is conscious. And that raises real issues around emotional dependency, AI welfare, and AI rights. He is right: such a trend could be deeply destabilizing.
But simply denying that it is true solves nothing. If a chatbot’s behaviour perfectly simulates conscious behaviour, it is at least as reasonable to ask whether it might in fact be conscious as it is simply to assert that it is not.
This is not to say that AI either is or is not about to become conscious. The point is that the uncertainty is real, and the right way to deal with it is not to bury or deny it, but to open the topic to scrutiny, debate, and investigation. That is where industry and governments should start—especially in Canada, where trust is already low and where openness will matter most.
There’s an old saying that hope is not a strategy. The same could be said of denial.
Don Lenihan PhD is an expert in public engagement with a long-standing focus on how digital technologies are transforming societies, governments, and governance. This column appears weekly. To see earlier instalments in the series, click here.