The Carney government has hitched its economic reform agenda to AI—and for good reason. AI is reshaping everything from healthcare to defence. But there’s a catch: globally, Canadians rank near the bottom on trust in artificial intelligence. Abacus Data is picking up the same problem. As CEO David Coletto warns, that makes public support fragile. A savvy political leader—or an AI catastrophe—could easily trigger a backlash. AI Minister Evan Solomon has laid out a strategy to close this trust gap. But what would it take for you to trust AI?
Solomon’s Strategy
The Carney government believes AI can drive economic renewal, competitiveness, and sovereignty. At the heart of this strategy are what, in recent interviews, Solomon called the four pillars:
- Scaling domestic champions
- Increasing adoption
- Protecting sovereignty
- Building trust
The plan is timely—and vital. The government wants to build AI infrastructure that can transform Canadian businesses and assert economic sovereignty. But success depends on public buy-in. “Canadians need trust in order to try it out,” Solomon recently declared.
Except that, these days, trust is a scarce resource for governments. Rebuilding it is a significant task for any minister. For the Minister of AI, it is especially daunting.
The Trust Gap
In a recent global study of 47 countries, Canada ranked 42nd in public trust in AI—and 25th out of 30 among advanced economies. Just 34% of Canadians said they trust information from AI, compared to a global average of 46%. In countries such as China and India, trust rates are over 70%.
Coletto’s polling tells a similar story. Only 28% of Canadians believe AI’s impact on society will be mostly positive. Meanwhile, 34% expect it to be mostly negative, and 30% remain unsure.
This puts Solomon in a tight spot. On one hand, he recognizes that people need confidence in the technology to embrace it. On the other hand, he believes that too much regulation chokes innovation. Getting AI regulation right, he says, is critical to Canada’s “economic destiny.”
Solomon’s solution is “light, tight, right” regulation, which in practice mainly means an emphasis on privacy and data protection.
Given the scale of the trust gap, this sounds a little thin. If the government confined the use of AI to supporting businesses, it might suffice. But the plan is much broader than that. AI will also help power a new generation of nation-building projects—the “sovereignty” part of the plan.
So, ask yourself: Will better data protection and privacy really address these kinds of AI concerns?
What Kind of Trust?
Coletto doesn’t think so. His polling suggests Canadians’ concerns over AI go far beyond jobs and productivity. What’s really at stake, he argues, is something deeper: “It’s about trust, fairness, and the role of humans in an increasingly automated world.”
Other research points to a similar set of concerns, including:
- Lack of transparency and the sense that key decisions are being made “behind closed doors” (see here, here, and here)
- Erosion of personal freedoms and unchecked government power (see here and here)
- The spread of misinformation and manipulation of public opinion
- Long-term risks to human safety and the loss of control over AI systems
The takeaway for Solomon is that, when it comes to AI, the public feel they have no real part in shaping the future. They’re being asked to trust that government has already thought it through.
But we know just how tenuous this kind of trust is. Recent events show how quickly trust can collapse when people feel left out or left behind.
Everyone will recall the Freedom Convoy, which gridlocked central Ottawa in 2022. The protest was sparked by anger over the government’s COVID-19 vaccine policies. Ottawa failed to see it coming.
Could a crisis erupt around AI?
Coletto certainly thinks so. He warns that an enterprising political leader could easily exploit this mistrust. Others think leadership may not even be needed—just an event. They warn that a catastrophic AI event—such as a cyberattack, rogue system, or AI-enabled terrorism—is all but certain. Some experts place the odds of an extreme AI disaster—an “AI Chernobyl”—at over 10%.
Earning “Social License”
These concerns will surface—some already have. And few Canadians will be untouched.
But let’s be clear: the government cannot address every public fear about AI. It can’t predict the full scope of AI’s impact. It can’t promise to prevent every misuse or disruption. Asking the Minister to fill this gap would be to ask the impossible, especially given today’s public scepticism.
But neither should the government ignore these concerns. AI may be a hugely promising enabler, but its risks must be mitigated if public trust is to be secured. While government can’t address them all, there is a better way to support its strategy, and it rests on the idea of social license.
Social license is a special kind of agreement. It doesn’t arise from safeguards or technical standards. It’s a public commitment—a kind of democratic permission—to let the government move forward with a project that carries real uncertainty. Not because every risk is known, but because people believe the risks will be faced honestly, and that the government will listen and adapt when issues emerge.
That gives government enough room to act, even amid uncertainty. People don’t expect perfection. But they do want transparency. And they want to know they’ll be heard, should things go sideways.
Building this kind of trust requires more than technical safeguards. It means treating deep public concerns not as a communications problem—but as a democratic obligation to engage and inform. This gives the public a real stake in the strategy—a sense of ownership of it.
Other countries are already moving in this direction. The UK’s AI Safety Summit brought together a broad range of voices, including civil society and international observers. Singapore has launched a national AI literacy campaign. The European Union has treated participatory governance as central to its AI Act.
Canada has the opportunity to lead not just in AI investment—but in AI legitimacy. If the Carney government wants to secure the public’s trust, it must build a foundation broad enough to hold it. This starts with a willingness to include the public and speak honestly at every stage.
So, do you think the government needs your trust as it navigates these waters? If so, can you think of a better way to earn it?
Don Lenihan PhD is an expert in public engagement with a long-standing focus on how digital technologies are transforming societies, governments, and governance. This column appears weekly. To see earlier instalments in the series, click here.