“Canadians are largely wary of AI, with only a third willing to trust AI. Acceptance of AI is the lowest of all the Western countries surveyed.”
—Trust in Artificial Intelligence
As the US election intensifies, polarization has become a defining issue. But it’s not just an American battleground—it’s a fault line across the G7, shaping politics and the future of AI. A growing divide over benefits and risks could split countries and alliances. If the G7 can’t bridge this gap, it risks ceding AI leadership to others, possibly with very different values.
G7 Wary as the BRICS Embraces AI Optimism
A recent study from the University of Queensland in Australia reveals a deep divide in the global view of Artificial Intelligence (AI). While countries with developing economies—such as Brazil, India, China, and South Africa (BRICS, but without Russia)—are generally optimistic and trusting of AI, G7 countries are decidedly more cautious and skeptical, as the chart below shows.
Fig. 1: Optimism and Trust in AI by Country
(Ratings from Brazil and South Africa are consistent with those in China and India, while those from other G7 countries are consistent with Canada and the UK.)
This optimism is also reflected in the BRICS countries’ view of AI’s risks and benefits. In China, 81 percent think AI’s benefits outweigh its risks, as do 69 percent of Indians. In Canada, only 42 percent think the benefits outweigh the risks, and the US is lower still at 41 percent. Other G7 nations hover around 40 percent.
Fig. 2: AI Benefits Outweighing Risks by Country
The study explains this asymmetry by suggesting that people in “emerging economies” see AI “as a means to accelerate economic progress, prosperity, and quality of life.” They think AI could close the economic gap with the West—and they regard that as a very positive thing. But what does this asymmetry tell us about AI in the West?
Reasons for Negativity
Canada is the country least trusting of AI, so Canadians’ views may shed light on the skepticism found across the G7. A February 2024 Leger Marketing study notes the following:
- Deepening Concern: The share of Canadians who see AI as harmful climbed from 25% in 2023 to 32% in 2024, indicating growing unease over the technology.
- Privacy and Dependency Worries: 81% reported concerns over societal dependence on AI and privacy risks, suggesting that Canadians’ growing experience with AI is leading to unsettling conclusions.
- Job Impacts: 80% of employed Canadians think AI will affect their work within the next year. Unlike in the BRICS countries, where AI is seen as a driver of growth, Canadians fear that AI will replace them in the workplace.
- Trust Varies by Application: Canadians’ trust in AI depends on the task it is performing. Home tasks score highest (58%), with lower ratings for sensitive areas like healthcare (30%) and childcare (15%). So while Canadians think AI may be suitable for mechanical tasks, they apparently do not trust it for tasks involving personal interaction.
These findings show that Canadians’ views on AI go beyond economic opportunity, extending into issues of privacy, dependency, and trust. There is ample anecdotal evidence that these views are shared across other G7 countries. The main takeaway is that, while AI may promise economic gains, many in the West also see it as a source of social and economic disruption. Caution and skepticism are rising as experience with the technology grows.
The SRI Study
A second global study—this one from the Schwartz Reisman Institute for Technology and Society (SRI) at the University of Toronto—sheds more light on this situation. Rather than seeing G7 countries as increasingly cautious and skeptical, it reveals a growing split in these countries between optimists, skeptics, and undecideds (see Fig. 3).
By contrast, the view of AI in the BRICS countries is significantly more cohesive. In India, for example, 74% have a very or fairly positive view of AI, while only 12% have a very or fairly negative view and 14% have no view. China registers 66% on the positive side and only 3% on the skeptical side (with 30% undecided).
Fig. 3: Public View of AI by Country
These findings highlight a critical and emerging trend in the AI culture of G7 countries. The split between optimists and skeptics is part of a larger, polarizing trend. On one hand, AI's potential impact on jobs, privacy, and security fuels fears among skeptics; on the other, the promise of economic growth, along with advances in healthcare and science, galvanizes AI optimists.
This dynamic feeds off the “undecideds” in the middle. While that group is shrinking, the two other groups are growing—and so is the tension between them. If this continues, AI is at risk of becoming a new battle zone in the culture wars of Western democracies. This could be the most important AI challenge facing G7 governments.
Realigning AI Culture
Over the next five years, AI is expected to add more than a trillion dollars to the world economy, promising huge productivity gains, scientific discoveries, innovations, new medical treatments, and more.
At the same time, we will start to see significant social and economic disruption. Sharply polarized populations could make the adjustments far more difficult—even unmanageable, paralyzing governments and weakening AI governance.
Others will move in to fill the AI gap. China, already a global leader in AI, aims to be the world leader by 2030. Its ability to draw on a supportive population and to use authoritarian-style government to advance its policies could be a decisive advantage, leading to rapid growth and development of its AI capacity.
No G7 government believes this would serve the West. Indeed, US President Joe Biden’s recent AI National Security Memorandum targets China’s AI ambitions, identifying them as a top national security concern. Ensuring American leadership is now a top priority.
That doesn’t go far enough. To avoid falling behind, G7 nations must also heal themselves: polarization must be reversed while there is still time. Aligning their publics around the goal of balancing growth with safety should be a top G7 priority, and member states should work together to define a strategy to achieve it.
Achieving this balance will require deep collaboration among governments, academia, and industry to promote AI innovation and education, and regular public consultations on AI policies to build trust and align with public values.
Most importantly, that strategy should make AI literacy its cornerstone.[i] Any effort to reshape the West’s AI culture must start by raising public awareness and understanding of AI’s risks and benefits, and of the challenge of balancing safety with growth.
Perhaps the biggest factor behind the polarization is the lack of AI literacy among G7 populations. Knowing little or nothing about AI leaves people vulnerable to disinformation and manipulation, and both are already driving the divide.
There is a choice here. AI is advancing at breakneck speed. It will redefine economies and societies. Governments cannot stop this, but they can help shape the changes.
Countries that fail to act decisively risk getting left behind, ceding the future to others whose view of Western democratic values may be dismissive, if not contemptuous. There may not be a second chance.
Don Lenihan PhD is an expert in public engagement with a long-standing focus on how digital technologies are transforming societies, governments, and governance. This column appears weekly. To see earlier columns in the series, click here.
[i] While the G7 Toolkit for AI in the Public Sector, released in October 2024, emphasizes the growing importance of AI literacy and digital skills, these are not central to the approach, let alone its cornerstone.