Disinformation and Polarization in the Age of AI: Are Democracies at a Tipping Point?

The two of us were part of a panel discussion at the recent IIC Conference on how disinformation drives polarization, and how Artificial Intelligence (AI) might affect this trend. The audience found our views engaging, and we thought you might too. We’ve also included some thoughts on the US election.

Here’s the plan: First, Frank introduces what he calls the “Vicious Cycle of Disinformation.” Then Don explains how AI could change the dynamic, for better or worse. We close with recommendations on how governments should prepare for the future.

Frank’s Theory of the Vicious Cycle of Disinformation

[Graphic: The Vicious Cycle of Disinformation]

The graphic above is my (Frank’s) attempt to distill and chart the forces that have produced the dark and divided Canada we see today. The model helps us understand the complexity and sequencing of these forces, which is important if we are to make progress on reversing the cycle of polarization, mistrust and disinformation that now threatens our politics.

At a time when there is little agreement on how to proceed with key national issues, I note that countering polarization enjoys multipartisan support as a national priority. Nor are these forces unique to Canada. As we’ll see, the model does an excellent job of explaining Donald Trump’s clear victory in last week’s US election.

Stage 1: Economic Insecurity: The Cycle of Disinformation starts with economic insecurity and the collapse of shared prosperity. This matters because these problems did not emerge with Donald Trump, Brexit, or the pandemic; those are symptoms, not causes. The cycle was set in motion by an economy that no longer delivers upward intergenerational mobility, home ownership, or a secure retirement, all of which were within reach in the last century.

Stage 2: Cultural Insecurity: Next, economic insecurity expresses itself as cultural insecurity, what we call “ordered” and others call “authoritarian” populism. This features a values backlash and hostility to outgroups. For instance, opposition to the immigration of visible minorities has risen tenfold in recent years and is almost universal among those attracted to ordered populism. Immigration, I note, was a crucial force shaping Trump’s victory.

Stage 3: Mistrust: In the third stage of the cycle, we see the related but independent collapse of institutional trust, spanning government, media, and science. The late-pandemic movement embodied in the Freedom Convoy was a mainstream (not a fringe) expression of these forces.

Stage 4: Disinformation: Perhaps most concerning is the rise of disinformation, which further undermines trust. It is spread both by sophisticated algorithms used as tools of statecraft and by domestic sources such as social media platforms and political groups. Disinformation is now the most potent predictor of partisanship in Canada, and it was a critical force in the US election, where majorities of Trump voters believed that inflation was still rising, that Trump had won the last election, and that Haitians were eating dogs and cats.

In Canada there are similarly massive partisan differences over whether climate change is a hoax, whether governments are intentionally concealing the real numbers of deaths from vaccines, and so on. And despite near-universal support for banning generative AI from politics, its use is already happening, including a late-campaign deepfake showing Trump communing with Martin Luther King.

Stage 5: Polarization: The net result of the first four stages is a fifth: intense polarization on key issues, which is already having extremely corrosive impacts on national unity and the public interest.

Don’s Take on AI and Persuasion

Frank’s Vicious Cycle theory provides a bird’s-eye view of the polarizing trends in Western democracies over the last few decades. This trend, he suggests, is reaching a tipping point—and the U.S. election may well be that point. So, what comes next?

With AI evolving rapidly, its potential to amplify Frank’s cycle—or disrupt it—can’t be overstated. Here’s why.

Frank has already mentioned deepfakes: AI-generated images, videos, or voice clones that look so real we can’t tell them from the genuine article. Deepfakes could further erode trust in information sources like traditional and social media. But when it comes to AI, deepfakes are yesterday’s news. AI’s truly disruptive power lies not in its capacity for fakes, but in its growing power to persuade.

Let’s start with reasoning and argument. Recent studies show chatbots are already 20 percent more effective than humans at using facts and arguments to challenge deeply held beliefs. That’s impressive, but suppose we give the bot a bit of personal information about its subject, say, their age, gender, or education level. The bot then performs twice as well as a human working from the same information.

There’s more. AI bots now interact using human-like sensory skills, picking up on tone, body language, and other cues to gauge a speaker’s emotional state. This allows them to carefully tailor the tone, style, and content of their responses. They’re not just thinking about what to say, but how to say it, adapting their language to our reactions in real time.

With this kind of personalization, bots are becoming adept at persuasion faster than we can test or regulate their capabilities. This raises an important question: How might people use these bots to shape public views?

Bots do as they’re told, so their methods depend entirely on the goals of those who control them. Programmed to be respectful and empathetic, they could be immensely helpful, say, in supporting aging or ill individuals. But if their goal is to sell a defective car or push a conspiracy theory, they’ll use disinformation, deception, and emotional manipulation to get the job done.

Now, imagine millions of bad bots interacting with people worldwide. Frank’s Vicious Cycle of Disinformation becomes Cambridge Analytica on steroids.

But let’s not panic. These tools have the capacity for great good or great harm. The question is: will we shape them to strengthen our society or to divide it?

Conclusion

We are optimistic. With the right approach, Canadians and democracies worldwide can halt the cycle of disinformation, but doing so requires action grounded in the right plan. In our view, such a plan has two basic steps.

  1. AI Literacy: First, Canadians must deepen their understanding of how disinformation works and how to counter it, especially the increasingly sophisticated forms driven by AI and algorithms. While AI raises new risks for social cohesion, it also provides promising tools to combat disinformation and foster unity. Democracies must make AI work for them, rather than against them.
  2. Individual Opportunity: Second, addressing disinformation requires tackling the social and economic conditions that make people vulnerable to manipulation. Governments must address root causes like economic inequality and wealth concentration to ensure that citizens have real opportunities for productive and meaningful lives. Democracy is nothing without individual opportunity.

These steps are ambitious but crucial. Canadians from all walks of life are calling for solutions to the threats posed by disinformation and polarization. Democracies are, it seems, at a tipping point. Can we mobilize the leadership and public will to make this vision a reality?

Don Lenihan, PhD, is an expert in public engagement with a long-standing focus on how digital technologies are transforming societies, governments, and governance. This column appears weekly.

Frank Graves (guest columnist) is the president of EKOS Research and has been studying and publishing on this topic for the past decade. He is also an adjunct professor in the Department of Sociology and Anthropology at Carleton and was awarded an Honorary Doctor of Laws last spring.