AI and the Battle for Truth: Between the Peril and the Promise

Publisher’s Note: This column is the fourth in a series by Don Lenihan exploring the issues around the use of AI, including the social, economic and governance implications. To see earlier instalments in the series, click here.

The battle against “fake news” has shaken the foundations of democracy, but with artificial intelligence (AI) on the rise, this may only be the beginning. Disinformation threatens to spread like a virus. Is there a way to harness this technology without devastating democracy?

A just-released federal government study, Disruptions on the Horizon, is not encouraging. It identifies 35 plausible future threats to Canada and the world. Topping the list is the inability to distinguish truth from falsehood, with rogue AI not far behind.

But as one AI expert notes, “We’re always very careful [when] we talk about AI: It’s the promise and the peril, they come hand in hand.” So, what can be done to realize the promise and avoid the peril? Let’s dig a little deeper.

A Deluge of Disinformation?

Deepfakes are at the forefront of AI's potential for disinformation. For example, OpenAI’s Sora can create stunning fake videos from a few lines of text, while Voice Engine can produce flawless voice imitations from just seconds of recording. Moreover, AI can generate convincing speeches, simulate human interactions, and create fake social media accounts.

With key elections looming in the US and Canada, the peril is that every internet troll could soon have their own news network, spreading false information, manipulating public opinion, and eroding trust in democratic institutions. Kristel Van der Elst, who led the Disruptions project, warns of a future where the entire information ecosystem is “flooded” with high-quality fakes.

Cara Hunter, a Northern Irish politician, has lived the nightmare. Just weeks before the 2022 Northern Ireland election, she was targeted with deepfake porn videos depicting her in explicit scenes. “Years of building trust in my community,” she says, “and in one fell swoop, it meant sweet FA.” In the end, Hunter won her seat by a narrow margin, but her public image was badly tarnished and her personal life devastated.

Disinformation is hardly new, and institutions of free speech, particularly journalism, are the main defence against it. A free press ensures that facts are tested against public standards of truth. However, the scale and sophistication of AI-driven disinformation could be massive. As the Hunter case suggests, conventional journalism would likely be overwhelmed, allowing fake news to triumph.

AI as a Superintelligent Fact-Checker

Such a scenario would be devastating for democracy. To combat this deluge of disinformation, democracies need a fast, accessible, and reliable way to evaluate evidence, check facts, and authenticate information. The obvious—and perhaps the only—tool on the horizon with this kind of potential is AI.

AI can analyze huge data sets quickly and effectively, and it can navigate diverse types of knowledge claims, from scientific theories to historical analysis and social science modelling. It is the only tool smart enough and fast enough to authenticate massive flows of information, check difficult facts, and counter false narratives. We’ve already had a preview with the rise of smartphones, which put Google at our fingertips for instant fact-checking of casual conversations and media reports.

While devising a reliable AI test for deepfakes has been a cat-and-mouse game, with developers constantly improving deepfake technology to evade detection and researchers advancing detection methods in response, there is reason to be hopeful. The point here is that only AI can provide the superintelligent counterweight we need. If AI is the peril, it is also the promise. So, how do we build this counterweight?

AI Pluralism

The work is already underway. Tech firms are investing billions of dollars in “ethical AI,” designing algorithms that align with human values to ensure AI serves us well. However, the question “Whose values?” is being asked more and more often, as debate over the ethical baseline intensifies.

Take Elon Musk: his chatbot Grok is built on the ethical view that AI should be “maximally truth-seeking.” Grok is designed to follow reason and evidence wherever they lead, even when the results are controversial or disturbing. In contrast, OpenAI’s ChatGPT tends to avoid sensitive or controversial topics to prevent harm. For example, if you ask it hard questions about Donald Trump and truth, or say anything that sounds racist or sexist, it shuts the conversation down. In effect, ChatGPT’s ethics lean towards a more liberal or even “woke” perspective, while Grok has a libertarian bent and is keen to engage on controversial issues.

Other models, like Google’s Gemini and Anthropic’s Claude, take different approaches. Gemini focuses on providing precise, specialized knowledge—like advanced scientific data or medical insights—for accuracy in specific fields. Meanwhile, Claude prioritizes ethical guidelines and transparency, considering the moral impact of its responses.

So, which is the right model?

There is another way to view this debate: Maybe this ideological diversity is a strength rather than a weakness. It mirrors the pluralism that underpins democratic societies, where diverse viewpoints coexist and contribute to a richer public debate. In this view, AI doesn't need to be perfectly impartial to be reliable and effective—even the best journalists and judges have biases—it only needs to be balanced. Having multiple AI systems with diverse perspectives creates balance, much like diverse political parties do in a pluralist democracy.

So, rather than trying to find a single ethical framework for all the systems, maybe the best strategy is to view them as parts of an emerging information ecosystem that could enhance and inform public debate by offering a variety of tools and approaches. Some could provide objective, fact-based information, serving as reliable fact-checkers and authenticators, while others might help users explore various philosophical or ethical perspectives around questions of fairness and justice. This diversity would enrich and strengthen public debate. Systems that failed to contribute usefully would be weeded out.

Looking Ahead…

These are early days for ethical AI, but the work under way is vital for our future. The idea of AI pluralism makes an important contribution: it teaches us that an AI doesn’t have to be perfectly impartial to be trusted, loosening a standard that may well be unattainable for designers and regulators.

Still, this is only one of the many challenges raised by the peril of disinformation. We’ll return to this topic in a future column. For the moment, the takeaway is that we need AI’s superintelligence to solve the challenges AI is creating.

AI is both the peril and the promise.

Don Lenihan PhD is an expert in public engagement who has a long-standing involvement in how digital technologies are transforming societies, governments, and governance. This column appears weekly.