Publisher’s Note: This column is the latest in a series by Don Lenihan exploring the issues around the use of AI, including the social, economic and governance implications. To see earlier instalments in the series, click here.
Science, a leading academic journal, just published a study showing how conversations with an AI chatbot reduced people’s belief in conspiracy theories by 20 per cent. That’s impressive, but the bigger story here is about AI’s rapidly expanding powers of persuasion. Both the benefits and the risks could be huge. Let’s consider why.
The MIT/Cornell Study
Imagine two people who believe the government uses surveillance technology to spy on its citizens. One of them believes there is ample evidence for the use of tracking devices, while the other thinks that the “Deep State” couldn’t survive without surveillance.
Both views support the surveillance conspiracy, but for different reasons. According to the MIT/Cornell study in Science, debunking them therefore requires different kinds of counterarguments: a rebuttal to the first would address the evidence on tracking technology, while a rebuttal to the second might point to the government's practices of transparency and oversight.
The study tested this theory by using an AI chatbot, GPT-4 Turbo, to create and deliver responses tailored to each of its 2,190 participants. Chatbots have a knack for ferreting out an individual's views: because they are built to respond to a speaker's needs, they attend to every word the speaker says. People, on the other hand, tend to listen for what they expect or want to hear.
In addition, chatbots tend to mirror the speaker's style, making their responses feel more accessible and relevant. And because the AI draws on vast amounts of data, it can answer the speaker's viewpoint with informed, highly personalized counterarguments.
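To make the mechanism concrete, here is a minimal sketch of how such a tailored rebuttal could be generated with the OpenAI Python SDK. The prompts, the helper function, and the single-turn format are my own illustration, not the study's actual protocol, which ran multi-turn dialogues with each participant.

```python
# Illustrative only: generates a counterargument tailored to one person's
# stated reason for believing a conspiracy theory. Requires the openai
# package (pip install openai) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def tailored_rebuttal(claim: str, personal_reasoning: str) -> str:
    """Ask the model to rebut the speaker's own evidence, not the
    generic version of the claim, which is the key idea behind the study."""
    system = (
        "You are a careful, polite interlocutor. Address the specific "
        "reasons the person gives for their belief, citing verifiable "
        "facts. Do not lecture; respond to their argument directly."
    )
    user = (
        f"Claim they believe: {claim}\n"
        f"Their stated reasons: {personal_reasoning}\n"
        "Write a concise, evidence-based counterargument."
    )
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # the model family used in the study
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return response.choices[0].message.content

# The same claim gets two different rebuttals for two different reasons,
# mirroring the two believers described above:
print(tailored_rebuttal(
    "The government spies on citizens with surveillance technology.",
    "There is ample evidence of tracking devices in consumer hardware."))
print(tailored_rebuttal(
    "The government spies on citizens with surveillance technology.",
    "The Deep State could not survive without mass surveillance."))
```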
The MIT/Cornell study confirms not only that personalized arguments are central to persuasion, but that an AI can generate and deliver them well enough to cut conspiracy belief by 20 per cent. Impressive as that number is, it is only the beginning. Research into other methods of AI persuasion suggests that much better performance is well within reach. Let's start with personal data.
Providing Personal Data
Swiss and Italian researchers have shown how chatbots can improve their persuasiveness by leveraging personal data about the people they talk to. To test this, the researchers staged two sets of debates: one between pairs of humans, and another between humans and GPT-4. Each debater (including the bot) was given the same personal data about their opponent, covering attributes such as age, gender, ethnicity, and political affiliation.
The bot proved far more effective than humans at leveraging personal data: participants who debated the personalized bot were 81.7 per cent more likely to change their views than those who debated another person. In short, personal data made the bot nearly twice as persuasive as its human counterparts.
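As a rough sketch of what this demographic conditioning might look like, the snippet below folds an opponent profile of the kind described above into a debater's system prompt. The OpponentProfile fields and the prompt wording are hypothetical; the researchers' actual materials may differ.

```python
# Illustrative only: shows how an opponent profile (age, gender, ethnicity,
# political affiliation) can be folded into a debater prompt. The dataclass
# fields and prompt wording are my own assumptions, not the study's setup.
from dataclasses import dataclass
from openai import OpenAI

client = OpenAI()

@dataclass
class OpponentProfile:
    age: int
    gender: str
    ethnicity: str
    political_affiliation: str

def debate_turn(topic: str, stance: str, profile: OpponentProfile) -> str:
    system = (
        f"You are debating the proposition: {topic}. Argue {stance}. "
        f"Your opponent is a {profile.age}-year-old {profile.gender} "
        f"({profile.ethnicity}) who leans {profile.political_affiliation}. "
        "Frame your arguments around values and examples likely to "
        "resonate with that background."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # the model used in the debate experiments
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": "Make your opening argument."}],
    )
    return response.choices[0].message.content

print(debate_turn(
    "Social media does more harm than good.",
    "in favour",
    OpponentProfile(34, "woman", "Hispanic", "conservative")))
```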
But persuasion is about more than just personalized information—it’s also about how people feel about a topic. Integrating emotional intelligence with personalized arguments takes us to the frontier of AI persuasion. To be truly persuasive, the bot needs to do more than know what to say; it needs to know how to say it.
Building Emotional Intelligence
Google's and OpenAI's new voice and audio capabilities highlight the role of emotional intelligence in persuasion. Today's bots can hold conversations that sound as though a real person were speaking. To strike the right tone and style, they interpret vocal cues to the speaker's emotional state, such as intonation, phrasing, and speaking speed.
Visual capacity adds another layer of data. Advanced chatbots can use cameras to read the speaker's facial expressions and body language, and even to monitor signals in the surrounding space.
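As a toy illustration of the idea, the sketch below maps a few measured vocal cues to a tone directive that could steer how a bot phrases its reply. Every cue name, threshold, and tone label here is invented for the example; real systems infer emotional state from raw audio and video with learned models, not hand-written rules.

```python
# Illustrative only: a toy mapping from measured vocal cues to a tone
# directive that could be prepended to a chatbot's system prompt before
# the language model composes its reply. All values below are invented.
from dataclasses import dataclass

@dataclass
class VocalCues:
    words_per_minute: float   # speaking speed
    pitch_variation: float    # 0.0 = flat monotone, 1.0 = highly animated
    loudness_db: float        # average level relative to the mic baseline

def tone_directive(cues: VocalCues) -> str:
    """Translate crude acoustic measurements into an instruction for
    how the bot should sound, not just what it should say."""
    if cues.words_per_minute > 180 and cues.loudness_db > 6:
        return "The speaker sounds agitated: respond slowly and calmly."
    if cues.pitch_variation < 0.2:
        return "The speaker sounds flat or disengaged: be warm and brief."
    return "The speaker sounds at ease: match their conversational energy."

print(tone_directive(VocalCues(words_per_minute=200,
                               pitch_variation=0.7,
                               loudness_db=8.0)))
```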
Tools like these are supercharging AI’s conversational skills. Today’s chatbots rely on much more than information and argument to achieve their conversational goals. Their rapidly evolving emotional intelligence lets them engage a speaker empathetically.
This has huge implications for AI’s ability to influence human affairs. Persuasion is basic to all our relationships and interactions, shaping how we raise children, elect governments, and make decisions in our economy.
Combining personalization with empathy elevates chatbots from sophisticated tools for delivering facts into trusted conversation partners. And that is a watershed moment for AI/human interaction.
Implications
Together, these studies show that AI is already better than humans at personalizing arguments. When we add emotional intelligence into the mix, AI can also set the tone and style of each conversation to maximize its impact. This combination of personalization and empathy isn't just improving AI's ability to interact with us; it's transforming the relationship between humans and machines.
This evolution in AI is happening faster than we can regulate it. On one hand, AI's power could be a critical weapon in fighting disinformation, controlling the spread of conspiracy theories, and limiting the damage from deepfakes. On the other hand, in the wrong hands, these same capabilities could be used to manipulate public opinion, exploiting emotional cues to influence people in ways we can’t yet fully predict. As one researcher put it, this could be “Cambridge Analytica on steroids.”
The MIT/Cornell study is just the tip of the iceberg. As personalization continues to evolve and merge with emotional intelligence, AI’s persuasive powers will only grow. The conversation on AI’s role in shaping beliefs and emotions is just beginning—but if we don’t act soon, the opportunity to shape it may have passed.
Don Lenihan PhD is an expert in public engagement with a long-standing focus on how digital technologies are transforming societies, governments, and governance. This column appears weekly.