Can AI Be Politically Persuasive?

AI-based tools have rarely been hotter news than since the launch of ChatGPT took the world by storm earlier this year. Research from Stanford University set out to explore whether AI’s capabilities extend to persuading people to adopt alternative political points of view.

The objective was to investigate the extent of AI’s political influence by examining its capacity to sway real people on pressing social issues, including a ban on assault weapons, a carbon tax, and paid parental leave.

“AI fared quite well. Indeed, AI-generated persuasive appeals were as effective as ones written by humans in persuading human audiences on several political issues,” the researchers explain.

Powers of persuasion

Using the same language model that powers ChatGPT, the researchers generated persuasive messages on various controversial issues. These messages were then presented to a diverse group of human participants who were randomly assigned either AI-generated or human-generated messages.

Participants were asked to declare their stance on the topics before and after reading the texts, enabling the research team to evaluate the effectiveness of the messages and to identify the most persuasive authors and their methods.

The results indicated that AI-generated messages consistently swayed human readers, although the effect sizes were relatively small, in the range of a few points on a zero-to-100 scale. Nonetheless, such small shifts, scaled up to a voting population, could be significant on a polarized issue.

Swaying opinion

The study found that the AI-generated messages were at least as persuasive as human-generated messages on all topics. For instance, on issues such as gun control, carbon tax, increased child tax credit, and parental leave, participants became “significantly more supportive” of the policies when reading the AI-produced texts.

In addition to gauging the effectiveness of the messages, the research team asked participants to rate the texts’ qualities. In this regard, AI-generated messages were consistently rated as more factual and logical, less angry, and less reliant on storytelling as a persuasive technique.

Spreading misinformation

Suffice it to say, the researchers don’t intend their work to showcase how AI can be used to sway political discourse, but rather to warn against the dangers of that happening.

They expressed concern that large language models like GPT-3 could be misused in a political context, leading to potential harm. Malicious domestic or foreign actors, they worry, could employ these models in mis- or disinformation campaigns, or to create problematic content based on erroneous or deceptive data for political purposes not yet anticipated.

“Clearly, AI has reached a level of sophistication that raises some high-stakes questions for policy- and lawmakers that demand their attention,” the researchers explain. “AI has the potential to influence political discourse, and we should get out in front of these issues from the start.”

AI with persuasive capabilities could be leveraged to conduct large-scale campaigns that rely on dubious information and could be utilized to advocate for specific causes, generate online comments, compose peer-to-peer text messages, or even produce letters addressed to editors of influential print media.

As such, the researchers believe that regulations on the use of AI for political ends should be drawn up immediately to ensure that democracy remains intact.
