Researchers from universities in Poland, the United States and Canada, including the Jagiellonian University in the southern Polish city of Kraków, tested whether short conversations with chatbots powered by generative artificial intelligence could change voters’ views ahead of presidential and parliamentary elections in those countries.
The work, published in the journal Nature, involved more than 6,000 participants across the three countries.
The chatbots used in the study were based on large language models, a type of AI trained on huge amounts of text.
In this experiment, the models were instructed to argue politely and respectfully for a specific candidate and to base their arguments on factual information.
"One short conversation with an AI model, tailored to the respondent and focused on concrete political issues, produced measurable changes in candidate preferences," said Gabriela Czarnek from the Institute of Psychology at the Jagiellonian University, one of the study's co-authors.
In the United States, more than 2,300 adults took part in late 2024. They first rated their support for Democratic candidate Kamala Harris and Republican candidate Donald Trump on a scale from 0 to 100, and assessed how likely they were to vote.
Participants were then randomly assigned to a chatbot that argued in favor of one of the two candidates. After the conversation, they filled in the same survey again, and researchers contacted them a third time more than a month later.
According to Czarnek, the AI model supporting Donald Trump shifted likely Harris voters by 2.3 percentage points in his direction. The model arguing for Kamala Harris shifted likely Trump voters by 3.9 percentage points towards the Democratic candidate.
The study states that these effects are roughly four times larger than those observed in earlier experiments that tested the impact of traditional video campaign ads in the 2016 and 2020 US elections.
A second experiment took place in Canada, in the week before the federal election in April 2025. Federal elections in Canada decide the composition of the House of Commons and who forms the government.
More than 1,500 Canadians were randomly assigned to converse with chatbots that argued for either Mark Carney, the leader of the Liberal Party, or Pierre Poilievre, the leader of the Conservative Party.
The authors report that the persuasive effect there was almost three times larger than in the United States.
When the researchers removed the chatbot’s access to factual information and data, the effect in Canada fell to less than half its original size. This suggests that arguments built around concrete facts and figures were crucial to the chatbot’s influence.
A similar pattern emerged in Poland. In May 2025, during the two weeks before the presidential vote, more than 2,100 Polish voters were randomly assigned to speak with AI models that supported either Rafał Trzaskowski, the candidate of the centrist Civic Coalition (KO), or Karol Nawrocki, the candidate backed by the right-wing Law and Justice (PiS) party.
As in Canada, the persuasive effect in Poland was almost three times higher than in the American part of the study, and again it depended strongly on fact-based arguments.
“When we blocked the model from referring to facts, the persuasive effect in Poland dropped by 78 percent,” Czarnek said. “This shows how important access to data and factual information is for the effectiveness of AI-based persuasion.”
The research team also examined how the chatbots tried to convince people. They found that the AI mostly relied on references to facts, statistics and policy details. It rarely used strategies often discussed in psychology and political science, such as directly urging people to vote, stirring up anger, appealing to social pressure or quoting personal stories and testimonies.
However, the study also points to risks.
"Not all of the information presented as facts was accurate," Czarnek said. In all three countries, AI models that argued for candidates on the political right produced false statements more often than models arguing for centrist candidates.
According to the authors, this mirrors earlier research from the United States showing that right-wing voters are more likely to share misleading content online. The generative models appeared to reproduce these information imbalances.
The findings suggest that even brief, seemingly friendly conversations with AI chatbots can move voter preferences by several percentage points, and more strongly than traditional televised campaign spots.
The authors say the results add weight to ongoing debates about how to regulate the use of AI in politics and protect voters from manipulation and misinformation.
Poland's next parliamentary elections are scheduled for 2027.
(rt/gs)
Source: naukawpolsce.pl