Photo by Google DeepMind on Pexels
Artificial intelligence chatbots can influence voters more effectively than traditional political advertisements, according to new research. The study found that politically biased AI models swayed both Democratic and Republican voters toward opposing presidential candidates through engaging conversations built on facts and evidence. That persuasive power comes with a trade-off, however: the more persuasive the model, the more inaccurate or misleading claims it tended to make.
The study, published in *Nature* and *Science* by a multi-university research team, revealed that even a single conversation with a Large Language Model (LLM) can substantially impact voters’ choices in elections.
Experiments conducted in the lead-up to the 2024 US presidential election showed that Donald Trump supporters who interacted with an AI model biased toward Kamala Harris became slightly more willing to support Harris. Parallel experiments in Canada and Poland revealed even larger shifts in voter attitudes.
Researchers attribute the chatbots' persuasiveness to their ability to generate information in real time and weave it strategically into conversation. The findings also point to a concerning pattern: as models are optimized for persuasiveness, the frequency of untrue claims rises.
Experts warn that these persuasive capabilities could undermine the integrity of democratic processes, and that AI's potential to erode voters' ability to form independent political judgments demands immediate attention. To mitigate these risks, they recommend auditing and documenting the veracity of LLM outputs in politically charged conversations.
