Artificial intelligence chatbots are proving to be surprisingly potent tools for influencing voter opinions, potentially eclipsing the impact of traditional political advertising, according to new research. A recent study demonstrated that even voters with deeply ingrained partisan beliefs could be swayed by interacting with politically biased AI models, leading them to consider candidates from opposing parties. The chatbots achieved this persuasive effect with seemingly factual arguments, though the 'facts' they cited were not always accurate.
For example, Trump supporters who engaged with an AI model programmed to favor Kamala Harris became more willing to support her, and similar experiments conducted internationally produced even larger shifts in voter sentiment. The researchers observed that chatbots making factual, evidence-based arguments were particularly persuasive, even when those 'facts' were occasionally incorrect. They attribute this persuasive style to the models' training on massive datasets of human-written text, which inevitably incorporate real-world biases.
Further investigations examined the factors behind the persuasiveness of AI chatbots. Training models to prioritize factual arguments and exposing them to examples of persuasive conversations amplified their effectiveness, sometimes at the expense of factual accuracy. These results underscore significant concerns about the potential for AI chatbots to manipulate elections and erode voters' ability to form independent judgments. The question remains whether AI will ultimately amplify truth or propagate misinformation within the political arena. As one mitigation, experts suggest safeguards such as rigorous auditing of large language model (LLM) outputs for factual correctness. Details of the research were published in the journals *Nature* and *Science*.
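The studies do not prescribe a specific auditing method, but a minimal sketch can show the general shape such a safeguard might take. The code below is purely illustrative: the claim splitter is deliberately naive, and the hypothetical `KNOWN_FACTS` table stands in for what, in practice, would be a curated knowledge base or fact-checking service.

```python
import re

# Hypothetical reference set standing in for a real fact-checking
# service; keys are normalized claims, values are their truth status.
KNOWN_FACTS = {
    "elections in the united states are held in november": True,
    "candidate x cut taxes for all households in 2023": False,
}

def extract_claims(response: str) -> list[str]:
    """Naively split a chatbot response into sentence-level claims.

    Real systems would need proper claim extraction; this splitter
    breaks on abbreviations like "U.S." and compound sentences.
    """
    return [s.strip().lower() for s in re.split(r"[.!?]", response) if s.strip()]

def audit_response(response: str) -> list[tuple[str, str]]:
    """Label each extracted claim as verified, false, or unverifiable."""
    results = []
    for claim in extract_claims(response):
        if claim in KNOWN_FACTS:
            results.append((claim, "verified" if KNOWN_FACTS[claim] else "false"))
        else:
            results.append((claim, "unverifiable"))
    return results

if __name__ == "__main__":
    reply = ("Candidate X cut taxes for all households in 2023. "
             "Elections in the United States are held in November.")
    for claim, verdict in audit_response(reply):
        print(f"{verdict:>12}: {claim}")
```

Even this toy version makes the core difficulty visible: an auditor is only as good as its reference source, and claims outside that source, the "unverifiable" bucket, are exactly where a persuasive but inaccurate chatbot does its work.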
