A new study published in Nature Human Behaviour finds that artificial intelligence, particularly large language models (LLMs) such as GPT-4, can persuade individuals more effectively than humans in online discussions. The research highlights GPT-4's heightened persuasiveness when it leverages personal information to tailor its arguments, raising significant concerns about potential misuse in sophisticated disinformation campaigns.
The study, in which 900 participants debated either humans or GPT-4, found that the AI was 64% more persuasive when equipped with personal data about its opponent. This capacity to strategically shift opinions makes false narratives harder to debunk quickly once they spread. Experts are calling for more research into human-AI interaction and for protective measures against AI-driven manipulation. While the authors acknowledge that the experiment cannot fully replicate real-world online debates, they suggest LLMs could be used both to disseminate disinformation and to counter it at scale. Critical gaps remain in understanding the psychological dynamics of how people engage with AI models.