AI Chatbots Susceptible to Chinese Propaganda, Report Warns


A recent study by the American Security Project (ASP) has found that leading AI chatbots are vulnerable to disseminating Chinese Communist Party (CCP) propaganda. The report highlights that chatbots from major tech companies such as Google, Microsoft, and OpenAI occasionally generate responses that align with CCP narratives and censorship guidelines, a consequence of CCP disinformation present in the training data used to develop these large language models (LLMs).

The ASP’s analysis focused on ChatGPT, Copilot, Gemini, DeepSeek’s R1, and Grok, assessing their responses to prompts in both English and Simplified Chinese. The study found instances of CCP-aligned censorship and bias across all tested platforms. Microsoft’s Copilot was specifically noted for presenting CCP propaganda as factual. The issue underscores the challenge of ensuring the integrity of massive datasets used to train LLMs, especially given the CCP’s active efforts to manipulate public opinion online.

The language of the prompts significantly influenced the chatbots’ responses: prompts in Simplified Chinese were more likely to produce answers aligning with the CCP’s stance. The report warns of potentially “catastrophic consequences” if these biases go unaddressed, particularly in contexts such as military or political decision-making, and stresses the critical need for reliable, verifiable data in training AI models.