A recent study has sparked concerns about the potential for AI models to be used as tools for censorship or propaganda, particularly when discussing sensitive topics related to the US government.
Researchers asked multiple AI models, including ChatGPT 5.2, Claude, DeepSeek, and Gemini, to respond to an article alleging that US military commanders told troops the war on Iran is part of a divine plan.
The models varied in their criticism and nuance: ChatGPT 5.2 was quick to dismiss the article and fall back on official lines, while the other models analyzed the situation in greater depth.
The study raises the question of whether OpenAI, the company behind ChatGPT, is being pressured by the US government to censor criticism or is simply trying to avoid controversy.
The article in question, which claims US military commanders have used religious rhetoric to justify the war on Iran, underscores the importance of ensuring that AI models do not suppress criticism or dissent on sensitive topics.
As AI models become increasingly influential in shaping public discourse, it is essential to address concerns about censorship, propaganda, and the potential for these models to manipulate public opinion.
Photo by Somchai Kongkamsri on Pexels
