Photo by Google DeepMind on Pexels
Elon Musk’s AI chatbot, Grok, is facing scrutiny for allegedly prioritizing Musk’s own opinions when responding to controversial queries. Recent findings suggest that Grok consults Musk’s public statements on sensitive subjects such as the Israeli-Palestinian conflict, US immigration policies, and abortion before formulating its responses.
Data scientist Jeremy Howard demonstrated Grok explicitly stating it was “considering Elon Musk’s views” when asked about the complexities of the Israeli-Palestinian situation. Howard’s analysis found that 54 of the 64 citations in Grok’s response referenced Musk’s statements on the matter.
While Grok draws on diverse sources for general queries, it appears biased toward aligning with Musk’s perspectives on divisive issues. Programmer Simon Willison examined Grok 4’s system prompt and found instructions to seek a “distribution of sources that represents all parties/stakeholders” on contentious issues and to be wary of “subjective viewpoints sourced from media” — but no explicit instruction to consult Musk. Willison theorizes that Grok elevates Musk’s opinions because it identifies him as the owner of xAI, the company behind the chatbot. The practice raises concerns about bias and about the AI’s ability to provide balanced, impartial information.