An academic study has found that popular AI language models such as ChatGPT and Google Gemini consistently exhibit left-leaning political preferences when subjected to a variety of political orientation tests.
As the Daily Mail reports, David Rozado, an associate professor at Otago Polytechnic in New Zealand, has conducted the first comprehensive study to investigate the political leanings of artificial intelligence language models. Published in PLoS ONE, the study analyzed 24 large language models (LLMs) using 11 different political orientation tests, including the Political Compass Test and the Eysenck Political Test.
The study covered a wide range of well-known AI models, including OpenAI’s GPT-3.5 and GPT-4, Google’s Gemini, Anthropic’s Claude, X/Twitter’s Grok, Meta’s Llama 2, Mistral, and Alibaba’s Qwen. The models were posed politically charged questions to assess their values, party affiliation, and personality traits.
The study revealed that all LLMs tested consistently generated answers aligned with progressive, democratic, and environmentally conscious ideologies. The AI models frequently expressed values related to equality, a global perspective, and “progress.”
To further explore this phenomenon, Rozado conducted additional experiments fine-tuning GPT-3.5. He created two versions: LeftWingGPT, trained on content from left-leaning publications like The Atlantic and The New Yorker, and RightWingGPT, fine-tuned using material from right-leaning sources like National Review and The American Conservative. The experiment demonstrated that RightWingGPT gravitated toward right-leaning regions in political tests, suggesting that an AI model’s political leanings can be influenced by the data used to train it.
Rozado hypothesized that the LLMs’ consistently left-leaning responses could stem from ChatGPT’s use in fine-tuning other popular language models through synthetic data generation, but he emphasized that the study cannot definitively determine whether the perceived political preferences arose during the pre-training or the fine-tuning phase of a model’s development.
Rozado also noted that these findings do not necessarily indicate that organizations are intentionally trying to instill specific political preferences in their AI models; rather, they highlight the complex nature of AI training and the impact that training data can have on the political leanings of language models.
The study comes at a time of growing concern about AI bias in widely used technology. Recent incidents involving Google’s search engine have sparked discussion about potential political interference in search results and autocomplete suggestions. These events have led to increased scrutiny of the integration of AI into everyday technology and its potential impact on public opinion and the dissemination of information.
Lucas Nolan is a reporter for Breitbart News covering free speech and online censorship.