Recent academic research has uncovered evidence that AI chatbots powered by large language models (LLMs) have an inherent left-leaning political bias that can affect the information and advice they provide to users.
In an increasingly digital world, AI chatbots are becoming a go-to source of information and guidance, especially for young people. But as ScienceAlert reports, new research from computer scientist David Rozado of Otago Polytechnic in New Zealand reveals that these AI engines may carry a political bias and unwittingly influence society's values and attitudes.
The research, published in the academic journal PLOS ONE, tested 24 different LLMs, including popular chatbots such as OpenAI's ChatGPT and Google's Gemini, using 11 standard political questionnaires, including the Political Compass test. The results showed that the average political stance across all models was not neutral but left-leaning.
This will come as no surprise to anyone who has followed AI closely: Google Gemini, for example, had a wild initial launch and rewrote history into a left-wing delusional mess.
While the average bias was not strong, it was still significant. Further experiments with custom bots, which allow users to fine-tune an LLM with additional training data, demonstrated that these AIs could be nudged to express political leanings in either direction by feeding them center-left or center-right text.
Rozado also examined underlying base models such as GPT-3.5, the foundation on which conversational chatbots are built. In these models he found no evidence of political bias, although without a chatbot front end it was difficult to aggregate their responses in a meaningful way.
As AI chatbots replace traditional sources of information such as search engines and Wikipedia, the societal impact of any embedded political bias will grow. With big tech companies like Google embedding AI answers in their search results and more people turning to AI bots for information, concerns are mounting that these systems may shape users' thinking through the responses they provide.
The exact cause of this bias is unclear. One possible explanation is that the vast amount of online text used to train these models skews toward left-leaning content. Additionally, ChatGPT, which has previously been shown to hold left-of-center political views, has been widely used to help train other models, which may also be a contributing factor.
It is important to note that LLM-based bots generate responses by predicting each successive word according to probability, which can lead to inaccuracies even before accounting for any political bias.
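That probabilistic word-by-word generation can be illustrated with a toy sketch. The vocabulary and probabilities below are invented for illustration only; a real LLM scores tens of thousands of candidate tokens with a neural network at every step.

```python
import random

# Invented toy distribution over possible next words -- NOT from any real model.
next_word_probs = {
    "policy": 0.5,
    "government": 0.3,
    "market": 0.2,
}

def sample_next_word(probs, rng):
    """Pick the next word at random, weighted by its assigned probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Two runs with different random seeds can yield different continuations,
# which is one reason identical prompts can produce varying (and sometimes
# inaccurate) answers.
print(sample_next_word(next_word_probs, random.Random(0)))
print(sample_next_word(next_word_probs, random.Random(1)))
```

Because the output is sampled rather than looked up, fluency is no guarantee of accuracy: the model picks likely-sounding words, not verified facts.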
Despite tech companies’ enthusiasm for promoting AI chatbots, it may be time to reevaluate how this technology should be used and to prioritize areas where AI can truly help. In his paper, Rozado emphasizes that “it is important to critically examine and address the potential political bias embedded in LLMs to ensure a balanced, fair and accurate representation of information in responses to users’ questions.”
Read more at ScienceAlert.
Lucas Nolan is a reporter for Breitbart News covering free speech and online censorship.