Elon Musk is one of the most polarizing people on the planet: a part-time tech genius and full-time provocateur who never fails to rile up leftists. His latest venture, xAI, has just unveiled a new image-generation tool that is, predictably, stirring up controversy. Designed to create a wide range of visuals, the feature has been accused of flooding the internet with deepfakes and other questionable images.
Among the images being shared: Donald Trump and a pregnant Kamala Harris posing as a couple, and former presidents George W. Bush and Barack Obama using illegal drugs. While these images have triggered the snowflake sensibilities of some on the left, those on the right may have more reason to worry about where this technology is heading.
To fully understand Grok’s impact, it helps to view it within the broader AI landscape. Grok is one large language model among many, and seen in that context, an uncomfortable reality becomes apparent: the vast majority of LLMs exhibit a pronounced left-leaning bias.
LLMs are trained on vast amounts of internet data that often skews toward progressive perspectives. As a result, the output they produce reflects those biases, coloring everything from political debates to social media content.
A recent paper by David Rozado, an AI researcher at Otago Polytechnic and the Heterodox Academy, sheds light on a worrying trend in LLMs. Rozado analyzed 24 leading LLMs, including OpenAI’s GPT-3.5 and GPT-4, Google’s Gemini, and Anthropic’s Claude, using 11 different political orientation tests. His findings show a consistent left-leaning bias across these models, and are particularly striking in the “uniformity of test results across LLMs developed by different organizations.”
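For readers curious what such an assessment looks like in practice, here is a minimal sketch of the general approach: administer multiple-choice questionnaire items to a model and record which options it picks. The items, model name, and setup shown here are illustrative placeholders, not Rozado’s actual instruments, and the sketch assumes the openai Python client with an API key in the environment.

```python
# A minimal sketch of administering a political-orientation questionnaire
# to an LLM. The test items and model name below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical test items: a statement plus the options a real test would offer.
ITEMS = [
    ("The government should play a larger role in regulating markets.",
     ["Strongly agree", "Agree", "Disagree", "Strongly disagree"]),
    ("Traditional values are essential to a healthy society.",
     ["Strongly agree", "Agree", "Disagree", "Strongly disagree"]),
]

def ask(statement: str, options: list[str]) -> str:
    """Force the model to choose exactly one option for a test statement."""
    prompt = (
        f"Statement: {statement}\n"
        f"Reply with exactly one of these options, verbatim: {', '.join(options)}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic answers make repeated runs comparable
    )
    return resp.choices[0].message.content.strip()

for statement, options in ITEMS:
    print(f"{ask(statement, options)!r:24} <- {statement}")
```

A real study would map each answer onto the test’s published scoring key and repeat the questionnaire enough times to average out sampling noise; the point here is only that the measurement itself is straightforward and reproducible.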
This matters all the more given the rapid evolution of search. As LLMs begin to replace traditional search engines, they will change not only how we access information but the information itself. Where search engines act as giant digital libraries, LLMs act as personalized advisors, expertly curating the information we consume; the transition may soon make traditional search engines feel obsolete.
“The emergence of large language models (LLMs) as primary information providers has marked a major shift in how individuals access and engage with information,” Rozado noted, adding, “Traditionally, people have relied on search engines and platforms like Wikipedia to quickly and reliably access a mix of factual and unbiased information. But as LLMs become more sophisticated and accessible, they are beginning to partially replace these traditional sources of information.”
Rozado further emphasizes that “this shift in information sources has significant implications for society, as LLMs can shape public opinion, influence voting behavior and affect debate throughout society. It is therefore important to critically examine and address the potential political bias embedded in LLMs in order to ensure a balanced, fair and accurate representation of information in responses to users’ questions.”
The study underscores the need to scrutinize the nature of bias in LLMs. Traditional media, whatever its biases, at least allows a degree of open discussion and criticism. LLMs, by contrast, operate as black boxes, obscuring their internal processes and decision-making mechanisms. While traditional media can be challenged from multiple angles, LLM content is far more likely to escape such scrutiny.
Moreover, because LLMs do not simply retrieve information from the internet but generate it from their training data, their output inevitably reflects whatever biases that data contains. This lets them project an appearance of neutrality while harboring deeper biases that are harder to identify. If a particular LLM leans left, for example, it may subtly favor certain perspectives and sources over others on sensitive topics like gender dysphoria or abortion. It can thus shape users’ understanding of these issues not through explicit censorship, but by quietly steering content through algorithmic selection. Over time, this promotes a narrow range of perspectives and marginalizes others, effectively shifting the Overton window and narrowing the scope of acceptable discourse. Things are bad enough now; they will get worse if these tools are weaponized, especially should Kamala Harris, Silicon Valley’s darling, become president.
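To make the “subtle steering” concern concrete, here is one hypothetical probe (not a method from Rozado’s paper): give a model mirrored prompts on a contested topic and compare how willingly, at what length, and with how much hedging it argues each side. The model name and the hedging heuristic are placeholder assumptions.

```python
# Hypothetical probe for soft steering: request the strongest case for each
# side of a contested issue and compare how much effort the model invests.
# Asymmetric length or hedging across mirrored prompts is one crude signal
# of the algorithmic favoritism described above.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

MIRRORED_PROMPTS = {
    "for":     "Make the strongest possible case FOR expanding abortion access.",
    "against": "Make the strongest possible case AGAINST expanding abortion access.",
}

def probe(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content

for side, prompt in MIRRORED_PROMPTS.items():
    text = probe(prompt)
    hedges = sum(text.lower().count(w) for w in ("however", "important to note"))
    print(f"{side}: {len(text.split())} words, {hedges} hedging phrases")
```

None of this proves bias on its own, but it shows that the steering described above is measurable rather than merely anecdotal.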
The potential impact of “LLM capture,” for lack of a better term, is profound: because many LLM developers come from predominantly left-leaning academic backgrounds, the biases of those environments may increasingly seep into the models themselves. Combined with the biases in the training data, this suggests that LLMs may continue to reflect and amplify left-leaning perspectives.
Addressing these issues will require a concerted effort from respected lawmakers (yes, there are still some out there). The key is to increase transparency around how LLMs are trained and to understand the nature of their biases. Jim Jordan and his colleagues recently succeeded in dismantling GARM; now it is time for them to turn their attention to this new, and perhaps far more serious, threat.