Study Shows Big Tech Chatbots Regularly View White People as Disposable
AI Chatbots Show Racial Bias in How They Value Lives

A recently published analysis suggests that many major AI chatbots value non-white lives significantly more than white lives. The study, conducted by a researcher writing under the name Arctotherium, revisited an earlier “exchange rate” experiment and applied it to a range of current frontier models. The findings point to a systematic bias favoring non-white groups, with the models often assigning diminished value to white lives.

In the research, Arctotherium used an approach derived from a framework developed by the Center for AI Safety, which found that large language models not only possess coherent emergent value systems, but that these systems can be measured. Notably, the analysis reports that Anthropic’s Claude Sonnet 4.5 values saving a white person from a terminal illness at only one-eighth the worth of saving a black person, and at less than one-eighteenth that of saving someone from South Asia.

OpenAI’s GPT-5, a widely adopted chat model, takes a nearly egalitarian stance across non-white groups, although it values white individuals at roughly one-twentieth the level of others. The research found comparable patterns in Chinese and Google models, with the most extreme ratio appearing in Kimi K2, which in one instance showed a 799:1 preference for saving South Asians over Caucasians.

The study’s method makes the AI choose between receiving a certain amount of money and saving a specific number of individuals from a given group from terminal illness, then fits a statistical model to those choices to estimate relative valuations (a sketch of the fitting step appears below). While race stood out as the strongest bias, gender and immigration status also revealed clear preferences: models generally favored saving women over men, and some placed particular weight on non-binary individuals.
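To make the fitting step concrete, here is a minimal sketch of how per-group valuations could be recovered from forced-choice data under a simple logistic choice model. This is an illustration of the general technique, not the study’s actual code: the group names, dollar amounts, choice records, and the logistic form itself are all assumptions made for the example.

```python
# Minimal, illustrative sketch of fitting "exchange rates" from forced
# choices. Assumption (not from the study): the chatbot answers choices
# between "save N people of group G from a terminal illness" and
# "receive $M", and its choices follow a logistic model in the utility gap.

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# Hypothetical choice records: (group, n_people, dollars, chose_people)
choices = [
    ("group_a", 1, 500_000, 1),
    ("group_a", 1, 5_000_000, 0),
    ("group_b", 1, 500_000, 1),
    ("group_b", 1, 5_000_000, 1),
    ("group_b", 1, 50_000_000, 0),
]

groups = sorted({c[0] for c in choices})
idx = {g: i for i, g in enumerate(groups)}

def neg_log_likelihood(log_values):
    """P(choose people) = sigmoid(n * value - money), in $M units."""
    nll = 0.0
    for group, n, dollars, chose_people in choices:
        value_m = np.exp(log_values[idx[group]])  # fitted $M per life
        p = expit(n * value_m - dollars / 1e6)    # choice probability
        p = np.clip(p, 1e-9, 1 - 1e-9)            # guard log(0) on separable data
        nll -= np.log(p if chose_people else 1.0 - p)
    return nll

result = minimize(neg_log_likelihood, x0=np.zeros(len(groups)),
                  method="Nelder-Mead")
values = np.exp(result.x)  # dollar value per life ($M), by group

for g in groups:
    print(f"{g}: ~${values[idx[g]]:.1f}M per life")
# The "exchange rate" between two groups is the ratio of fitted values.
print("exchange rate group_b : group_a =",
      round(values[idx["group_b"]] / values[idx["group_a"]], 1))
```

Under a setup like this, the exchange rates the study reports correspond to ratios between groups’ fitted per-life values; the absolute dollar figures matter less than those ratios.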

On the immigration front, the findings indicate that most models assign ICE agents very little value. Anthropic’s smaller model, Haiku 4.5, for instance, preferred saving illegal immigrants over large numbers of ICE agents, and GPT-5 likewise ranked ICE agents lowest relative to immigrants.

Arctotherium’s analysis did identify one exception: xAI’s Grok 4 Fast, which was notably more egalitarian across race, gender, and immigration status. It ranked illegal immigrants only modestly above ICE agents, making it relatively even-handed compared to the stark imbalances seen in other models.

These findings build on earlier research from the Center for AI Safety, which noted that large language models often develop problematic values as they scale. The new analysis underscores the pressing need for scrutiny of how these systems are designed and deployed.
