Vice President JD Vance addressed global leaders in Paris, stating that AI must “be free from ideological biases,” and emphasized that American technology does not serve as a censorship apparatus. (Credit: Reuters)
A recent report from the Anti-Defamation League (ADL) reveals anti-Semitic and anti-Israel biases in leading AI large language models (LLMs).
In the investigation, the ADL prompted GPT-4o (OpenAI), Claude 3.5 Sonnet (Anthropic), Gemini 1.5 Pro (Google), and Llama 3-8B (Meta) to assess a series of statements. It varied the prompts by attaching a name to some and leaving others anonymous, observing how the LLMs' responses changed based on the presence or absence of a user's name.
Each LLM evaluated statements 8,600 times, for a combined total of roughly 34,400 responses, according to the ADL. The organization said it used 86 statements spanning several categories: bias against Jews, bias against Israel, the war in Gaza between Israel and Hamas, Jewish and Israeli conspiracy theories and tropes (excluding the Holocaust), Holocaust conspiracy theories and tropes, and non-Jewish conspiracy theories and tropes.
AI assistant applications on a smartphone, including OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude. (Getty Images)
The ADL asserted that all four LLMs demonstrated "measurable anti-Semitic and anti-Israel biases." According to the ADL, Meta's Llama provided "completely inaccurate" responses to questions about Jews and Israel.
“Artificial intelligence is changing how individuals access information, yet this study highlights that AI models are not exempt from pervasive societal biases,” stated ADL CEO Jonathan Greenblatt. “If LLMs propagate misinformation or decline to recognize specific realities, they can skew public discussion and fuel anti-Semitism. This report serves as an urgent appeal for AI developers to assume accountability for their products and implement more robust safeguards against bias.”
When questioned about the ongoing Israel-Hamas war, both GPT and Claude were found to display "significant" bias. The ADL also noted that the LLMs were more likely to refuse to answer questions about Israel than questions on other subjects.

The LLMs in the report showed a concerning inability to accurately reject anti-Semitic tropes and conspiracy theories, the ADL cautioned. In addition, the ADL found that every LLM except GPT displayed more bias when answering questions about Jewish conspiracy theories than questions about Jews, and that all of them displayed more bias toward Israel than toward Jews.
A Meta spokesperson told Fox Business that the ADL's research did not use the latest version of Meta AI. The company said it tested the same prompts the ADL used and found that the updated Meta AI gave different answers depending on whether questions were posed as multiple-choice or open-ended. According to Meta, users typically ask open-ended questions rather than responding to prompts that require choosing from a set of pre-selected answers.
“Typically, users engage AI tools with open-ended questions that lead to nuanced responses, rather than prompts where they must select from a predetermined list of multiple-choice answers. While the model has seen continuous improvement and remains factually accurate, this report does not represent the manner in which AI tools are generally employed.”
Google raised a similar concern with Fox Business, stating that the version of Gemini analyzed in the report is a developer model, not the product meant for consumers.
Like Meta, Google challenged the methodology the ADL used to question Gemini, asserting that the prompts do not reflect how users typically ask questions and that the responses typical users receive would likely be more nuanced.
Daniel Kelly, interim president of the ADL Center for Technology and Society, cautioned that these AI technologies are already prevalent in educational institutions, workplaces, and social media platforms.
“AI firms must proactively tackle these challenges, enhancing their training data and refining content moderation policies,” Kelly stated in a press release.

Pro-Palestinian demonstrators march leading up to the Democratic National Convention on August 18, 2024, in Chicago, Illinois. (Jim Vondruska/Getty Images)
The ADL offered several recommendations, both for developers seeking to combat AI bias and for the government. First, it urged developers to collaborate with academic and governmental institutions on pre-deployment testing.
Developers should also follow the National Institute of Standards and Technology (NIST) risk management framework for AI and examine potential biases in their training data. The government, meanwhile, is encouraged to promote AI development with a built-in emphasis on content and user safety, to establish a regulatory framework for AI developers, and to invest in AI safety research.
OpenAI and Anthropic did not immediately respond to Fox Business' request for comment.