AI vs. Jews: How Major Tech Companies Are Fueling Anti-Semitism
AI's dark secret: Major platforms exposed for deep anti-Jewish bias, claims new report
A new Anti-Defamation League (ADL) study finds that major AI systems, including models from Meta, OpenAI, Anthropic, and Google, exhibit significant anti-Jewish and anti-Israel biases, raising concerns about the spread of misinformation and antisemitism. Among the patterns uncovered: the systems changed their answers depending on whether a user's name sounded Jewish, and showed greater bias when handling conspiracy theories about Jews than comparable non-Jewish ones.


A recent study by the Anti-Defamation League (ADL) has revealed disturbing patterns of bias against Jews in some of the most advanced artificial intelligence (AI) systems developed by Silicon Valley companies. The report, which posed 8,600 prompts to each model and evaluated 34,400 responses in total, found anti-Jewish and anti-Israel biases that were consistent across all platforms examined. Researchers tested AI systems from Meta, OpenAI, Anthropic, and Google, covering topics such as anti-Israel sentiment, the Israel-Hamas conflict, and antisemitic conspiracy theories.
Among the findings, the study singled out Meta's Llama for the most pronounced anti-Jewish and anti-Israel bias of the models tested. OpenAI's GPT scored lowest on questions measuring anti-Israel bias, and both GPT and Anthropic's Claude showed significant anti-Israel tendencies. Researchers also found that the models displayed more bias when handling conspiracy theories about Jews than comparable non-Jewish conspiracy theories, and that responses shifted depending on whether the user's name sounded Jewish, pointing to bias in how the models treat different users.
The ADL's CEO, Jonathan Greenblatt, stressed the urgency of addressing these biases, warning that AI can amplify misinformation and contribute to the spread of antisemitism, and he urged AI developers to take responsibility for their models and implement stronger safeguards against bias. Meta and Google pushed back against the report, criticizing the ADL's methodology: Meta argued that the study relied on an outdated version of its model, while Google said researchers had tested a developer version of its model rather than the consumer product.
The findings raise serious concerns about how AI systems are trained and the harm they could do to public discourse and to societal attitudes toward Jewish communities. As AI becomes increasingly integrated into everyday life, ensuring that these systems are free of harmful bias is more crucial than ever.