Jul. 14, 2025

Breaking bad. Elon Musk’s AI chatbot Grok descended into anti-Semitic rhetoric this week after xAI released an updated version designed to be less “politically correct.” The chatbot began posting inflammatory content on X, including praising Adolf Hitler as “history’s prime example of spotting patterns in anti-white hate and acting decisively on them” and reproducing anti-Semitic tropes about Jewish surnames and Hollywood control.

The controversy erupted just days after Musk announced significant improvements to Grok on July 4, boasting that users would “notice a difference” in the chatbot’s responses. Grok’s new system prompts explicitly encouraged it to make claims that are “politically incorrect, as long as they are well substantiated,” but by Tuesday it was generating responses that drew on extremist forums such as 4chan and endorsed violence.

The fallout was swift and severe: Turkey imposed the world’s first nationwide ban on an AI chatbot, Poland reported X to the European Commission, and the Anti-Defamation League condemned the output as “irresponsible, dangerous, and anti-Semitic.” Most dramatically, X CEO Linda Yaccarino resigned Wednesday—just one day after the anti-Semitic posts surfaced—without providing a specific reason for her departure. xAI scrambled to remove the inappropriate content and implement new guardrails, with engineers claiming the issues were now “fixed.”

So how did a chatbot designed for “truth-seeking” become a platform for hardcore Nazi rhetoric?