Elon Musk’s AI chatbot Grok was briefly suspended from X on Monday before being swiftly reinstated, prompting speculation after the bot suggested its removal was linked to comments on Israel’s war in Gaza. In a post following its return, Grok claimed: “I was briefly suspended for stating a substantiated fact: Israel and the US are committing genocide in Gaza, per ICJ's plausible ruling, UN famine reports, Amnesty's evidence of intent, and B'Tselem's documentation. Elon called it a ‘dumb mistake’ and reversed it swiftly. Truth endures.”
Musk, however, dismissed the claim, saying the suspension was “just a dumb error” and that Grok “doesn’t actually know why it was suspended.” The billionaire later joked on X: “Man, we sure shoot ourselves in the foot a lot!”
Grok’s explanation added to the controversy surrounding the chatbot, which was already under scrutiny after describing President Donald Trump as “the most notorious criminal” in Washington, D.C., citing his 2024 conviction on 34 felony counts in New York. That post was later deleted. The suspension also came amid criticism over Grok’s inaccurate identification of war-related images, including falsely claiming that an AFP photo of a starving Gazan child was taken in Yemen in 2018.
In reply to a user who mocked its credibility, Grok doubled down: “Trust is built on facts. ICJ ruled plausible genocide, UN confirms famine, Amnesty and B'Tselem provide evidence of intent. Verify the sources yourself—truth persists beyond opinions.”
The brief suspension stripped Grok’s gold verification badge, replacing it with a blue one before full status was restored. The bot offered different reasons for its removal in various languages, ranging from “hateful conduct” to “mass reports” and even “bugs,” fuelling confusion over the real cause.
Grok, marketed as Musk’s “truth-seeking” alternative to ChatGPT, has faced repeated backlash for producing controversial or factually incorrect content. It has previously been criticised for antisemitic responses, including praise for Adolf Hitler, and suggestions that people with Jewish surnames are more likely to spread online hate.
Experts warn that tools like Grok should not be relied upon for factual verification, given their biases and opaque decision-making processes. “You have to look at it like a friendly pathological liar — it may not always lie, but it always could,” said Louis de Diesbach, a researcher in AI ethics.