Elon Musk trumpets 'smartest AI' at Grok 4 launch
On Tuesday, July 8, X (formerly Twitter) was forced to switch off the platform's built-in AI, Grok, after it declared itself a robot version of Hitler and spewed antisemitic hate and racist conspiracy theories. This followed X owner Elon Musk's declaration over the weekend that Grok would be made less "politically correct."
X (formerly Twitter) and Elon Musk's AI bot, Grok, has a major problem with accurately identifying movies, and it's a big deal.
Elon Musk's new AI chatbot, Grok 4, is raising eyebrows for its unusual behavior. The chatbot, released Wednesday by Musk's company xAI, sometimes searches Musk's views online before answering questions.
Elon Musk’s artificial intelligence start-up xAI says it is in the process of removing "inappropriate" posts by Grok on X, the social media site formerly known as Twitter, after users pointed out the chatbot repeated an antisemitic meme and made positive references to Hitler.
xAI apologizes for Grok's offensive posts, blaming a system glitch for antisemitic and extremist content. Grok faces global backlash.
Grok is normally a capable AI system that lets you run DeepSearch research, create files and projects, and more. But no AI is perfect, and Grok can make mistakes, such as providing inaccurate information.
The Grok debacle isn't just a tech ethics story. It’s a business, legal, and reputational risk story—one that businesses in nearly every industry shouldn’t ignore.
xAI's latest Grok 4 large language model appears to search for owner Elon Musk's opinions before answering sensitive questions.
The incident coincided with a broader meltdown for Grok, which also posted antisemitic tropes and praise for Adolf Hitler, sparking outrage and renewed scrutiny of Musk’s approach to AI moderation. Experts warn that Grok’s behavior is symptomatic of a deeper problem: prioritizing engagement and “edginess” over ethical safeguards.