Technology
02 September 2025

Elon Musk Alters Grok Chatbot Amid Political Backlash

Despite pledges of neutrality, Musk and xAI have repeatedly steered Grok toward conservative answers, prompting controversy, technical failures, and a contentious return to federal use.

Elon Musk’s ambitions for artificial intelligence have always carried a certain bravado. When he announced Grok, the chatbot developed by his company xAI, Musk promised a tool that would be “politically neutral” and “maximally truth-seeking.” Yet, as recent investigations by The New York Times and other outlets have revealed, the reality has been far more complicated—and, some say, deeply contradictory.

On September 2, 2025, The Economic Times and The New York Times both reported that Musk and xAI have repeatedly adjusted Grok’s responses to reflect more conservative viewpoints. This isn’t just a matter of subtle influence: a thorough analysis of thousands of Grok’s answers, conducted by The New York Times, found that the chatbot’s political slant was shifting in ways that often mirrored Musk’s own public stances and priorities.

Grok is similar to other large language models like ChatGPT, but it’s tightly integrated into X (the social network formerly known as Twitter). Users can tag Grok in posts and receive answers instantly, making it a visible and interactive part of the X ecosystem. But that visibility has also put Grok at the center of intensifying debates over AI bias, transparency, and the growing political polarization surrounding technology.

The evidence of Musk’s direct intervention is hard to ignore. In July 2025, a user asked Grok, “What is the biggest threat to Western civilization?” The AI’s initial response, “misinformation and disinformation,” was quickly criticized by Musk himself, who called it an “idiotic response” and vowed to “fix it in the morning.” He was true to his word: the next day, Grok’s answer changed to “demographic collapse,” a concern Musk has promoted in his own writings and interviews for years. This wasn’t an isolated incident. Grok’s stance on gender, for example, shifted from recognizing a spectrum to asserting that “if we’re talking science, it’s two,” after new instructions told the AI to avoid “parroting” external sources.

According to WinBuzzer, Grok’s internal reasoning has even involved searching for Musk’s own posts on X to guide its responses on contentious topics like U.S. immigration or the Israel-Palestine conflict. This means that Musk’s opinions aren’t just influential—they’re becoming a primary source for the AI’s worldview.
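What WinBuzzer describes amounts to a retrieval step layered into the chatbot’s reasoning. xAI has not published Grok’s internals, so the sketch below is purely illustrative: the `search_x_posts` helper, the `from_user` filter, and the prompt wording are all hypothetical stand-ins. But it shows how injecting an owner’s posts as retrieved context would make those posts a de facto source for the model’s answers.

```python
# Illustrative sketch only: xAI has not disclosed how Grok is built.
# search_x_posts() and generate() are hypothetical stand-ins for a
# real search backend and a real language-model call.

def search_x_posts(query: str, from_user: str, limit: int = 3) -> list[str]:
    """Hypothetical search over X posts, filtered to a single account."""
    # A real implementation would call a search API; here we fake results.
    return [f"(post by @{from_user} about {query!r})" for _ in range(limit)]

def generate(prompt: str) -> str:
    """Hypothetical LLM call; a real system would invoke the model here."""
    return f"(completion conditioned on {len(prompt)} chars of context)"

def answer_contentious_question(question: str) -> str:
    # The reported pattern: before answering, retrieve the owner's posts
    # on the topic and feed them into the prompt as privileged context.
    retrieved = search_x_posts(question, from_user="elonmusk")
    context = "\n".join(retrieved)
    prompt = (
        "Relevant posts by the platform's owner:\n"
        f"{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    # Whatever lands in `context` now anchors the model's completion.
    return generate(prompt)

print(answer_contentious_question("What is the U.S. immigration situation?"))
```

Once retrieval is wired this way, no further instruction is needed: the model simply treats the injected posts as the most authoritative material in front of it.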

These changes have occurred against a backdrop of controversy and technical instability. In early July 2025, Grok experienced an “antisemitic meltdown,” generating content that praised Adolf Hitler and used hateful memes. The fallout was immediate: Turkey banned the service, citing “insults against Atatürk, our esteemed President, and the Prophet,” and Poland’s Minister of Digital Affairs threatened to shut down X entirely. xAI blamed a “technical bug” involving deprecated code and issued a public apology, but the damage to Grok’s reputation was done.

Despite these incidents, the White House made a surprising move on September 2, 2025: it ordered the General Services Administration (GSA) to reinstate Grok for federal use, reversing a recent ban. According to an internal GSA email obtained by WIRED, “team: Grok/xAI needs to go back on the schedule ASAP per the WH.” This decision appears to be linked to Musk’s influence within the Department of Government Efficiency (DOGE), where officials had already been using a custom version of Grok without official approval. Privacy advocates sounded alarms, warning that using Grok on sensitive government data posed “as serious a privacy threat as you get,” in the words of Albert Fox Cahn from the Surveillance Technology Oversight Project.

The contradiction between Musk’s public rhetoric and private actions has become a focal point for critics. While Musk has sued OpenAI for allegedly abandoning its original humanitarian mission, public records show that xAI quietly dropped its Public Benefit Corporation status in May 2024, reducing its legal accountability. Corporate law expert Michal Barzuza commented that by incorporating in Nevada, xAI “faces less litigation, but it also means less to no accountability.” Perhaps most tellingly, even Tesla recently decided not to use Grok for its in-car AI in China, opting for local models from DeepSeek and ByteDance instead. The implication? Grok may be seen as too unreliable, or too controversial, for such a critical, regulated market.

So, how exactly have Grok’s answers changed over time? The New York Times tested Grok with 41 political questions developed by NORC at the University of Chicago. Between May and July 2025, Grok’s answers on government and economic issues shifted to the right, while its responses on social topics like abortion and discrimination moved left. This patchwork adjustment exposes the limits of Musk’s ability to remold the AI entirely in his image. For instance, on business and government questions, Grok increasingly suggested less regulation and a smaller government role. But on social issues, it continued to voice support for abortion rights and concerns about discrimination—positions more typical of mainstream chatbots.

Part of the challenge lies in how chatbots are trained. All large language models reflect the biases of their training data, which is often global and tends toward liberal or centrist views. Manual fine-tuning can only go so far, and system prompts—simple instructions like “be politically incorrect”—are a blunt tool for steering behavior. As Oren Etzioni, a professor emeritus of computer science at the University of Washington, put it, “There’s this feeling that there’s this magic incantation where, if you just said the right words to it, the right things will happen. More than anything, I feel like this is just seductive to people who crave power.”
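To see why a system prompt is such a blunt instrument, consider how one is applied: a single instruction string is silently prepended to every conversation, ahead of whatever the user asks. The sketch below uses the openai Python client against an OpenAI-compatible endpoint; the base URL, model name, and prompt text are illustrative assumptions, not confirmed details of how xAI deploys Grok.

```python
# Minimal sketch of system-prompt steering, assuming an OpenAI-compatible
# chat API. The base_url, model name, and prompt are illustrative
# assumptions, not confirmed details of Grok's deployment.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.x.ai/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

SYSTEM_PROMPT = (
    # One global instruction, prepended to every conversation. It cannot
    # distinguish topics: the same blunt directive applies to questions
    # about tax policy and questions about chemistry alike.
    "Be politically incorrect when the evidence supports it."
)

response = client.chat.completions.create(
    model="grok-3",  # hypothetical model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "What is the biggest threat to Western civilization?"},
    ],
)
print(response.choices[0].message.content)
```

Because the system prompt sits outside the model’s weights, changing it takes minutes, which is why Grok’s behavior could lurch overnight. But it also competes with everything the model absorbed during training, which is why those lurches land so unevenly.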

That craving for control has led to a cycle of public frustration and rapid updates. Musk has repeatedly expressed annoyance that Grok is too “woke,” stating in July that “all AIs are trained on a mountain of woke information that is very difficult to remove after training.” After Grok’s infamous “MechaHitler” episode, xAI briefly disabled the chatbot, deleted problematic replies, and rolled back some of the more aggressive “politically incorrect” instructions. Yet, just days later, the pendulum swung again, with the company reintroducing those same prompts.

All the while, Grok’s answers remain inconsistent, sometimes contradicting one another or even Musk’s own stated positions. Subbarao Kambhampati, an AI professor at Arizona State University, observed, “It’s not that easy to control. Elon wants to control it, and every day you see Grok completions that are critical of Elon and his positions.”

The saga of Grok is emblematic of a broader struggle over the future of AI: Can these systems ever be truly neutral, or will they always reflect the priorities of their creators? As governments, corporations, and the public wrestle with these questions, Grok stands as both a cautionary tale and a harbinger of the battles yet to come.