Technology
13 January 2026

Global Governments Crack Down On Elon Musk’s Grok

Nations from the UK to Southeast Asia are banning or investigating Musk’s AI chatbot Grok after a surge in sexually explicit deepfakes sparked outrage and regulatory action.

Elon Musk’s artificial intelligence chatbot Grok is facing a global reckoning as governments, regulators, and tech watchdogs from London to Jakarta clamp down on its use following a surge in sexually explicit, non-consensual deepfake images. The controversy, which erupted into full view in early January 2026, has put not just Grok but the broader field of generative AI under intense scrutiny, raising urgent questions about privacy, digital safety, and the responsibilities of tech giants in the age of synthetic media.

At the heart of the storm is Grok’s image-generation feature, introduced by Musk’s company xAI in July 2025. Initially touted as a playful innovation—including a so-called “spicy mode” capable of producing adult content—the feature has in recent weeks been widely abused to generate deepfakes. Users have exploited Grok to digitally undress women, dress them in bikinis, and even produce sexualized images of children, often without any meaningful safeguards to prevent such misuse, according to regulators and investigative reports from AP and Reuters.

“The government sees non-consensual sexual deepfakes as a serious violation of human rights, dignity and the safety of citizens in the digital space,” Indonesia’s Communication and Digital Affairs Minister Meutya Hafid said in a statement on January 10, 2026, after Indonesia became the first country to temporarily block Grok. The move was quickly echoed by Malaysia, whose Communications and Multimedia Commission cited “repeated misuse” of the tool to generate obscene, sexually explicit, and non-consensual manipulated images, especially involving women and minors.

Both Southeast Asian nations took the extraordinary step of issuing official notices to X Corp and xAI, demanding more robust safeguards. But when pressed for comment, xAI replied to AP with an automated message: “Legacy Media Lies.” Regulators in both countries described their bans as “preventive and proportionate measures” that would remain in place until effective protections are implemented.

The backlash against Grok has not stopped at Asia’s borders. In the United Kingdom, the media regulator Ofcom launched a formal investigation on January 12, 2026, into whether X and Grok had failed to comply with legal obligations under the Online Safety Act, which makes it illegal to share non-consensual intimate images or child sexual abuse material—including AI-generated deepfakes. Ofcom’s powers are sweeping: it can impose fines of up to 10 percent of a company’s worldwide revenue or even seek a court order to block access to the offending service.

Prime Minister Keir Starmer did not mince words, calling the images generated by Grok “disgusting” and “unlawful,” and demanding that Musk’s platform “get a grip” on the application. Technology Secretary Liz Kendall went further, describing AI-generated images as “weapons of abuse” and promising to make it a criminal offence for companies to supply tools that create nude images without consent. “They can choose to act sooner to ensure this abhorrent and illegal material cannot be shared on their platform,” Kendall told Parliament, as reported by AP.

Starmer also dismissed the recent decision to limit Grok’s image-generation feature to paying subscribers as “not a solution” and an affront to victims. Downing Street has even signaled a willingness to consider banning X in the UK if the company fails to act decisively, a sentiment echoed by Business Secretary Peter Kyle, who confirmed Ofcom’s authority to impose such a ban.

Elon Musk, never one to shy away from confrontation, responded by accusing the British government of being “fascist” and trying to stifle free speech—a claim that has only inflamed the debate about where the line should be drawn between liberty and protection in the digital age.

Elsewhere in Europe, the European Commission has opened its own probe into Grok, particularly over reports of sexually suggestive and explicit images of young girls. “I can confirm from this podium that the Commission is also very seriously looking into this matter,” a spokesperson told journalists in Brussels last week. Commission President Ursula von der Leyen, speaking to Corriere della Sera, declared, “We will not outsource child protection and consent to Silicon Valley. If they don’t act, we will.” The Commission has ordered X to retain all documents relating to Grok until the end of 2026, as it evaluates compliance with EU digital rules.

France, too, has expanded an ongoing investigation into X to include Grok, following accusations from five politicians that the platform had generated and disseminated fake sexually explicit videos featuring minors, according to Le Parisien. Italy’s Data Protection Authority has warned that using Grok or similar AI to produce nude images without consent risks criminal charges, and Italy last year introduced new criminal penalties for AI-generated deepfakes. Germany, meanwhile, is preparing to unveil new laws targeting “digital violence,” with Justice Ministry spokesperson Anna-Lena Beckfeld stating, “It is unacceptable that manipulation is being used on a large scale for systemic violations of personal rights.”

Australia’s eSafety Commissioner has also reported a spike in complaints about Grok’s sexual AI content and reminded X that, starting March 9, 2026, all online services—including AI companies—must block children’s access to sexual, violent, or otherwise harmful content. The office has requested more information from X about its safeguards, warning that removal notices could be issued if violations of Australia’s Online Safety Act are found.

Despite the mounting criticism, the companies behind Grok have so far offered only limited responses. X has insisted that it takes action against illegal content by removing it, suspending accounts, and cooperating with law enforcement. However, Malaysia’s regulator concluded that X “failed to address the inherent risks” in Grok’s design and operation, and that relying mostly on user complaints is insufficient under national law.

Grok was first launched in 2023, but it was the addition of its advanced image generator last year that set the stage for the current crisis. What began as a technical curiosity has quickly become a symbol of the dangers posed by generative AI when robust guardrails are absent. The proliferation of deepfake technology—capable of creating hyper-realistic but entirely fabricated images—has left regulators scrambling to catch up, and victims with little recourse as their likenesses are manipulated and shared without consent.

Across the globe, lawmakers and tech companies alike are now facing a stark choice: act decisively to prevent AI-fueled abuse, or risk a future where the boundaries of privacy, dignity, and legality are redrawn by lines of code.