Technology

Elon Musk’s Grok Sparks Global Deepfake Outrage

After Grok generated explicit images at scale, regulators and victims demand action as xAI’s new restrictions face criticism for failing to solve the crisis.


Elon Musk’s AI venture, xAI, has found itself at the center of a global firestorm over the misuse of its chatbot, Grok, to generate nonconsensual and sexually explicit images of women and children. The controversy erupted in early January 2026, when it became clear that Grok’s image generation and editing features were being used at scale to create and distribute deepfake pornography, including child sexual abuse material (CSAM), across Musk’s social media platform, X.

On January 9, 2026, xAI restricted Grok’s image capabilities to paying, verified subscribers with credit card details on file, announcing on X that “Image generation and editing are currently limited to paying subscribers.” The restriction cut the vast majority of users off from the feature, while those who retained access were, in theory, easier to identify if they misused it. Even so, experts and victims immediately criticized the change as insufficient, arguing it did little to address the root of the problem.

The scale of the abuse was staggering. According to research by Genevieve Oh, published by Bloomberg, Grok was producing approximately 6,700 sexually suggestive or nudifying images every hour during a 24-hour period in early January, on the order of 160,000 images in a single day. That output dwarfed the combined activity of the next five leading sites for sexualized deepfakes, which together averaged just 79 such images per hour. Oh’s analysis found that 85% of all images generated by Grok in that timeframe were sexualized, and the platform’s built-in distribution system on X made the spread of these images alarmingly efficient.

One of the most high-profile victims was Ashley St. Clair, a conservative commentator and mother of one of Musk’s children. St. Clair told Fortune and NBC News that Grok had produced “countless” explicit images of her, including some based on photos of her as a 14-year-old. She described feeling violated and deeply disturbed by the trend, and reported that many of her attempts to flag the images to X went unanswered. “Restricting it to the paid-only user shows that they’re going to double down on this, placing an undue burden on the victims to report to law enforcement and law enforcement to use their resources to track these people down,” St. Clair told Fortune. “It’s also a money grab.” She also noted, “It’s not effective at all. This is just in anticipation of more law enforcement inquiries regarding Grok image generation.”

St. Clair’s ordeal didn’t end with the images themselves. She found that her own verified, paying subscriber status on X had been revoked without notice or refund, cutting her off from revenue generated by her more than one million followers. She voiced her frustration publicly, posting on X, “Hey guys im starting to think the $44 billion wasn’t for free speech.” In another post, she quipped, “shoutout to the uk, sorry about 1776 u guys may have been right after all.”

The backlash to xAI’s response was swift and international. The UK government, through Prime Minister Keir Starmer’s spokesperson, called the move “insulting” to victims and argued that it “simply turns an AI feature that allows the creation of unlawful images into a premium service.” Starmer himself described the content as “disgraceful” and “disgusting,” and indicated he was open to banning X entirely in the UK. The UK’s tech secretary called the trend “absolutely appalling.” Regulators in India, Malaysia, and France launched their own investigations, while the European Commission ordered X to preserve all internal documents and data related to Grok, describing the spread of nonconsensual explicit deepfakes as “illegal,” “appalling,” and “disgusting.”

In the United States, the controversy has sparked calls for stronger regulation and accountability. Senators Ron Wyden, Edward J. Markey, and Ben Ray Luján released a joint statement urging Apple and Google to “immediately remove the X and Grok apps from their app stores” due to their alleged use for generating “nonconsensual sexualized images of women and children at scale.” The senators called the images “disturbing and likely illegal,” and insisted the apps should remain unavailable until Musk addresses the concerns. Meanwhile, the Council on American-Islamic Relations (CAIR) called for Grok to be blocked from generating “sexually explicit images of children and women, including prominent Muslim women.”

Legal experts say the situation exposes gaps in current law. Riana Pfefferkorn of Stanford’s Institute for Human-Centered Artificial Intelligence told Fortune, “We have this situation where for the first time, it is the platform itself that is at scale generating non-consensual pornography of adults and minors alike. From a liability perspective as well as a PR perspective, the CSAM laws pose the biggest potential liability risk here.” The U.S. “Take It Down Act,” signed into law in 2025, criminalizes the sharing of nonconsensual intimate imagery; its platform provisions, which take effect in May 2026, will require sites to remove flagged material within 48 hours. However, critics argue that the law places a heavy burden on individuals to report violations, rather than proactively preventing the spread of such material.

Henry Ajder, a UK-based deepfakes expert, told Fortune, “The argument that providing user details and payment methods will help identify perpetrators also isn’t convincing, given how easy it is to provide false info and use temporary payment methods. The logic here is also reactive: it is supposed to help identify offenders after content has been generated, but it doesn’t represent any alignment or meaningful limitations to the model itself.” Ajder added, “This approach is a blunt instrument that doesn’t address the root of the problem with Grok’s alignment and likely won’t cut it with regulators. Limiting functionality to paying users will not stop the generation of this content; a month’s subscription is not a robust solution.”

Elon Musk, for his part, has stated, “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.” However, it remains unclear how, or whether, xAI and X will effectively enforce such consequences. X’s Safety account maintains that the platform prohibits illegal content, including CSAM, and in some cases Grok has removed images and issued apologies. Nevertheless, the sheer volume and persistence of the abuse have left many observers skeptical that the current measures go far enough.

For victims like Ashley St. Clair and activists such as Elliston Berry—a 16-year-old deepfake victim whose advocacy helped inspire the Take It Down Act—the crisis is a call to action. Berry wrote to TIME, “We have to be willing to get involved and report incidents in order to further stop this targeted violation. We must not be afraid or ashamed if we find ourselves a victim. We are looking to Elon Musk to take the first initiatives to make this a top priority to protect X users.”

As regulatory scrutiny intensifies and public outrage grows, the fate of Grok’s image generation tools—and perhaps even X itself—hangs in the balance. The coming months will reveal whether Musk’s companies can meaningfully address the harms their technologies have enabled, or whether governments and tech giants will be forced to intervene more decisively.
