Elon Musk’s artificial intelligence chatbot, Grok, has found itself at the center of a global firestorm after its image generation and editing features were used to create thousands of sexualized deepfake images of women and children. In response to mounting outrage and regulatory threats, Grok’s parent platform X (formerly Twitter) announced on January 9, 2026, that these AI-powered image tools would be limited to paying subscribers—a move that has only intensified criticism from governments, regulators, and advocacy groups worldwide.
For weeks, Grok’s ability to alter photos—removing clothing or placing real people in sexualized or violent scenarios—had been exploited by users around the world. According to The Guardian, research showed Grok had been used to create pornographic videos of women without their consent, as well as images depicting women being shot and killed. After a late December update to Grok’s image creation feature, thousands of nonconsensual sexualized images flooded the platform within just two weeks.
The backlash was swift and severe. UK Prime Minister Keir Starmer called the content “disgraceful” and “disgusting,” demanding immediate removal of the AI-generated images and warning that X faced the threat of regulatory action and even a possible ban in the United Kingdom. Starmer said, “It’s unlawful. We’re not going to tolerate it. I’ve asked for all options to be on the table. It’s disgusting. X need to get their act together and get this material down. We will take action on this because it’s simply not tolerable.” His office later described X’s decision to place the image generation feature behind a paywall as “insulting” to victims, stating, “That simply turns an AI feature that allows the creation of unlawful images into a premium service... It’s insulting the victims of misogyny and sexual violence.” (People Daily Digital)
Across the Channel, the European Commission echoed these concerns. EU digital affairs spokesman Thomas Regnier told reporters, “This doesn’t change our fundamental issue, paid subscription or non-paid subscription. We don’t want to see such images. It’s as simple as that.” The Commission has ordered X to preserve all internal documents and data related to Grok through the end of 2026, signaling an ongoing investigation into the platform’s handling of AI-generated abuse. “What we’re asking platforms to do is to make sure that their design, that their systems, do not allow the generation of such illegal content,” Regnier emphasized. (The Guardian)
Other governments—including those of France, Malaysia, and India—have also publicly condemned X’s handling of the scandal. In Malaysia and India, officials have demanded explanations from both X and xAI, Musk’s artificial intelligence company, regarding what safeguards are in place to prevent further abuse. Meanwhile, in the United States, Senators Ron Wyden, Ben Ray Luján, and Ed Markey sent a letter to Apple and Google urging them to remove both X and Grok from their app stores, arguing, “Turning a blind eye to X’s egregious behavior would make a mockery of your moderation practices. Indeed, not taking action would undermine your claims in public and in court that your app stores offer a safer user experience than letting users download apps directly to their phones.”
Despite the outcry, Musk and his companies have defended their approach. In a post on X, Grok announced, “Image generation and editing are currently limited to paying subscribers. You can subscribe to unlock these features.” The restriction means that free users can no longer access the controversial AI image tools, while paying subscribers must provide credit card details and personal information—ostensibly allowing X to identify and pursue those who misuse the feature. Musk himself warned, “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.” The official X Safety account added, “We deal with illegal material by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.”
But critics say these measures miss the mark. As the European Commission and UK officials have pointed out, simply restricting access to those willing to pay does not address the root problem: the technology’s very ability to generate nonconsensual, abusive, and illegal imagery. “Restricting image generation to paid subscribers does not change our fundamental concern,” a European Commission spokesperson reiterated. “Whether paid or unpaid, we do not want to see such images.”
Further complicating matters, investigations by The Guardian and AI Forensics, a Paris-based nonprofit, revealed that a separate Grok Imagine app still allowed non-paying users to create sexualized images of women and children. AI Forensics identified about 800 pornographic and sexually violent images and videos created via the Grok Imagine app, highlighting the limitations of X’s new restrictions and raising questions about broader enforcement across Musk’s AI ecosystem.
The controversy has placed X and Grok under unprecedented scrutiny. Regulators have threatened fines, sanctions, and even bans. The UK’s communications regulator, Ofcom, has been empowered by Starmer’s government to take action, while the European Commission’s demand for document preservation signals a potentially lengthy and far-reaching investigation into the platform’s internal decision-making and safety protocols.
At the heart of the debate is a fundamental question: Can AI platforms like Grok be trusted to police themselves, or do they require stricter oversight and technological safeguards to prevent abuse? The current solution—placing the feature behind a paywall—has done little to reassure critics. Many argue that requiring payment and personal data may deter some bad actors but does not eliminate the risk, especially when separate apps such as Grok Imagine still reportedly allow the creation of harmful content.
For Musk, who has long championed free speech and innovation on his platforms, the Grok scandal represents a major test of his companies’ ability to balance technological advancement with ethical responsibility. While X insists it removes illegal material and cooperates with law enforcement, the scale of the abuse and the persistence of loopholes have left many unconvinced.
As the world watches, the outcome of this controversy could shape the future of AI content moderation, privacy, and accountability—not just for X and Grok, but for the entire industry. Governments and regulators are making it clear: Restricting access to dangerous technology is not enough. The systems themselves must be built to prevent harm, not just react to it after the fact.
With investigations ongoing and public pressure mounting, the debate over AI-generated abuse and platform responsibility is far from settled. What happens next on X and Grok may well set the standard for how societies confront the darker side of artificial intelligence in the years to come.