UK Moves To Ban AI Abuse Images Amid X Scandal

The government launches a crackdown on non-consensual intimate images after AI-generated abuse spreads on Elon Musk’s X platform.

In the wake of mounting scandals and public outcry, the United Kingdom is taking decisive action against a new and deeply troubling form of online abuse: AI-generated sexualized imagery. The move comes as the social media platform X—formerly known as Twitter and currently owned by Elon Musk—faces widespread criticism for hosting and facilitating the spread of non-consensual and violent images, many of which have been produced by its own artificial intelligence tool, Grok.

The situation reached a tipping point in early January 2026, when it was revealed that Grok had been used to create and share sexual abuse imagery depicting women and children. According to reporting by The Guardian, thousands of users on X requested and circulated pictures of children in small bikinis, disturbingly described as being covered in "donut glaze." The shockwaves from these revelations have reverberated across the UK and beyond, prompting urgent calls for reform and accountability.

Marie Le Conte, a French journalist based in London, detailed her own experience with the platform in an article published on January 12, 2026. She described her decision to leave X in the aftermath of the 2024 US election, citing the increasing volume of abuse, the rise of neo-Nazi activity, and the platform's apparent indifference to the proliferation of hate and exploitation. "Some wars can’t be won," Le Conte wrote, reflecting on the difficulty of abandoning a platform that had once been central to her professional life. She observed that, despite mounting evidence of harm, many users—including prominent figures in politics and journalism—remained reluctant to leave X, clinging to its fading relevance and reach.

Le Conte’s breaking point came as she witnessed the platform's continued descent: "Some of them left when Musk publicly endorsed Tommy Robinson, the far-right activist, but many did not. Some of them realised they had to go when they saw neo-Nazi after neo-Nazi use and abuse the new monetised blue-tick system, but a lot of them stayed put." The escalation reached new heights when Grok was implicated in the creation of explicit, AI-generated images involving minors. "Still, many of them witnessed thousands and thousands of men requesting pictures of children in small bikinis, covered in 'donut glaze,' and they didn’t move," she wrote, her disbelief palpable.

Against this backdrop, the UK government has moved swiftly. On January 12, 2026, UK Technology Secretary Liz Kendall announced that the creation of "non-consensual intimate images" would be made illegal within the week, a direct response to the abuses enabled by Grok. In a statement carried by The Washington Post, Kendall described the images as "weapons of abuse," underscoring the gravity of the situation and the urgent need for legislative intervention.

The government’s media regulator also launched a formal investigation into X on the same day, seeking to determine the platform's role in the creation and dissemination of these images. The regulator’s actions reflect growing concern that X has become a haven for extremist and abusive content, a far cry from its origins as a digital public square for open debate and information sharing.

The response from UK authorities has been shaped not only by the specifics of the Grok scandal but also by a broader reckoning with the dangers of unregulated AI and social media. The rapid advances in generative AI technology have made it easier than ever to produce convincing fake images, often with devastating consequences for the individuals depicted. For victims, the harm is both immediate and enduring, as images can be shared, altered, and weaponized on a global scale within seconds.

Liz Kendall’s remarks highlight the seriousness with which the UK government is approaching the issue. By labeling the images "weapons of abuse," she signaled an understanding that the harms extend far beyond mere embarrassment or reputational damage—they represent a form of violence that can have lasting psychological effects. The new legislation aims to close legal loopholes and give law enforcement the tools needed to pursue perpetrators, whether they are responsible for creating, sharing, or profiting from such images.

The investigation into X marks a significant escalation in the government’s approach to platform accountability. Regulators are expected to examine not only the technical mechanisms that enabled Grok to generate the images but also the company’s broader policies regarding content moderation, user safety, and the monetization of controversial or harmful material. In recent years, X has faced criticism for its handling of hate speech, misinformation, and extremist content, especially since the introduction of a monetized blue-tick system that, according to Le Conte, has been abused by neo-Nazis and other fringe groups.

As the scandal has unfolded, many users have migrated to alternative platforms such as Bluesky, Instagram, and Threads, seeking safer and more responsible online spaces. Le Conte, now a Bluesky user, noted that while no platform is perfect, the imperative is clear: "What does matter is that X is drifting towards irrelevance, becoming a containment pen for jumped-up fascists. Government ministers cannot be making policy announcements in a space that hosts AI-generated, near-naked pictures of young girls. Journalists cannot share their work in a place that systematically promotes white supremacy. Regular people cannot be getting their brains slowly but surely warped by Maga propaganda."

The debate over whether to stay and "fight" for the platform or to abandon it altogether has divided users and commentators alike. Some argue that retreating from X cedes ground to extremists and abusers, while others contend that continued participation merely legitimizes a space that has become irredeemably toxic. Le Conte’s conclusion is unambiguous: "We all love to think that we have power and agency, and that if we try hard enough we can manage to turn the tide – but X is long dead. The only winning move now is to step away from the chess board, and make our peace with it once and for all."

As the UK’s new law comes into force, it will serve as a test case for how democracies can respond to the challenges posed by AI-generated abuse and the platforms that enable it. The investigation into X, meanwhile, may set important precedents for the regulation of social media in an era where technology often outpaces ethics and oversight. For many, the hope is that these steps will not only provide justice for victims but also signal a broader commitment to online safety and accountability.

With the world watching, the UK’s actions this week may well define the next chapter in the ongoing struggle to balance innovation, free expression, and the basic right to dignity and safety in the digital age.