World News
12 November 2025

UK Unveils Landmark Law To Stop AI Child Abuse

A surge in AI-generated child sexual abuse images prompts the UK government to empower watchdogs and tech firms with new testing powers under a proposed Crime and Policing Bill amendment.

On November 12, 2025, the United Kingdom took a decisive step in its ongoing battle against the proliferation of artificial intelligence-generated child sexual abuse material (CSAM), unveiling a suite of legislative measures that aim to strike at the source of this disturbing trend. The proposed amendment to the Crime and Policing Bill is being hailed as a world-leading effort to empower tech companies, AI developers, and child protection organizations to proactively test and safeguard AI models before they are released to the public.

Under the new rules, organizations such as the Internet Watch Foundation (IWF), a key player in the fight against online child abuse imagery, will be granted the legal authority to scrutinize AI systems for their potential to generate illegal content, including extreme pornography and non-consensual intimate images. This marks a significant shift from the current regime, in which testing whether a model can produce such material could itself expose developers and watchdogs to criminal liability for creating or possessing it, deterring essential safety testing before release. As a result, removal efforts have historically been reactive, coming into play only after harmful content appears online.

Technology Secretary Liz Kendall captured the government's sense of urgency, stating, "We will not allow technological advancement to outpace our ability to keep children safe. These new laws will ensure AI systems can be made safe at the source, preventing vulnerabilities that could put children at risk. By empowering trusted organisations to scrutinise their AI models, we are ensuring child safety is designed into AI systems, not bolted on as an afterthought." (BBC)

The scale of the challenge is sobering. According to fresh data released by the IWF, reports of AI-generated child sexual abuse material have more than doubled in the past year alone. Between January and October 2025, the IWF actioned 426 reports of such material, up from 199 in the same period of 2024. While the overall number of AI-generated images and videos decreased slightly—from 6,459 in 2024 to 5,560 in 2025—the severity of the material has intensified dramatically. Category A content, which includes images depicting penetrative sexual activity, sexual activity with animals, or sadism, rose from 2,621 to 3,086 items. This category now represents 56% of all illegal material, compared to 41% the previous year (IWF data).

Perhaps even more disturbing is the demographic breakdown of the victims. Girls remain overwhelmingly targeted, accounting for 94% of illegal AI images in 2025, although the IWF noted a slight increase in images depicting boys. The organization also recorded a surge in depictions of infants: images of children aged 0–2 years rose from just 5 in 2024 to 92 in 2025 (IWF).

Kerry Smith, Chief Executive of the IWF, welcomed the new measures, emphasizing the need for safety to be built into technology from the outset. "AI tools have made it so survivors can be victimised all over again with just a few clicks, giving criminals the ability to make potentially limitless amounts of sophisticated, photorealistic child sexual abuse material," Smith said. "Today’s announcement could be a vital step to make sure AI products are safe before they are released." (Sky News, BBC)

The legislative changes will not only allow for the proactive testing of AI models but will also bring together a group of experts in AI and child safety. This panel will help design robust safeguards, ensure sensitive data is protected, prevent the risk of illegal content leaking, and support the wellbeing of researchers involved in the testing process. The government’s stated aim is to make the UK the safest place in the world to be online—especially for children—and these new rules are seen as a critical tool in achieving that goal (UK Government press release).

Safeguarding Minister Jess Phillips echoed this commitment, noting, "We must make sure children are kept safe online and that our laws keep up with the latest threats. This new measure will mean legitimate AI tools cannot be manipulated into creating vile material and more children will be protected from predators as a result." (BBC)

Despite broad support for the government’s initiative, some campaigners and child protection groups argue that the measures must go further. The National Society for the Prevention of Cruelty to Children (NSPCC) welcomed the new provisions as a step toward greater accountability and scrutiny. However, Rani Govender, the NSPCC’s policy manager for child safety online, insisted that voluntary compliance is not enough. "But to make a real difference for children, this cannot be optional. Government must ensure that there is a mandatory duty for AI developers to use this provision so that safeguarding against child sexual abuse is an essential part of product design," Govender said (Sky News, BBC).

The legislative amendment is part of a broader UK effort to stay ahead of the curve as AI technology evolves at breakneck speed. Earlier in 2025, the Home Office announced that the UK would become the first country in the world to make it illegal to possess, create, or distribute AI tools specifically designed to generate CSAM, with offenders facing up to five years in prison (BBC).

Child safety advocates have long warned that AI tools, trained on vast troves of online content, can be exploited by offenders to create highly realistic abuse imagery of children and non-consenting adults. The sophistication of these tools not only enables the rapid creation of new material but also complicates law enforcement efforts, as distinguishing real from synthetic images becomes increasingly difficult. Some reports point to growing demand for such imagery on the dark web, with evidence that even children themselves may be involved in creating or sharing this content (BBC, IWF).

By granting designated organizations the power to scrutinize AI models before they reach the public, the UK hopes to limit the production of this heinous material at its source. The proactive approach is designed not only to protect children from exploitation and re-victimization but also to reinforce public trust in AI innovation—demonstrating that technological progress and child safety can coexist.

As the UK moves forward with this ambitious plan, all eyes will be on the effectiveness of these measures and whether they can serve as a blueprint for other countries grappling with the dark side of AI. The stakes could hardly be higher, and for the countless children at risk, the hope is that these new laws will mark a turning point in the fight against online abuse.