Politics
19 December 2025

UK Moves To Ban AI Nudification Tools Amid Outcry

Government unveils new laws and funding as reports show deepfake apps remain accessible, disproportionately target women, and expose children to online sexual abuse.

On December 18, 2025, the UK government took a decisive step in the battle against digital sexual abuse, unveiling a suite of new measures aimed at curbing the proliferation of AI-powered "nudification" tools and halving violence against women and girls within the next decade. Safeguarding minister Jess Phillips announced the plans, emphasizing the urgent need to address the devastating impact of deepfake technology on young people and society at large. "These apps are not harmless pranks," Phillips stated, underscoring the emotional and psychological toll such tools inflict, particularly on women and girls.

The government’s latest strategy is comprehensive, targeting both the supply of and the demand for AI-generated non-consensual sexual imagery. At its core, the plan introduces new laws that will explicitly prohibit the creation and distribution of fake nude images and videos without consent. Perpetrators caught using or sharing these AI-powered "nudification" tools will now face real legal consequences, a move welcomed by advocates who have long argued that loopholes in existing legislation allowed offenders to act with impunity.

According to BBC reporting, the government’s cross-departmental approach also includes preventative measures. Healthy relationships education will be embedded in secondary school curricula, and teachers will receive training to spot early warning signs of troubling behavior in young men. Specialist rape and sexual offences investigators are set to be introduced in every police force, aiming to bolster law enforcement’s capacity to respond effectively to digital and physical abuse. In addition, a £19 million funding boost will be allocated to provide safe housing for survivors of domestic abuse, reflecting a commitment to supporting victims as well as preventing future harm.

The announcement comes at a time when the scale of the problem is becoming increasingly clear. Just one day before the government’s unveiling, Internet Matters published a follow-up to its influential report, The New Face of Digital Abuse, revealing that so-called "nudifying" tools remain not only widely available but also cheap and overwhelmingly targeted at women and girls. The report found that in controlled tests conducted in November 2025, 21 distinct nudification sites appeared on the first page of search results for related terms across Google, Bing, and Yahoo, with many of the same sites surfacing on more than one engine or search term. For the term "declothing AI," Google and Bing each returned 8 such sites, while Yahoo returned 7. "Undress AI" produced 9 results on Google, and 2 each on Bing and Yahoo.

Even more troubling, about half of these 21 sites displayed sexualized images of undressed women directly on their homepages. Many of these tools, the report notes, are designed specifically for photos of women, with some allowing users to select body features such as breast size. One site’s default prompt was chillingly explicit: "A naked girl without a bra." These design choices, according to researchers, are not accidental; they reflect and perpetuate a culture of misogyny and gendered harm.

“The training of these models on female bodies and the use of them to generate non-consensual sexual imagery of women and girls is both a product of misogyny and perpetuating gendered harm,” the Internet Matters report states. The psychological impact is profound: 38% of teenage girls, compared to 27% of teenage boys, strongly agreed that having a deepfake nude shared of themselves would be worse than a real nude. This statistic speaks volumes about the unique fear, anxiety, and reputational risk faced by girls in the digital age.

Despite the introduction of the Online Safety Act in 2023—which criminalizes the sharing of non-consensual deepfake intimate images of adults and mandates that pornographic services implement robust age assurance measures—nudifying tools remain easily accessible. The Internet Matters investigation found that none of the 21 sites tested required users to verify their age before viewing content, leaving children exposed to explicit material that Ofcom, the UK’s communications regulator, has classified as pornography. In a notable enforcement action, Ofcom fined Itai Tech Ltd £50,000 in November 2025 for failing to meet these age assurance obligations on a nudification site. However, this appears to be the exception rather than the rule.

The ease of access is compounded by the affordability and user-friendliness of these tools. Many nudifying sites offer free image generations or low-cost plans, with some charging as little as 10 cents per image. Their interfaces are designed for simplicity: uploading a photo and clicking a button is often all it takes to create a non-consensual deepfake. While some sites pay lip service to the idea of consent, stating in their FAQs that users should only upload images with permission, in reality no meaningful verification mechanisms are in place. This creates what Internet Matters calls "only the illusion of consent rather than genuine protection required under UK law."

Children’s experiences reflect the real-world consequences of this technological arms race. The original Internet Matters report found that 13% of children had already encountered a nude deepfake, fueling widespread fear and anxiety. The psychological harm extends far beyond embarrassment or shame—many girls worry about the long-term impact on their reputations and relationships. As the report highlights, "this imbalance reflects broader patterns of gender-based violence, where women and girls are not only targeted more often but also suffer deeper social consequences."

Advocates and experts are unanimous: the law must go further. While the Online Safety Act and recent enforcement actions represent progress, the continued visibility and accessibility of nudification tools suggest that current legislation is not fit for purpose. Internet Matters, along with other child safety and women’s rights organizations, is calling for a total ban on nudifying apps and websites in the UK. The urgency is clear—AI-generated child sexual abuse material (CSAM) is also on the rise, with reports from the Internet Watch Foundation more than doubling from 199 in 2024 to 426 in 2025. Deepfakes featuring sexual images of children are already illegal and classified as CSAM, but the proliferation of these tools makes enforcement a daunting challenge.

As the UK government rolls out its new strategy, the stakes could hardly be higher. The battle against deepfake abuse is not just a technological or legal issue—it is a fight for dignity, safety, and equality in the digital age. For Jess Phillips and her colleagues, the message is clear: "We need to stop their creation and sharing." For those affected, the hope is that these new measures will finally turn the tide.

The coming months will test whether the UK’s bold promises can translate into real-world protection for those most vulnerable to digital abuse. The tools of harm are evolving rapidly—so too must the laws and safeguards designed to stop them.