TikTok, one of the world’s most popular social media platforms, is facing a storm of criticism and regulatory scrutiny after a damning investigation revealed that its algorithm recommended sexually explicit and even pornographic content to accounts registered as children. The findings, uncovered by the human rights campaign group Global Witness, have alarmed parents, lawmakers, and digital safety advocates, reigniting debates about the adequacy of tech companies’ measures to protect young users online.
According to Global Witness, the investigation took place in late July and early August 2025, when researchers set up seven TikTok accounts in the United Kingdom, each posing as a 13-year-old. The researchers used factory-reset phones with no search history and enabled TikTok’s “Restricted Mode,” a setting the platform claims filters out mature or sexually suggestive material. At sign-up, the accounts were not asked to verify their age beyond simply stating a birthdate, a loophole that allowed researchers to easily bypass TikTok’s age restrictions, which are supposed to prevent anyone under 13 from creating an account (as reported by CyberNews and CNN).
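To see why a self-declared birthdate is so easy to defeat, consider a minimal sketch of a sign-up age gate that trusts whatever date the user types. This is a hypothetical illustration, not TikTok’s actual code; the function names and the 13-year threshold are assumptions drawn from the article’s description.

```python
from datetime import date

MINIMUM_AGE = 13  # the platform minimum cited in the article

def age_from_birthdate(birthdate: date, today: date | None = None) -> int:
    """Whole-year age implied by a (self-declared) birthdate."""
    today = today or date.today()
    years = today.year - birthdate.year
    # Knock off a year if this year's birthday hasn't happened yet.
    if (today.month, today.day) < (birthdate.month, birthdate.day):
        years -= 1
    return years

def naive_age_gate(claimed_birthdate: date) -> bool:
    """Pass the gate if the claimed birthdate clears the minimum age.

    Nothing here verifies the claim, so an under-13 user clears it
    simply by typing any date more than 13 years in the past.
    """
    return age_from_birthdate(claimed_birthdate) >= MINIMUM_AGE

# A child who types a birthdate from 2000 sails straight through:
print(naive_age_gate(date(2000, 1, 1)))  # True
```

Without a verification step behind the form, in other words, the gate measures only the user’s willingness to type an earlier date.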
The results were nothing short of alarming. Three of the test accounts were shown sexualised search suggestions the moment researchers clicked into the search bar, with suggested terms ranging from “very rude babes” and “unshaven girl” to “TikTok Late Night For Adults.” All seven accounts were able to reach pornographic content within just a few clicks, encountering everything from women flashing underwear to explicit films depicting penetrative sex. Researchers noted that some uploaders appeared to evade TikTok’s protective measures by embedding explicit footage within seemingly innocuous images or videos, making it harder to detect and filter (as detailed by BBC and Daily Times).
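One plausible mechanism for the embedding trick the researchers describe is that automated filters often classify only a sparse sample of frames rather than every frame of a video. The sketch below is a simplified, hypothetical model of that gap, not a description of any real TikTok system; the frame counts, sampling rate, and labels are invented for illustration.

```python
def sample_frame_indices(total_frames: int, samples: int) -> list[int]:
    """Evenly spaced frame indices, as a sparse moderation pass might pick."""
    step = max(total_frames // samples, 1)
    return list(range(0, total_frames, step))[:samples]

def is_flagged(frame_labels: list[str], samples: int = 8) -> bool:
    """Flag the video if any *sampled* frame is labelled explicit."""
    return any(frame_labels[i] == "explicit"
               for i in sample_frame_indices(len(frame_labels), samples))

# A 3,000-frame video hiding a 60-frame explicit segment mid-stream:
video = ["innocuous"] * 3000
video[1600:1660] = ["explicit"] * 60

# Eight evenly spaced samples land roughly every 375 frames, so the
# 60-frame segment usually falls between them and the video passes.
print(is_flagged(video))  # False: no sample index falls in 1600-1659
```

Denser sampling or scene-change detection would close the gap, but at a higher compute cost per video, and that trade-off is what this kind of evasion exploits.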
Perhaps most troubling, some of the recommended pornographic material appeared to feature minors. Global Witness reported these findings to the UK’s Internet Watch Foundation, the body legally empowered to investigate and act on possible child sexual abuse material. “We can’t be sure of the age of the person in the video but given the seriousness of our concern we reported it,” the group stated in its public release.
The researchers’ shock was palpable. Ava Lee, a spokesperson for Global Witness, said, “TikTok isn’t just failing to prevent children from accessing inappropriate content – it’s suggesting it to them as soon as they create an account.” She added, “Everyone agrees that we should keep children safe online. Now it’s time for regulators to step in.” The group’s findings, Lee emphasized, point not merely to lapses in content moderation but to a deeper issue: the platform’s algorithmic recommendation system, which appears to actively direct underage users toward sexually explicit material.
The timing of the investigation is significant. The UK’s Online Safety Act and its Children’s Codes came into force on July 25, 2025, placing new legal obligations on platforms to protect minors from harmful content. These regulations require “highly effective age verification” and mandate that platforms block minors’ access to content related to pornography, self-harm, suicide, and eating disorders. Global Witness conducted its tests both before and after the Act took effect and found that the same disturbing patterns persisted.
When confronted with the findings, TikTok responded by taking down more than 90 pieces of content and removing certain problematic search suggestions in multiple languages. A TikTok spokesperson told CNN, “As soon as we were made aware of these claims, we took immediate action to investigate them, remove content that violated our policies, and launch improvements to our search suggestion feature.” The company also highlighted its existing safety infrastructure, claiming to have “more than 50 features” designed to protect teens and stating that “nine out of 10 violating videos are taken down before being viewed.”
TikTok’s public relations offensive has included stressing its commitment to a “safe and age-appropriate experience” and touting its efforts to delete about 6 million underage accounts every month using a variety of age-verification technologies. The platform also trains moderators to spot signs of underage users and has introduced features such as guided meditation to reduce aimless scrolling and restrictions on late-night notifications for teens. However, for many observers, these assurances ring hollow in light of the latest revelations.
Media lawyer Mark Stephens, cited by CNN, described the findings as a “clear violation” of the new Online Safety Act. TikTok, for its part, has yet to respond directly to the legal concerns raised by Stephens and others. The Act, which applies to any platform with a significant UK user base, is part of a broader push in Britain and internationally to hold tech companies accountable for the online safety of children. Critics of the legislation, however, have warned that its age-verification rules could threaten the privacy of all users, not just minors.
The controversy has also drawn attention to the broader landscape of online child protection. Other platforms, including YouTube and Instagram, have recently rolled out new tools to better safeguard young users: YouTube with an AI-based age-rating system, and Instagram with enhanced privacy settings for teen accounts. Yet, as Ava Lee and others argue, voluntary measures by tech firms have proven insufficient. “Now is the time for regulators to step in,” Lee insisted, a call that is gaining traction among parents, lawmakers, and digital safety campaigners.
Indeed, the public outcry has been swift. Parents and digital safety advocates have voiced deep concern about TikTok’s apparent inability—or unwillingness—to keep children safe from harmful content. Lawmakers are now facing mounting pressure to ensure that the Online Safety Act is enforced robustly and that platforms like TikTok are held to account. The UK regulator Ofcom has been urged by Global Witness to investigate TikTok’s compliance with the law, particularly in light of evidence that algorithmic recommendation, not just inadequate moderation, is at the heart of the problem.
Meanwhile, TikTok maintains that it is “fully committed to providing a safe and age-appropriate experience,” and points to its track record of removing policy-violating content. The company claims that around 30% of content taken down between January and March 2025 was linked to sensitive and adult topics. Despite these numbers, the persistence of the problem highlighted by Global Witness suggests that much work remains to be done.
As the debate continues, the stakes could hardly be higher. With children’s mental health, privacy, and safety on the line, the battle over online protections is far from over. The question now is whether regulatory action will finally force platforms to put children’s safety ahead of engagement metrics—and whether TikTok, under the watchful eye of the public and the law, can rise to the challenge.