Technology
04 October 2025

TikTok Faces Scrutiny After Child Safety Failures Revealed

A watchdog investigation finds TikTok’s safety features for children repeatedly failed, exposing young users to explicit content despite new UK legal protections.

When researchers at Global Witness set out to test TikTok’s safety features for children, they didn’t expect to be shocked—let alone within minutes of opening a new account. Yet that’s exactly what happened in a new investigation that’s raising alarm bells across the United Kingdom and beyond. The findings, released in October 2025, reveal that TikTok’s much-touted protections for young users are failing in ways that could have serious consequences for millions of children.

In a series of controlled experiments, Global Witness researchers created multiple fake TikTok accounts, each registered as a 13-year-old—the platform’s minimum age. The accounts were set up with “Restricted Mode” enabled, a feature TikTok claims will limit exposure to mature or sexually suggestive content. But the results were anything but reassuring.

Within just a couple of clicks, these test accounts were bombarded with sexualised search terms such as “very rude babes,” according to Global Witness and as reported by The Morning News. In some cases, the situation escalated rapidly: one account stumbled upon explicit pornographic videos—including depictions of penetrative sex—after only two clicks. Even more troubling, some of the content appeared to show people who looked under the age of 16, prompting researchers to alert the UK’s Internet Watch Foundation, the national authority on online child sexual abuse material.

“TikTok isn’t just failing to prevent children from accessing inappropriate content—it’s suggesting it to them as soon as they create an account,” said Ava Lee from Global Witness, as quoted by BBC. The group, which typically investigates the impact of big tech on human rights and democracy, said the findings came as a “huge shock.”

The investigation unfolded in two phases. The first round of tests took place in spring 2025, before the UK’s new Online Safety Act came into force. Researchers created three accounts, all using factory-reset smartphones and false dates of birth. No age verification checks were triggered during registration or while browsing. The second round occurred after July 25, 2025, when the Act’s Children’s Codes were enacted, bringing four additional accounts into the experiment.

The results were consistent and disturbing. All seven accounts encountered pornographic material within a few clicks. The explicit content ranged from women exposing their underwear in public places to full pornographic films. Some videos were cleverly embedded within otherwise innocent content, evading TikTok’s moderation systems. In several cases, the search suggestions themselves were explicitly sexual or used disguised terms like “corn” (a common stand-in for “porn” used to evade detection).

According to BBC, the suggested search terms appeared in the “you may like” section, even before the test users had entered any searches themselves. Some recommendations carried misogynistic undertones, while others seemed to reference young children. Ordinary TikTok users have also reported similar experiences, sharing screenshots of sexualised search suggestions and asking, “what’s wrong with this app?”

Global Witness reported all the problematic content to TikTok and the Internet Watch Foundation. The group argued that TikTok’s failures constitute a clear breach of the UK’s Online Safety Act, which requires tech platforms to shield minors from harmful material. The Act, which took effect for under-18 protections in July 2025, imposes strict legal duties on platforms: they must use “highly effective age assurance” to prevent children from seeing pornography and must adjust their algorithms to block content that encourages self-harm, suicide, or eating disorders.

Ofcom, the UK’s communications regulator, has stated that personalised recommendations are one of the main ways children encounter harmful content online. Under the new law, platforms classified as medium or high risk are required to configure their recommendation algorithms so that such material is blocked from young users’ feeds. Ofcom has promised to review the Global Witness findings, and the outcome could set a precedent for how social media companies manage algorithmic risks under the Online Safety Act.

TikTok’s response has been swift but defensive. The company said it removed over 90 pieces of offending content and problematic search suggestions in multiple languages after being notified by Global Witness. “As soon as we were made aware, we took immediate action,” a TikTok spokesperson told reporters. The platform also points to more than 50 features designed to keep teens safe, and says it removes nine out of 10 videos that violate its guidelines before they are viewed. “We are fully committed to providing safe and age-appropriate experiences,” TikTok said in a statement, adding that it has launched improvements to its search suggestion feature and is reviewing its youth safety strategies.

Despite these assurances, the recurrence of explicit search terms and content in the second round of Global Witness’s research—conducted after the Children’s Codes came into force—suggests that the problem is far from solved. The group’s researchers found that, even with all safety settings enabled, TikTok’s algorithm continued to push sexualised content toward accounts registered as children. “Everyone agrees that we should keep children safe online… Now it’s time for regulators to step in,” Ava Lee urged, emphasizing the need for stricter oversight.

One of the most concerning aspects of the investigation was the apparent ease with which the platform’s protections could be bypassed. None of the test accounts were asked for additional information to confirm their age, and the “Restricted Mode” failed to filter out even the most graphic material. This gap in age assurance and algorithmic control raises questions about TikTok’s compliance with both the letter and the spirit of the UK’s Online Safety Act.

As TikTok cements its place as a primary search tool and entertainment platform for young people, the pressure on regulators is mounting. The platform’s popularity among children and teens means that even small failures in safety measures can have widespread and damaging consequences. The fact that ordinary users—not just researchers—are noticing and complaining about sexualised search suggestions adds weight to calls for urgent reform.

Global Witness’s findings have reignited the debate over how tech companies should balance freedom of expression, algorithmic innovation, and child safety. While TikTok maintains that its guidelines prohibit explicit content and that it enforces a minimum age of 13, the investigation demonstrates that these measures are not always effective in practice. The watchdog group’s call for Ofcom to investigate TikTok’s compliance could be a turning point, not just for the platform but for the entire social media industry.

As the UK regulator begins its review, all eyes are on TikTok and its ability—or inability—to protect its youngest users from harm in a digital world where algorithms wield enormous influence.