On August 22, 2025, TikTok, the social media powerhouse owned by ByteDance, announced a sweeping restructuring of its trust and safety operations in the United Kingdom—a move set to put hundreds of jobs at risk as the company pivots toward artificial intelligence (AI) for content moderation. The decision, which comes just weeks after the Online Safety Act came into force, has sparked a fierce debate about the future of online safety, the role of human workers, and the effectiveness of AI tools in policing the digital public square.
The restructuring will primarily impact TikTok’s trust and safety departments, whose employees are responsible for reviewing and removing content that violates community guidelines—think hate speech, misinformation, and explicit material. According to Sky News, TikTok’s own figures show that more than 85% of videos removed for policy violations are now flagged by automated tools, and 99% of problematic content is proactively taken down before users even report it. Executives argue that these technological advancements have led to a 60% drop in the number of graphic videos viewed by human moderators, reducing their exposure to distressing material.
Yet, for many of the roughly 2,500 TikTok employees in the UK, the announcement landed like a thunderbolt. The Communication Workers Union (CWU), which represents many of these staff, did not mince words. In a statement reported by The Sun, the union warned, “Alongside concerns ranging from workplace stress to a lack of clarity over questions such as pay scales and office attendance policy, workers have also raised concerns over the quality of AI in content moderation, believing such ‘alternatives’ to human work to be too vulnerable and ineffective to maintain TikTok user safety.”
John Chadfield, the CWU’s national officer for tech, was even more direct, telling AFP that, “TikTok workers have long been sounding the alarm over the real-world costs of cutting human moderation teams in favour of hastily developed, immature AI alternatives.” Chadfield went further, alleging that the timing of the layoffs—just a week before staff were due to vote on union recognition—“stinks of union-busting and putting corporate greed over the safety of workers and the public.”
The layoffs are not limited to the UK. TikTok’s global restructuring also affects moderator jobs in South and Southeast Asia, including Malaysia. The company is centralizing its moderation operations in regional hubs such as Lisbon and Dublin, and recently closed its Berlin trust and safety team. Under the proposed plan, affected UK employees may see their roles shifted to other European offices or handed off to third-party providers, with a reduced number of trust and safety positions remaining on British soil.
All of this comes as the UK’s Online Safety Act takes effect, imposing stiff penalties—up to £18 million or 10% of global turnover, whichever is greater—on tech companies that fail to prevent the spread of harmful material. The legislation requires platforms to implement robust systems for age verification and content removal, especially when it comes to protecting minors from pornography and from content promoting suicide and eating disorders. TikTok has responded by introducing “age assurance” controls powered by machine learning, though the regulator Ofcom has yet to endorse these AI-based systems. Critics point out that the act, while well-intentioned, may not go far enough: downloads of VPN apps have reportedly surged by 1,800%, suggesting that tech-savvy teens can still skirt age checks with relative ease.
TikTok, for its part, insists that the restructuring is about efficiency and technological progress, not cost-cutting at the expense of safety. A spokesperson told Sky News, “We are continuing a reorganisation that we started last year to strengthen our global operating model for Trust and Safety, which includes concentrating our operations in fewer locations globally to ensure that we maximize effectiveness and speed as we evolve this critical function for the company with the benefit of technological advancements.” The company also maintains that its commitment to user privacy and safety remains unchanged, and it plans to open a new office in central London in 2026 as part of ongoing investment in its largest European community—over 30 million Britons use TikTok each month.
Still, the economic context cannot be ignored. TikTok’s revenue across the UK and Europe soared by 38% last year, reaching $6.3 billion, while pre-tax losses shrank from $1.4 billion in 2023 to $485 million in 2024, according to The Sun. The company’s growth comes amid a wave of job cuts sweeping the UK: Walkers’ parent company PepsiCo is consulting on cutting over 500 roles, Santander has axed more than 2,000 jobs, and retailer River Island faces hundreds of layoffs as it shutters dozens of stores. For many, TikTok’s decision feels like part of a broader trend of companies prioritizing automation and restructuring in pursuit of profitability.
At the heart of the controversy lies the question of whether AI can truly replace the nuanced judgment of human moderators. While TikTok points to its partnership with fact-checking organizations such as AFP—which is paid to verify potentially false information on the platform—unions and critics argue that algorithms are still too “vulnerable and ineffective” to catch the subtleties of harmful content or to respond to rapidly evolving online threats. The CWU’s Chadfield summed up the skepticism: “Many of our members believe the AI alternatives being used are hastily developed and immature.”
There are, as ever, two sides to the argument. Proponents of AI moderation highlight the mental health toll that reviewing disturbing content can take on human staff, as well as the efficiency gains of automated systems. They argue that, as platforms scale to billions of users, only AI can keep pace with the sheer volume of uploads—after all, TikTok claims AI now flags the vast majority of violative content before it ever reaches the public eye. Critics counter that over-reliance on technology risks missing context, nuance, and the very real dangers that slip through algorithmic cracks.
As the dust settles, TikTok’s restructuring will serve as a test case for the entire social media industry—a glimpse into a future where algorithms and automation increasingly shape what the world sees online. For the hundreds of workers whose jobs are now in limbo, and for the millions of users who rely on these platforms for connection and information, the stakes could hardly be higher.
One thing is certain: as technology races ahead, the debate over how best to keep the digital world safe—and who should bear that responsibility—shows no sign of slowing down.