The UK's sweeping Online Safety Act has formally entered its enforcement phase, marked by the release of comprehensive regulations to guide technology companies on how to address harmful online content. With the compliance clock now running, tech firms have until March 16, 2025, to assess and mitigate the risks of illegal activity on their services, particularly harms involving children and vulnerable users.
Ofcom, the regulatory body overseeing this initiative, published its first set of codes of practice and guidelines for online service providers. These rules stipulate what must be done to combat illegal content—including terrorism, child sexual abuse, and fraud—making providers responsible for the safety of their users.
The Online Safety Act passed through Parliament and became law last October, but today marks the start of its practical enforcement. “This decision on the Illegal Harms Codes and guidance marks a major milestone, with online providers now being legally required to protect their users from illegal harm. Providers now have the duty to assess the risk of illegal harms on their services, with compliance deadlines approaching fast,” Ofcom stated.
Over 100,000 tech firms across the globe may be affected. The law applies not only to large corporations like Meta, Google, and TikTok but also to smaller platforms that host user-generated content. Under the act, violations can draw fines of up to 10% of global annual turnover or £18 million, whichever is greater.
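For a sense of how that penalty cap works in practice, here is a minimal sketch; the turnover figures are hypothetical, and the calculation assumes only the simple "greater of the two" rule described above:

```python
# Illustrative only: turnover figures are hypothetical, and the calculation
# reflects just the "10% of turnover or £18m, whichever is greater" rule.
def max_fine_gbp(global_annual_turnover_gbp: float) -> float:
    """Return the maximum possible fine under the Online Safety Act's cap."""
    turnover_based_cap = 0.10 * global_annual_turnover_gbp  # 10% of turnover
    fixed_cap = 18_000_000                                   # £18 million
    return max(turnover_based_cap, fixed_cap)

# A firm with £50m turnover: 10% is £5m, so the £18m floor applies.
print(max_fine_gbp(50_000_000))      # 18000000.0
# A firm with £1bn turnover: 10% is £100m, which exceeds £18m.
print(max_fine_gbp(1_000_000_000))   # 100000000.0
```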
The enforcement pressure follows public campaigns fueled by incidents perceived as linked to irresponsible social media content, particularly last summer’s riots. Ofcom Chief Executive Melanie Dawes emphasized the regulator’s readiness to enforce compliance vigorously. “We’re able to take strong enforcement actions if providers don’t act swiftly to address risks associated with their services,” Dawes noted, underscoring the urgency of the new rules.
Under the new codes, companies must designate senior executives accountable for user safety, improve their content moderation, and make it easier for users to report harmful content. The guidelines also reflect findings from recent public consultations, in which Ofcom engaged stakeholders including parents, charities, and tech firms to shape best practice.
The codes set out concrete safety measures, such as requiring platforms to adopt automated detection tools for child sexual abuse material using a technique known as “hash-matching.” This approach compares digital fingerprints of uploaded files against databases of known illegal content, helping platforms identify and remove harmful posts quickly.
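As a rough illustration of the idea, the sketch below checks an upload's exact cryptographic fingerprint (SHA-256) against a hypothetical set of known-bad hashes; the hash value shown is a placeholder, and production systems typically use perceptual hashes instead, which also catch re-encoded or lightly edited copies of known material.

```python
import hashlib

# Hypothetical database of fingerprints of known illegal images (placeholder value).
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 fingerprint of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_illegal(path: str) -> bool:
    """Flag an upload if its fingerprint matches a known-bad hash."""
    return sha256_of_file(path) in KNOWN_BAD_HASHES
```

In industry deployments, platforms generally draw on shared hash databases maintained by child-protection organisations rather than building their own lists, and perceptual matching is preferred so that minor edits do not defeat detection.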
Regarding child protection, Ofcom outlined specific steps to minimize risks, urging platforms to strengthen account privacy settings for underage users and create features to shield them from unsolicited communications.
While the new regulations have been widely welcomed for their potential to improve online safety, some child safety advocates expressed disappointment over the pace at which changes are being implemented. Andy Burrows, chief executive of the Molly Rose Foundation, claimed, “Ofcom’s task was to move fast and fix problems; instead of setting ambitious precedents, these initial measures mean preventable harm can still flourish.”
Similarly, the NSPCC (National Society for the Prevention of Cruelty to Children) voiced concerns, stressing the need for stricter rules to identify and address self-harm content that meets criminal thresholds. These statements reflect campaigners' frustration with what they see as a gradual approach to delivering the Online Safety Act's overarching goals.
Looking forward, Ofcom is preparing further codes and guidance focused primarily on child safety, expected to roll out between January and April next year. These additions will address subjects including age verification and broader protections for vulnerable users online. Dawes acknowledged the need to keep pace with technological change, indicating that future updates may respond to developments such as generative AI and other emerging digital trends.
Critically, the Online Safety Act establishes potential legal repercussions for senior executives at companies failing to comply, indicating substantial accountability at the highest levels of tech operations. “For the first time, tech firms will be forced to proactively take down illegal content, and if they don’t, they will face enormous fines,” observed Peter Kyle, the UK’s Technology Secretary, showcasing the government’s determination to create safer online ecosystems.
The guidelines released today will face considerable scrutiny as tech companies adapt their operations to meet the new compliance standards, reviewing their moderation systems and algorithmic transparency. The aim of this shift is to keep platforms open and accessible while making them safe for all users.
The coming months will be pivotal as users, regulators, and technology companies navigate this uncharted territory, embracing the era of regulated online safety.