On December 10, 2025, Australia will roll out one of the world’s most sweeping online reforms: a law banning anyone under 16 from holding accounts on major social media platforms, including Facebook, Instagram, TikTok, Snapchat, and YouTube. It’s a move that’s sending ripples far beyond Australia’s shores, with parents, tech giants, and lawmakers from North America to Europe watching closely—and, in some cases, nervously.
The law, passed after a whirlwind legislative process in late 2024, requires social media companies to take “reasonable steps” to ensure users are at least 16 years old. Platforms that fail to comply could face fines of up to 49.5 million Australian dollars (about US$33 million), according to the BBC. The onus is on the tech companies, not on parents or children, to enforce the rule. But as the launch date arrives, a fundamental question hangs in the air: will it work?
Supporters hope the answer is yes. For years, parents and advocacy groups have sounded the alarm about the risks children face online—cyberbullying, exposure to violence and pornography, relentless algorithmic pressure, and the collection of personal data before kids can even understand the stakes. “We have zero faith the tech companies will do anything other than protect their profits,” Dany Elachi, a father of five and anti-smartphone campaigner, told the BBC. He and other parents have pushed for a tougher minimum age, arguing that current content filters and age checks are far too weak.
Australia’s Prime Minister Anthony Albanese echoed these concerns when he announced the legislation, saying, “This one is for the mums and dads... They, like me, are worried sick about the safety of our kids online.” The law, he promised, would help free children from addictive social media algorithms and reduce exposure to harmful content, cyberbullying, and online child exploitation.
The idea of a social media ban for under-16s is gaining traction elsewhere. Some European governments and advocacy groups are calling for similar restrictions. In Canada, interest is rising too: a recent proposal to limit social media access for those under 16 cites youth safety and online risks. A recent survey reported by Maclean’s found that up to two-thirds of Canadians, including many under 30, support restricting social media use for children under 16.
For many parents, the appeal is clear: a blanket rule removes daily battles over screen time and simplifies digital parenting. “No more arguing, no sneaking, no endless debates about when it’s ‘safe enough,’” wrote Maclean’s. In theory, the law could help establish a new social norm, making it easier for families to say “no” without feeling like outliers.
But the reality on the ground is far more complicated. Just ask 13-year-old Isobel, who told the BBC she managed to bypass Snapchat’s age verification in under five minutes by using a photo of her mother. “I got a photo of my mum, and I stuck it in front of the camera and it just let me through. It said thanks for verifying your age,” Isobel said. Her mother, Mel, who had hoped the ban would help protect her daughter, could only laugh. “This is exactly what I thought was going to happen,” Mel admitted.
Isobel’s story isn’t unique. Tips for circumventing age checks are already spreading online, from signing up with a parent’s email to using VPNs that disguise a user’s location. A University of Melbourne experiment found that a $22 Halloween mask could defeat facial recognition technology. Polling conducted for the Australian government in May 2025 indicated that a third of parents planned to help their children skirt the ban. “It’s a constant running battle to ensure that the mitigations are improving, literally on a daily basis,” said Luc Delany, an executive for K-ID, which performs age assessments for Snapchat.
Enforcement, then, is a major challenge. A government-funded, industry-run trial of age assurance methods found that while ID checks are the most accurate, they require users to hand over sensitive documents—something most Australians don’t trust social media firms with. Age inference and facial assessment technologies, already used by Meta and Snapchat, lack the precision needed for teenagers. The very users the law targets, those just under or over 16, are the hardest to distinguish. “When you go to a bottle shop and they look you up and down and go, ‘Mmm not really sure,’ they ask you for some ID… It’s the same principle,” explained Tony Allen, head of the UK-based Age Check Certification Scheme, who led the trial. But critics, including former advisory board members, have raised concerns about privacy and potential bias.
There are also broader worries that the ban could backfire. Critics warn that pushing kids off major platforms may simply drive them to less regulated corners of the internet, where risks can be even greater. “You’re not stopping behaviour, you’re just moving that behaviour to other platforms,” said Tim Levy, head of online safety company Qoria. Some point to gaming chatrooms or anonymous video chat sites—areas flagged by law enforcement as hotbeds for predatory behaviour and radicalisation. Children can still browse apps like TikTok and YouTube without accounts, which may expose them to unfiltered content and targeted ads.
Legal challenges are already underway. Two teenagers have filed a case in Australia’s highest court, calling the law unconstitutional and Orwellian. Alphabet, the parent company of YouTube and Google, is reportedly considering its own legal action. Human rights groups and legal experts have voiced concerns about free expression and privacy. “There’s nothing magical about the age of 16,” said former children’s commissioner Anne Hollonds, who has long lobbied for stronger online protections but called the new law a “blunt” tool. More than 140 experts signed an open letter warning that the policy could do more harm than good, especially for vulnerable children who find support and community online.
Even some of the law’s architects admit it’s a work in progress. “It’s going to look a bit untidy on the way through. Big reforms always do,” Communications Minister Anika Wells told the BBC. She described the law as a “treatment plan” rather than a cure, with further regulatory steps—including a digital duty of care for tech firms—on the horizon. “This is, at the end of the day, work to try and save a generation. It’s worth doing.”
For now, the world is watching. The law could mark a turning point in how societies balance digital freedoms with child safety. Whether it succeeds or stumbles, it’s forcing a long-overdue conversation about what children need—and what they risk—when they go online.