Denmark is making headlines across Europe and beyond after passing one of the continent’s toughest laws restricting children’s access to social media. On November 8, 2025, the Danish government, backed by parties such as the Conservative People’s Party and the Radical Left, agreed to ban social media use for children under 15, drawing inspiration from similar efforts in Australia and reflecting a wider global push to protect youth online. The law, however, is not without its critics, and its practical impact remains to be seen as the world watches Denmark’s bold experiment unfold.
The new Danish law raises the minimum age for social media access to 15, a significant jump from the industry’s standard of 13. According to The Associated Press, the restriction covers widely used platforms like TikTok, Snapchat, Instagram, and Reddit. Notably, there’s a provision for parental consent: children aged 13 and 14 may be granted access, but only after a thorough assessment and formal parental approval. This approach is designed to strike a balance between shielding younger teens from potential online harms and allowing families some flexibility, but it’s also become a lightning rod for debate.
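To make those tiers concrete, here is a minimal sketch, in Python, of how a platform's onboarding gate might encode the rule as described in this article: 15 and over allowed, 13- and 14-year-olds only with a completed assessment and formal parental consent, under 13 blocked. The field names are purely illustrative; Denmark has not published a technical specification, and the "verified age" input is assumed to come from some government-backed check rather than a self-declared birthdate.

```python
from dataclasses import dataclass


@dataclass
class SignupRequest:
    verified_age: int          # age established by a verified check, not self-declared (assumption)
    parental_consent: bool     # formal parental approval on file for the 13-14 bracket
    assessment_passed: bool    # outcome of the "thorough assessment" the law requires


def may_create_account(req: SignupRequest) -> bool:
    """Return True if onboarding can proceed under the tiers described above."""
    if req.verified_age >= 15:
        return True
    if req.verified_age >= 13:
        # 13- and 14-year-olds need both the formal assessment and parental approval.
        return req.parental_consent and req.assessment_passed
    return False  # under 13: no access


print(may_create_account(SignupRequest(14, True, True)))   # True
print(may_create_account(SignupRequest(14, True, False)))  # False
print(may_create_account(SignupRequest(12, True, True)))   # False
```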
Prime Minister Mette Frederiksen has been outspoken in her support for the measure, declaring that social media platforms are “stealing our children’s childhood.” She has emphasized the urgency of government action, arguing that digital platforms have operated for too long without sufficient oversight when it comes to children’s well-being. Denmark’s Minister for Digitalisation, Caroline Stage, echoed this sentiment, stating the government is “finally drawing a line in the sand” to protect children from the digital dangers lurking on popular platforms.
Central to Denmark’s plan is a government-verified age verification tool. While most social media sites today rely on self-declared birthdates or basic checks that are easily bypassed, Danish lawmakers are intent on putting the onus of age verification on both companies and the state. The government has announced plans to develop its own app for age checks, likely leveraging Denmark’s established national eID system, which is already widely used for digital identification. This state-backed approach is intended to ensure that users can prove their age without giving up more personal information than necessary, a concern that resonates strongly in a country with robust privacy protections and a keen eye on compliance with the European Union’s General Data Protection Regulation (GDPR).
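The technical details of that tool have not been published, but the data-minimization goal can be illustrated with a small sketch: the state-run age-check app issues a short-lived attestation containing only an "over 15" claim, and the platform verifies it without ever seeing a birthdate or identity data. Everything below is hypothetical; a real deployment would presumably use public-key signatures issued through the national eID rather than the shared-secret HMAC used here for brevity.

```python
import hmac
import hashlib
import json
import time

# Hypothetical key standing in for the state age-check service's signing key.
AGE_SERVICE_KEY = b"demo-key-not-real"


def issue_attestation(over_15: bool, ttl_seconds: int = 300) -> dict:
    """Simulate the government app issuing a minimal 'over 15' attestation.

    Only the boolean claim and an expiry are included -- no birthdate and no
    identity data -- mirroring the data-minimization goal described above.
    """
    payload = {"over_15": over_15, "expires_at": int(time.time()) + ttl_seconds}
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(AGE_SERVICE_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}


def platform_accepts(attestation: dict) -> bool:
    """Platform-side check: verify the tag and expiry, and learn nothing else."""
    body = json.dumps(attestation["payload"], sort_keys=True).encode()
    expected = hmac.new(AGE_SERVICE_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, attestation["tag"]):
        return False
    payload = attestation["payload"]
    return payload["over_15"] and payload["expires_at"] > time.time()


if __name__ == "__main__":
    token = issue_attestation(over_15=True)
    print("Access granted:", platform_accepts(token))
```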
While details of enforcement are still being finalized, including the rollout timeline and which authority will audit compliance, the law's backbone is clear: platforms will be required to keep underage users out, and parental consent will be formalized and subject to meaningful assessment. Schools and pediatric associations are expected to play a role in shaping these assessments, with a focus on digital literacy and mental health screening.
This Danish initiative is part of a broader global trend. Australia has already passed a blanket ban on children under 16 accessing major social media platforms, which takes effect on December 10, 2025. The Australian law comes with teeth: companies that fail to keep underage users off their platforms could face fines of up to 50 million Australian dollars. Australian authorities have cited concerns about social media's impact on youth health—including sleep disruption, reduced concentration, and increased stress—as key motivations for their action. Denmark's policymakers share these concerns, with the new law aimed squarely at protecting children's mental well-being and insulating them from the pressures of an always-connected digital world.
Similar conversations are taking place elsewhere. The United Kingdom’s Online Safety Act mandates more stringent child protections and pushes platforms toward stronger age assurance. In the United States, although there’s no federal standard, states like Utah and Arkansas have enacted laws requiring parental consent and tighter age checks—though many of these are currently tied up in court battles over privacy and First Amendment rights. The Pew Research Center reports that 95% of U.S. teens use YouTube, with TikTok, Instagram, and Snapchat each used by a majority of teens as well. Ofcom, the UK’s communications regulator, has also documented widespread social media use among 12- to 15-year-olds, with many younger children circumventing age restrictions altogether.
But is banning social media for young teens the right solution? The evidence is, at best, nuanced. In 2023, the U.S. Surgeon General issued an advisory linking heavy social media use to depression, anxiety, sleep disruption, and body image concerns, particularly among adolescent girls. The American Psychological Association has recommended age-appropriate design, parental oversight, and in-app controls to limit exposure to bullying and problematic content. However, large-scale studies by the Oxford Internet Institute and others suggest that the average effects on well-being are modest and can vary widely from person to person and platform to platform.
This ambiguity has fueled a policy divide. Some governments, like Denmark and Australia, are opting for strict age boundaries in the name of precaution. Others advocate for empowering parents and customizing user experiences rather than imposing hard age limits. In Denmark, left-leaning parties have criticized the parental consent provision, calling it a “de facto 13-year limit” and arguing that it doesn’t go far enough to address the more insidious harms of algorithm-driven content. Globally, over 140 researchers signed an open letter to the Australian government last year, warning that age limits are a blunt instrument and may not address the real risks children face online.
Mental health professionals have also raised concerns about unintended consequences. The Australian Psychological Society has cautioned that a sudden removal of social media access could lead to withdrawal symptoms in young users, including anxiety and irritability. Some experts warn that vulnerable children could feel even more isolated if cut off from online support networks. In short, while the intention behind these laws is to safeguard children, there’s a risk of overshooting the mark and creating new problems in the process.
For parents, Denmark’s new law offers a clearer framework and a more formal role in deciding when their children are ready for social media. For platforms, it means adapting onboarding and identity checks to comply with national standards—a change that could eventually ripple across the European Union, especially as the Digital Services Act increases regulatory scrutiny. For teens, the most immediate changes may come in the form of stricter account creation processes and more robust age checks, possibly even at the app store level.
Ultimately, Denmark is betting that stricter age limits, backed by verifiable identity and enforceable duties for platforms, will make digital environments safer for children. The rollout of the new law will be closely watched by privacy advocates, tech companies, parents, and policymakers worldwide. Whether Denmark’s gamble pays off—or serves as a cautionary tale—remains to be seen. For now, the country stands at the forefront of a contentious and evolving debate about how best to protect the youngest citizens in an increasingly digital world.