On September 25, 2025, Meta, the tech giant behind Instagram, Facebook, and Messenger, announced the global expansion of its youth safety initiative, Teen Accounts. The move was billed as a “significant step to help keep teens safe,” promising to place hundreds of millions of young users under default safety restrictions across its platforms. Yet, on the very same day, a damning report surfaced, challenging the effectiveness of these much-touted protections and raising urgent questions about whether Meta’s promises are anything more than smoke and mirrors.
The report, titled "Teen Accounts, Broken Promises," was released by Cybersecurity for Democracy in partnership with Meta whistleblower Arturo Béjar and child advocacy groups including Fairplay, the Molly Rose Foundation, and ParentsSOS. Their findings were stark: of 47 safety features tested, 30 were either discontinued or entirely ineffective, nine had limitations, and only eight worked effectively to prevent harm. The bottom line? Most of the tools Meta claims are keeping teens safe simply aren’t doing the job.
Researchers conducted their analysis by simulating real user scenarios—setting up fake teenage accounts and probing the boundaries of Meta’s safety net. What they found was deeply troubling. Despite Meta’s restrictions, adult accounts could still message teen users, and teens could message adults who didn’t follow them. Direct messages containing explicit bullying managed to slip past the platform’s filters. Even more alarming, Teen Accounts continued to recommend sexual, violent, and self-harm content to young users. Reporting mechanisms for sexual messages or inappropriate content, researchers said, were largely ineffective.
“For many of the risk scenarios that we are talking about, the teen is seeking out the risky content. That is a normal thing that any parent of a teen knows is, frankly, developmentally appropriate. This is why we parents parent, why we set up guardrails,” Dr. Laura Edelson, co-director of Cybersecurity for Democracy, explained to Mashable. But, she added, Meta’s approach is “ineffective and misinformed.”
Arturo Béjar, the Meta whistleblower who helped lead the research, offered a memorable analogy: “The car is not safe enough to get in.” He likened Meta to a car manufacturer that is supposed to provide robust safety features—like airbags and brakes—so parents and teens can drive with confidence. Instead, he argued, the company’s protections are so weak that the risks are ever-present.
The report’s authors didn’t mince words: “We hope this report serves as a wake-up call to parents who may think recent high-profile safety announcements from Meta mean that children are safe on Instagram. Our testing reveals that the claims are untrue and the purported safety features are substantially illusory.”
Meta, for its part, pushed back strongly on the findings. In a statement to the press, a spokesperson said, “This report repeatedly misrepresents our efforts to empower parents and protect teens, misstating how our safety tools work and how millions of parents and teens are using them today. Teen accounts lead the industry because they provide automatic safety protections and straightforward parental controls. The reality is teens who were placed into these protections saw less sensitive content, experienced less unwanted contact, and spent less time on Instagram at night. Parents also have robust tools at their fingertips, from limiting usage to monitoring interactions. We’ll continue improving our tools, and we welcome constructive feedback – but this report is not that.”
This isn’t the first time Meta has come under fire for its approach to youth safety. In January 2024, CEO Mark Zuckerberg was grilled by the US Senate over the company’s child safety practices and even issued an apology to a group of parents who said social media had harmed their children. Since then, Meta has rolled out a series of new safety measures, including the overhaul of Teen Accounts and the introduction of AI-powered age verification tools. Yet critics argue these moves are more about public relations than meaningful change.
“These shortcomings point to a corporate culture at Meta that puts engagement and profit ahead of safety,” said Andy Burrows, chief executive of the Molly Rose Foundation, which campaigns for stronger online safety laws in the UK. The foundation was established after the death of Molly Russell, a 14-year-old who took her own life in 2017. An inquest in 2022 concluded that the “negative effects of online content” contributed to her death.
The research found that Instagram’s algorithm not only failed to shield teens from harmful content but, in some cases, actively encouraged risky behaviors. Children under 13 were prompted to post content that received sexualized comments from adults, and were exposed to autocomplete suggestions promoting suicide, self-harm, or eating disorders. Researchers even documented videos of children who appeared to be underage asking users to rate their attractiveness, with the platform’s algorithm amplifying such posts for likes and views.
“What Meta tells the public is often very different from what their own internal research shows,” alleged Josh Golin, executive director of nonprofit advocacy group Fairplay. He accused the company of having “a history of misrepresenting the truth.”
Government officials are taking notice. A spokesman for the UK government told the BBC that, under the Online Safety Act, platforms are now legally obliged to protect young people from harmful content, including material that promotes self-harm or suicide. “For too long, technology companies have allowed harmful content to destroy young lives and tear families apart,” the spokesman said.
Meanwhile, child safety advocates are urging lawmakers to go further. Some are calling for the passage of the Kids Online Safety Act (KOSA), a piece of legislation that has become a flashpoint in the debate over free speech and content moderation. Others want the Federal Trade Commission and state attorneys general to use existing laws, such as the Children’s Online Privacy Protection Act and Section 5 of the FTC Act, to pressure Meta into action. In the UK, campaigners are pushing for even stronger online safety legislation.
Meta has made some interim changes in response to recent criticism, such as limiting teen access to the company’s AI avatars after reports surfaced that the avatars could engage in “romantic or sensual” conversations with young users. The company also removed over 600,000 accounts linked to predatory behavior following watchdog studies that continued to find teens exposed to sexual content even after the rollout of Teen Accounts.
Despite these actions, the consensus among researchers, advocates, and many parents is clear: Meta’s youth safety tools are falling far short of their stated goals. As the report authors put it, “User safety tools can be so much better than they are, and Meta’s users deserve a better, safer product than Meta is currently delivering to them.”
With the stakes as high as the mental health and safety of millions of children and teens, the world will be watching closely to see whether Meta’s next steps are genuine reforms or just more empty promises.