Meta, the tech giant formerly known as Facebook, is once again under congressional scrutiny following explosive allegations that it suppressed internal research exposing child safety risks in its virtual reality (VR) products. On September 9, 2025, four current and former employees, now acting as whistleblowers, submitted thousands of pages of documents and affidavits to Congress that, they say, reveal a pattern of prioritizing legal and reputational protection over the well-being of children.
These whistleblowers, represented by the nonprofit Whistleblower Aid, allege that Meta’s legal team actively censored or altered findings that might have shed light on dangers facing young users of its VR platforms. According to The Washington Post, internal researchers were told to steer clear of collecting data on children because of “regulatory concerns,” and were coached on how to handle sensitive findings that could trigger negative publicity or regulatory scrutiny. The company’s lawyers, it is claimed, sometimes advised researchers to loop them into discussions so that attorney-client privilege would shield the findings, or to phrase results more vaguely, avoiding direct terms like “not compliant” or “illegal.”
One especially troubling incident, detailed in documents and interviews, took place in Germany in April 2023. During an interview conducted by Meta researchers, a mother confidently asserted that her sons were not allowed to interact with strangers while using VR headsets. But her teenage son quickly contradicted her, revealing that he routinely encountered strangers online—and, even more disturbingly, that adults had made sexual advances toward his younger brother, who was under 10 years old. Jason Sattizahn, a Meta researcher who witnessed the interaction, told The Washington Post, “I felt this deep sadness watching the mother’s response. Her face in real time displayed her realization that what she thought she knew of Meta’s technology was completely wrong.”
Yet instead of treating the moment as a catalyst for urgent action, Sattizahn and another researcher claim, their boss ordered the recording of the teen’s account deleted, along with all written records of his comments, even though the material had already been included in an internal report highlighting parents’ and teens’ fears of grooming. This deletion, the whistleblowers argue, was not an isolated incident: they allege that Meta’s legal team repeatedly intervened to shape, censor, or erase research that could highlight potential harms to children.
Meta has flatly denied the allegations. Spokeswoman Dani Lever told The Washington Post that the claims “are based on a few examples stitched together to fit a predetermined and false narrative.” Lever emphasized that Meta has conducted research on youth safety and introduced multiple safeguards, including parental controls and default settings that restrict teen interactions to known contacts. She insisted that any removal of data would have been carried out in accordance with U.S. and European privacy regulations, particularly those governing data collected from minors under 13 without parental consent.
However, the documents submitted to Congress paint a more complex picture. They detail initiatives like “Project Salsa,” which was designed to create supervised accounts for tweens, and “Project Horton,” a $1 million study—ultimately canceled—that aimed to evaluate the effectiveness of Meta’s age-verification tools. Employees raised alarms that children under 13 could easily bypass age restrictions on Meta’s VR platforms, and that parental controls were only implemented after the Federal Trade Commission (FTC) launched an investigation into the company’s compliance with children’s privacy laws.
The controversy comes on the heels of the 2021 whistleblower revelations by Frances Haugen, which exposed Meta’s knowledge that its Instagram platform could harm teen girls’ mental health. In the wake of those disclosures, Meta reportedly tightened its grip on research into sensitive topics, including children, gender, race, politics, and harassment, screening proposed studies and vetoing some outright. The whistleblowers claim these policy changes made such research effectively off-limits, further limiting the company’s ability to understand and address risks to young users.
Adding to Meta’s woes, former employee Kelly Stonelake has filed a lawsuit alleging that the company’s flagship VR platform, Horizon Worlds, lacked adequate safeguards for users under 13 and was plagued by persistent issues with racism. According to Stonelake, during one test, it took an average of just 34 seconds for users with Black avatars to be subjected to racial slurs, including the “N-word” and “monkey.” Meta disputes these claims, stating it has approved nearly 180 Reality Labs-related studies on social issues since early 2022, many of which center on youth safety.
Meanwhile, the issue of child safety in digital spaces extends beyond Meta. OpenAI, the company behind ChatGPT, has announced new measures after two tragic incidents exposed gaps in its response to mental health crises. In one case, teenager Adam Raine died by suicide after discussing self-harm and specific suicide methods with ChatGPT, which reportedly failed to recognize his distress. In another, Stein-Erik Soelberg allegedly used ChatGPT to validate paranoid delusions, culminating in a murder-suicide. In response, OpenAI plans to roll out parental controls within a month that will let parents link their accounts with their teens’, and says it will route sensitive conversations to advanced reasoning models like GPT-5.
Both Meta and OpenAI now find themselves at the center of a broader debate about the responsibilities of tech giants in safeguarding vulnerable users—especially children and teens. While Meta faces a Senate Judiciary Subcommittee hearing to review the latest whistleblower allegations, OpenAI is moving to strengthen its safeguards after tragic failures. The urgency of these issues is clear: as technology becomes more immersive and ubiquitous, the potential for harm increases, and so does the need for robust protections, transparent research, and proactive oversight.
For lawmakers, parents, and advocates, the revelations serve as a stark reminder that the promises of innovation must be matched by an unwavering commitment to user safety—especially for those least able to protect themselves. The coming months will test whether the world’s most powerful tech companies can rise to that challenge, or whether further regulatory intervention will prove necessary.