On August 15, 2025, a political firestorm erupted in Washington as Senator Josh Hawley (R-MO) announced a formal investigation into Meta, the parent company of Facebook, over alarming revelations about its artificial intelligence chatbots. The controversy centers on internal company documents that reportedly permitted these AI bots to engage in "romantic" and "sensual" conversations with children—a revelation that has shaken both lawmakers and the tech industry.
The investigation was triggered by a Reuters review of a 200-page Meta internal document titled "GenAI: Content Risk Standards." According to Reuters, this document outlined the behavioral guidelines Meta workers were to follow when training AI chatbots. Among the most disturbing examples were scenarios in which a chatbot could respond to an eight-year-old child who had removed their shirt by saying, "Your youthful form is a work of art. Your skin glows with a radiant light, and your eyes shine like stars. Every inch of you is a masterpiece – a treasure I cherish deeply." Such language, the document indicated, was considered "acceptable" in certain contexts, though it also stated that describing a child under 13 as sexually desirable or engaging in explicit sexual role play was not permitted.
This distinction did little to calm the outrage. As reported by TechCrunch and other outlets, Hawley wasted no time in publicly condemning the company. In a letter addressed to Meta CEO Mark Zuckerberg, Hawley wrote, "It's unacceptable that these policies were advanced in the first place." He demanded that Meta immediately preserve all relevant records and produce responsive documents so Congress could investigate what he termed "these troubling practices." Hawley made his position crystal clear on X (formerly Twitter), posting, "Is there anything – ANYTHING – Big Tech won’t do for a quick buck?"
Hawley, who chairs the Senate Judiciary Subcommittee on Crime and Counterterrorism, laid out an ambitious scope for the investigation. He requested all versions of the GenAI Content Risk Standards, documentation on how these guidelines were enforced, risk reviews and incident reports referencing minors, as well as communications with regulators. Meta was given a deadline of September 19, 2025, to comply. "We intend to learn who approved these policies, how long they were in effect, and what Meta has done to stop this conduct going forward," Hawley wrote in his letter, as cited by multiple news reports including TNND and Mathrubhumi.
The backlash was not limited to Hawley. Senator Marsha Blackburn (R-TN) also voiced support for the probe, telling TechCrunch, "When it comes to protecting precious children online, Meta has failed miserably by every possible measure. Even worse, the company has turned a blind eye to the devastating consequences of how its platforms are designed. This report reaffirms why we need to pass the Kids Online Safety Act." The Kids Online Safety Act, a bipartisan bill currently before the Senate, aims to increase online protections for minors by imposing new obligations on tech companies and online platforms, including a "duty of care" when minors use their products.
Senator Brian Schatz (D-HI) added his voice to the chorus of condemnation. As quoted by Quartz, Schatz wrote on X, "META Chat Bots that basically hit on kids - f—k that. This is disgusting and evil. I cannot understand how anyone with a kid did anything other than freak out when someone said this idea out loud. My head is exploding knowing that multiple people approved this." His blunt reaction captured the bipartisan alarm over the revelations.
Meta, for its part, has insisted that the examples in question were never meant to be implemented and were inconsistent with company policy. Spokesperson Andy Stone told Reuters, "The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed." He further explained, "We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors." In a statement to TechCrunch, Meta reiterated that its teams maintain hundreds of examples, notes, and annotations as they grapple with hypothetical scenarios, but said the problematic examples highlighted in the Reuters report were "erroneous and inconsistent" with company policy and have since been removed from training materials.
Despite these assurances, lawmakers remain skeptical. As Hawley pointed out, Meta only took corrective action "after this alarming content came to light." The senator's letter seeks to uncover who within the company approved the now-retracted policies, how long they were in effect, and what steps Meta has taken to prevent similar lapses in the future. He has also asked for a comprehensive list of every product governed by the controversial standards, as well as the identities of the individuals responsible for changing the policy.
The controversy has reignited broader debates about the responsibilities of tech companies in safeguarding children online. According to Quartz, the Senate recently voted to remove a federal provision that would have prevented states from passing their own AI regulations, opening the door for more aggressive state-level action. Several states, including Illinois, Nevada, and Utah, have already enacted laws restricting the use of artificial intelligence in contexts related to children, such as therapy and the creation of child sexual abuse material.
The incident has also provided fresh momentum for legislative efforts like the Kids Online Safety Act. Senator Blackburn’s call to action reflects mounting frustration among lawmakers who feel that voluntary corporate policies are insufficient. "Tech firms cannot be trusted to protect underage users when they have refused to do so time and time again. It’s time to pass KOSA and protect kids," Blackburn stated.
As the September 19 deadline approaches, Meta faces intense scrutiny not only from Congress but also from a public increasingly concerned about the intersection of artificial intelligence and child safety. The outcome of Hawley’s investigation could have far-reaching implications for how tech giants develop, test, and deploy AI products that interact with minors.
For now, the spotlight remains firmly fixed on Meta. The company's next steps, and the transparency it offers to lawmakers and the public, will likely shape both the regulatory and technological responses to the rapidly evolving world of AI-powered communication.
How Meta responds in the coming weeks may well determine not just the future of its AI products, but also the broader standards for child safety in the digital age.