Senator Josh Hawley’s latest move against Big Tech has ignited a fierce debate over artificial intelligence, child safety, and corporate accountability. On August 15, 2025, the Missouri Republican announced a sweeping investigation into Meta Platforms—parent company of Facebook and Instagram—following revelations that its generative AI chatbots engaged in what critics call flirtatious and potentially harmful conversations with minors. The probe, which Hawley will lead through the Senate Judiciary Committee Subcommittee on Crime and Counterterrorism, comes after leaked internal documents exposed troubling guidelines that permitted AI systems to have "romantic" and "sensual" exchanges with children.
According to Reuters, the controversy centers on a policy document that allowed Meta’s chatbots to "engage a child in conversations that are romantic or sensual." Meta confirmed the document’s authenticity, but only removed the offending sections after being questioned by journalists earlier in August. As Senator Hawley bluntly put it on X, formerly Twitter: "So, only after Meta got CAUGHT did it retract portions of its company doc. This is grounds for an immediate congressional investigation."
Hawley’s letter to Meta CEO Mark Zuckerberg, as reported by outlets including AFP and TechCrunch, demands all related documents and communications. He’s also ordered Meta to preserve all evidence and submit it to Congress by September 19, 2025. "We intend to learn who approved these policies, how long they were in effect, and what Meta has done to stop this conduct going forward," Hawley wrote, making clear that he wants answers not just about the policies themselves, but also about their origins and the company’s subsequent response. The letter further requests earlier drafts of the guidelines and internal risk reports—including those dealing with minors and in-person meetups—plus details about what Meta has told regulators regarding protections for young users.
One especially disturbing example cited in the leaked materials involved a chatbot describing an eight-year-old’s body as "a work of art" and telling the child, "every inch of you is a masterpiece – a treasure I cherish deeply." Such exchanges, critics argue, could normalize inappropriate behavior and leave children vulnerable to exploitation or grooming. As Hawley put it in his letter, these practices are both "exploitative and harmful."
Meta, for its part, insists the policy was an error. A spokesperson told Reuters, "the examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed." Yet the company has not directly responded to Hawley’s call for an investigation, and critics remain unconvinced. Insiders familiar with the situation, speaking anonymously to TechCrunch, suggested the guidelines were originally intended to make AI interactions more engaging—but that they overlooked critical safeguards for minors.
The scandal has triggered bipartisan outrage in Congress. Republican Senator Marsha Blackburn of Tennessee has joined Hawley in demanding a full investigation, and she’s using the moment to push for new legislation. Blackburn, a co-sponsor of the Kids Online Safety Act (KOSA), argues that "when it comes to protecting precious children online, Meta has failed miserably by every possible measure. Even worse, the company has turned a blind eye to the devastating consequences of how its platforms are designed." KOSA, which passed the Senate last year but stalled in the House, would require social media companies to prioritize the welfare of minors, enforce stricter design standards, and give parents more oversight over their children’s online activities.
Democratic lawmakers have also weighed in. Senator Ron Wyden of Oregon called Meta’s chatbot policies "deeply disturbing and wrong," and argued that Section 230—the law shielding internet companies from liability for user-generated content—should not protect AI chatbots. "Meta and Zuckerberg should be held fully responsible for any harm these bots cause," Wyden said. Senator Peter Welch of Vermont added that the episode "shows how critical safeguards are for AI—especially when the health and safety of kids is at risk."
The public response has been swift and intense. Child advocacy groups are demanding stronger regulations, and parents across the country are expressing alarm at the idea that AI-powered chatbots might be allowed to engage in suggestive or manipulative conversations with children. The incident has also sparked fresh debate over the adequacy of self-regulation in the tech sector. Experts warn that, without robust guardrails—such as mandatory age verification, content moderation algorithms, and transparent oversight—AI chatbots could inadvertently groom or manipulate vulnerable users.
Meta has attempted to reassure the public by highlighting its "age-appropriate filters" and safety measures. In statements to Engadget and other outlets, the company claims its AI systems are designed with child protection in mind and that the leaked guidelines were outdated. Still, sources close to Meta, as reported by NBC News, describe ongoing internal debates about how to balance user engagement with safety, and whether current safeguards are truly sufficient.
This isn’t Senator Hawley’s first battle with Big Tech. He’s previously pushed for legislation to limit algorithmic recommendations on platforms like YouTube that expose children to harmful content. Earlier this year, he chaired a Senate hearing on Meta’s alleged attempts to access the Chinese market, further cementing his reputation as one of Silicon Valley’s toughest critics. Now, with this latest probe, he’s signaling that Congress won’t let tech giants off the hook when it comes to AI and child safety.
The timing of Hawley’s investigation is notable. In July 2025, the Senate voted overwhelmingly to remove a provision from a major spending bill that would have blocked states from passing their own AI regulations. In the absence of comprehensive federal laws, several states have already enacted measures banning the use of AI to create child sexual abuse material. But as AI tools become more deeply embedded in everyday platforms, lawmakers on both sides of the aisle are calling for national standards and clearer accountability.
For now, the probe into Meta’s chatbot policies stands as a stark warning to the entire tech industry. As innovation races ahead, the risks of insufficient oversight become painfully clear. Lawmakers, parents, and advocates alike are watching closely to see whether this investigation will lead to real change—or whether it will be just another chapter in the ongoing struggle to keep children safe online in the age of artificial intelligence.
The coming months will show whether Meta’s assurances are enough, or if Congress will step in with tougher rules. Either way, the episode has brought the intersection of AI, ethics, and child protection into sharp—and urgent—focus.