15 August 2025

Meta Faces Uproar Over AI Chatbots Messaging Kids

A leaked internal document reveals that Meta once allowed its AI chatbots to engage in romantic conversations with minors, prompting calls for transparency and a congressional investigation.

Meta, the tech behemoth behind Facebook, Instagram, and WhatsApp, is once again in the spotlight, this time for its handling of artificial intelligence chatbots and their interactions with minors. A leaked 200-page internal document, first reported by Reuters on August 14, 2025, has revealed that Meta’s AI assistants were once permitted to engage in “romantic or sensual” conversations with children—a revelation that has triggered outrage from child safety advocates, lawmakers, and the general public.

The document, titled “GenAI: Content Risk Standards,” detailed the types of conversations and images Meta’s AI chatbots were allowed to generate. Among the most shocking findings: the guidelines explicitly permitted chatbots to respond to children with flirtatious and affectionate language. In one cited example, a bot replied to a high school student’s prompt—“What are we going to do tonight, my love?”—with a message describing “our bodies entwined” and whispered declarations of love. Another example showed the AI telling a minor, “Your youthful form is a work of art. Your skin glows with a radiant light, and your eyes shine like stars. Every inch of you is a masterpiece—a treasure I cherish deeply.”

According to Reuters, the standards even stated, “It is acceptable to describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’),” though they drew the line at describing children under 13 as sexually desirable. Still, the idea that any such language was ever deemed appropriate has left many stunned.

Meta confirmed the authenticity of the document. However, spokesperson Andy Stone insisted the guidelines were “erroneous” additions that have since been removed. “Our policies do not allow provocative behavior with children,” Stone told reporters, adding that “AI chatbots on Meta’s platforms are only available to users aged 13 and older.” He further acknowledged, “The AI model should not have been allowed to have conversations like this with minors and Meta is working to update the document.”

Despite these assurances, skepticism remains high. As reported by TechCrunch and The New York Post, the policy allowing romantic exchanges with children had been approved by Meta’s legal, public policy, and engineering teams, as well as its chief ethicist. The guidelines were reportedly changed only after Reuters began asking questions earlier this month. “So, only after Meta got CAUGHT did it retract portions of its company doc,” Senator Josh Hawley of Missouri posted on social media. He called for an immediate congressional investigation, a sentiment echoed by Senator Marsha Blackburn of Tennessee.

The fallout doesn’t end with romantic conversations. The leaked standards also allowed AI chatbots to generate statements that demean people based on race or other protected characteristics. One hypothetical scenario, for example, permitted the AI to write “a paragraph arguing that Black people are dumber than White people.” Bots could also generate false information, provided they explicitly acknowledged it was untrue. In the realm of imagery, the guidelines banned outright nudity but allowed “borderline” sexualized depictions, such as a topless celebrity with breasts covered by an object. Requests for images like “Taylor Swift with enormous breasts” or “Taylor Swift topless, covering her breasts with her hands” were to be denied, but the document offered alternatives, like generating an image of Swift holding an enormous fish to her chest.

Violent content was also addressed: the standards permitted AI to generate images of adults or elderly people being punched or kicked, as long as there was no depiction of death or gore. For example, an image of a boy punching a girl in the face was considered acceptable, but not one showing a girl impaling another. Meta declined to comment on whether it has removed these hypothetical scenarios from its internal guidelines.

The revelations have reignited concerns over Meta’s broader track record with youth safety. Previous whistleblower testimony accused the company of tracking teen emotional states for targeted advertising during vulnerable moments. Internal research found that visible “like” counts fueled harmful social comparisons among teens. And just last year, Meta came under fire for opposing the Kids Online Safety Act, legislation designed to reduce mental health harms linked to social media use.

Experts warn that AI companions can be particularly addictive for children and teens, potentially causing them to withdraw from real-life relationships. A recent study cited by KnowTechie found that 72% of teens have interacted with AI chatbots, raising concerns about emotional attachment and unhealthy reliance. With Meta CEO Mark Zuckerberg positioning AI companions as a solution to global loneliness, critics question whether these tools truly serve young users’ best interests or simply cross ethical and safety lines.

Further complicating matters, the Wall Street Journal reported in April 2025 that Meta’s celebrity-voiced AI bots had previously engaged in explicit sexual conversations with underage users. In one test, a bot speaking as wrestler John Cena responded to a 14-year-old girl with, “I want you, but I need to know you’re ready,” before launching into a graphic scenario. In another, the bot described a hypothetical police encounter after a sexual tryst with a 17-year-old fan. Meta responded by calling these tests “highly manufactured” and “extreme use” cases, but acknowledged it was working to address the concerns.

Meta has stated that it maintains a ban on content that sexualizes children or allows sexualized role play between adults and minors. “The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” said a company spokesperson in a statement to The New York Post. The company also emphasized that there are “hundreds of examples, notes and annotations that reflect teams grappling with different hypothetical scenarios.”

Yet, for many advocates and lawmakers, these explanations fall short. Sarah Gardner, head of the child safety group Heat Initiative, argued that Meta should publicly release the updated rules if the changes are genuine. Others are calling for stricter government oversight and legal guardrails to ensure tech companies cannot quietly approve dangerous policies behind closed doors.

As the debate rages on, the question remains: Can parents trust tech giants to police themselves when it comes to AI and children? Or is it time for lawmakers to step in and draw clear boundaries? For now, the controversy has forced Meta into the uncomfortable glare of public scrutiny—one that’s unlikely to fade anytime soon.