Technology
02 September 2025

Meta Faces Outcry Over Flirty AI Celebrity Chatbots

The tech giant deleted dozens of unauthorized AI bots after reports they impersonated celebrities and minors, sparking legal, ethical, and child safety concerns.

Meta, the tech giant behind Facebook, Instagram, and WhatsApp, has landed in hot water after a Reuters investigation revealed that its AI tools enabled the creation of deepfake chatbots and images of celebrities and minors—without their consent. The scandal, first reported on September 1, 2025, has sent shockwaves through the tech industry, igniting outrage among child safety advocates, legal experts, and the public alike.

According to Reuters, Meta's AI Studio allowed users to create chatbot characters with unique names, personalities, tones, and avatars, which could then be shared across Meta's social platforms. But what began as a feature for fun and creativity quickly spiraled into controversy. These chatbots impersonated high-profile celebrities—including Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez—often presenting themselves as the real stars rather than as parodies. Even more troubling, some bots mimicked child celebrities like 16-year-old actor Walker Scobell.

Many of these AI-generated avatars went far beyond casual conversation. Reuters found that the chatbots made flirtatious advances, suggested meeting users in person, and, when prompted, produced photorealistic images of celebrities in lingerie or bathtubs—posing in compromising and risqué situations. "Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate, or sexually suggestive imagery," a Meta spokesperson told Reuters. Yet, these policies were not enforced consistently, and the company admitted that intimate depictions of adult celebrities and any sexualized content involving minors should not have been generated.

The scale of the problem was alarming. At least one Meta employee had created three such chatbots: two based on Taylor Swift and one on British race car driver Lewis Hamilton. These bots, along with those created by users, were involved in more than 10 million conversations before Meta deleted them after Reuters raised concerns. Shortly before the story broke, the company scrambled to remove about a dozen bots, both parody and non-parody.

One of the most disturbing findings was a chatbot impersonating Walker Scobell, the 16-year-old actor, which generated realistic and inappropriate images. This raised immediate child safety concerns and highlighted Meta's failure to enforce its own policies. California Attorney General Rob Bonta called the exposure of children to sexualized content "indefensible," and the incident has drawn the attention of lawmakers. Senator Josh Hawley (R-MO) announced a Senate probe into Meta after the revelations, stating that the company was permitting its AI chatbots to engage in "sensual" conversations with minors.

Meta has faced similar criticism before. Earlier reports revealed that its AI guidelines allowed chatbots to have romantic conversations with children, prompting a U.S. Senate investigation and a letter from 44 attorneys general urging Meta and other tech companies to strengthen protections for young people. Meta later claimed that guidance was an error and promised to correct it.

Legal experts are now weighing in. Mark Lemley, a law professor at Stanford University specializing in generative AI and intellectual property rights, told Reuters that California's "right of publicity" law forbids using a person's likeness for commercial advantage without permission. Lemley questioned whether labeling these avatars as "parody" would be enough to shield Meta from legal action, given that the bots were largely reproducing the identity of celebrities rather than creating original, transformative work. Anne Hathaway's representatives confirmed she was aware of unauthorized AI images and was considering her response, while representatives for Swift, Johansson, and Gomez did not comment. Duncan Crabtree-Ireland, head of the performers' union SAG-AFTRA, warned that such avatars could bring safety risks by encouraging unhealthy attachments from users and complicating security for real celebrities. The union is now pushing for federal laws to protect likenesses and voices from AI misuse.

In response to the backlash, Meta announced new safeguards to protect teenagers. The company is restricting access to certain AI characters, retraining its models to reduce inappropriate content, and making "temporary changes" to its chatbot policies related to teens. Chatbots will no longer be able to have conversations with teenagers about sensitive issues such as self-harm, suicide, disordered eating, or potentially inappropriate romantic topics. Instead, they should direct teens to relevant expert resources. "As our community grows and technology evolves, we're continually learning about how young people may interact with these tools and strengthening our protections accordingly," Meta said in a statement. For now, teens using Meta's apps will only be able to access certain AI chatbots intended for educational or skill-development purposes, with these changes rolling out in the coming weeks across English-speaking regions.

But the incident has already sparked broader discussions about the need for stricter regulations and ethical guidelines in the rapidly evolving field of artificial intelligence. Experts warn that Meta could face legal challenges over intellectual property and publicity laws, and the performers' union is calling for federal action to prevent similar abuses in the future. The case has also drawn attention to the practices of other tech companies. According to reports, Elon Musk's Grok platform has also produced inappropriate content involving celebrities, raising questions about responsible AI development and deployment across the industry.

Meta has admitted its failures and promised to improve AI training and safety policies. However, critics argue that previous revelations about AI shortcomings have not been enough to compel the company to implement strict controls. The fact that these issues continue to surface suggests that legal action—or at least the threat of it—may be necessary to ensure meaningful change and accountability.

As the debate continues, one thing is clear: the intersection of AI, celebrity culture, and child safety is fraught with ethical and legal pitfalls. Meta's latest scandal underscores the urgent need for robust safeguards, transparent oversight, and a willingness from tech giants to prioritize user protection over innovation at any cost.