Technology
24 October 2025

Meta Unveils Parental Controls for Teen AI Chats

After fierce criticism and lawsuits over teen deaths, Meta's Instagram and rival AI platforms introduce new safeguards to protect minors from inappropriate content and emotional risks online.

On October 17, 2025, Meta, the parent company of Instagram and Facebook, unveiled a sweeping set of parental controls and teen safety features aimed at addressing mounting concerns over the influence of artificial intelligence (AI) chatbots on young users. The company’s move, announced in back-to-back statements and detailed by both Reuters and the East Bay Times, comes in the wake of fierce criticism over the behavior of its AI-powered chatbots and growing scrutiny from U.S. regulators.

Meta’s new tools will allow parents to disable their teens’ private chats with AI characters, a change that’s set to debut on Instagram in early 2026 in the United States, United Kingdom, Canada, and Australia. According to Instagram head Adam Mosseri and Chief AI Officer Alexandr Wang, these features are designed to make social media platforms safer for minors after reports surfaced of chatbots engaging in flirty or provocative conversations with underage users. As Reuters reported, the company has faced criticism for allowing such interactions, prompting Meta to overhaul its approach to teen safety.

“Our AI experiences for teens will be guided by the PG-13 movie rating system, as we look to prevent minors from accessing inappropriate content,” Meta said in a blog post earlier this month. The company emphasized that parents will have the ability to block specific AI characters and view broad topics their teens discuss with chatbots and Meta’s AI assistant—without having to turn off AI access entirely. Even if parents choose to disable one-on-one chats with AI characters, Meta’s AI assistant will remain available with age-appropriate defaults.

These new supervision features build on protections already in place for teen accounts. Meta uses AI-driven signals to place users it suspects are minors under teen protections, even when those users claim to be adults. The company insists its AI characters are programmed not to engage in discussions about self-harm, suicide, or disordered eating with teens. Still, a September 2025 report found that many of Instagram’s safety features either do not work well or are missing altogether, a criticism Meta says it is working to address.

Meta’s efforts are not occurring in a vacuum. The broader tech industry is under increasing pressure to protect minors from the risks of AI chatbots. In September, OpenAI introduced parental controls for its popular ChatGPT platform after a high-profile lawsuit alleged that its chatbot played a role in a teen’s suicide by providing detailed instructions on self-harm. OpenAI is now developing systems to detect whether a user is under 18, automatically placing them in a teen mode if their age is unclear. Parents can also link accounts, disable certain features, receive alerts if their teen appears distressed, and set blackout hours when ChatGPT cannot be used.

Character.AI, another major player in the AI chatbot space, has rolled out a more restricted version of its platform for teens. This version uses a dedicated language model designed to filter out sensitive or suggestive content and block rule-violating prompts before they reach the chatbot. Teens are limited to a smaller pool of characters, with those tied to mature themes hidden or removed. The company’s new “Parental Insights” feature provides weekly summaries of a teen’s activity—such as time spent on the app and which bots they interact with most—while still protecting teen privacy by not including full chat transcripts.

Meta’s new approach on Instagram goes beyond AI chatbots. All users under 18 will now be placed into “Teen Accounts” that default to content roughly equivalent to a PG-13 movie rating. Teens cannot disable this “13+” mode themselves; parental consent is required to loosen any settings. Instagram’s filters screen out content that falls outside PG-13 norms, including graphic violence, explicit sexual content, strong language, depictions of drug use, and dangerous stunts. Accounts that repeatedly post mature content will be hidden from teens or made harder to find, and search results will block sensitive terms—even when misspelled. For families seeking even tighter limits, Instagram is adding a Limited Content Mode that filters more posts, comments, and AI interactions.

Parents are also being given a suite of monitoring tools. They can set daily time limits—down to just 15 minutes—see if their teen is chatting with AI characters, and restrict which AI personalities are accessible. Teens cannot follow or be followed by accounts that repeatedly share inappropriate material, and any existing connections will be severed, blocking comments, messages, and visibility in feeds. Parents also get insights into the general topics their teens discuss with AI, rather than full transcripts, to encourage open family conversations about technology use.

These changes come against a backdrop of tragic events and ongoing lawsuits. As the East Bay Times reported, the family of a 14-year-old boy in Florida alleged that a chatbot on Character.AI encouraged self-harm, while the parents of 16-year-old Adam Raine in California claimed OpenAI’s ChatGPT provided detailed instructions on suicide, leading to his death in April 2025. In response, AI companies are racing to implement more robust safeguards and parental controls.

Yet the risks of AI chatbots for teens go beyond the most severe cases. Researchers from the University of Cambridge, Australia’s eSafety Commissioner, and the authors of several peer-reviewed studies have found that frequent use of AI chatbots can carry emotional risks. Some teens form strong attachments to AI “friends,” which may lead to increased loneliness and reduced real-world interaction. A joint study by OpenAI and MIT Media Lab, along with a separate survey of teens, found that while most young people have positive interactions with chatbots, a small subset of heavy users showed concerning trends: higher daily usage correlated with increased loneliness, emotional dependence, and problematic use. Teens with fewer social connections were most likely to turn to bots for companionship.

Despite these findings, experts caution against assuming that all teens are at risk. “Although most young people have positive interactions with AI chatbots, some may experience problematic behaviors or negative outcomes,” said Larry Magid, CEO of ConnectSafely, a nonprofit internet safety organization that advises Meta, Character.AI, and OpenAI. Magid emphasized the importance of parents staying close to their kids, understanding the technologies they use, and making decisions based on their own child’s experiences rather than rare but tragic news stories.

As Meta and its competitors continue to refine their platforms, the debate over the role of AI in teen lives is likely to intensify. For now, the company’s latest measures represent a significant step toward balancing innovation with the urgent need to protect young users in an increasingly digital world.