Technology
19 October 2025

Meta Unveils Parental Controls for Teen AI Chats

Parents will soon be able to block AI character chats and monitor teen interactions as Meta responds to mounting concerns about digital safety and youth well-being.

On October 17, 2025, Meta, the parent company of Facebook and Instagram, unveiled a sweeping set of parental controls aimed at managing how teens interact with artificial intelligence (AI) chatbots across its platforms. The announcement, closely watched by parents, lawmakers, and advocacy groups alike, comes amid mounting scrutiny of the tech giant’s role in teen safety and the broader implications of AI for young users.

Starting early next year, these new features will debut on Instagram, with an initial rollout in English across the United States, United Kingdom, Canada, and Australia. According to Meta, parents will soon be able to disable one-on-one chats between their teens and AI characters, block specific AI chatbots, and receive summaries—though not full transcripts—of the topics discussed between their children and Meta’s AI systems. The company stressed that these tools are designed to enhance both safety and transparency, promising that AI interactions will adhere to PG-13 content standards and steer clear of sensitive or inappropriate subjects.

"We recognize parents already have a lot on their plates when it comes to navigating the internet safely with their teens, and we’re committed to providing them with helpful tools and resources that make things simpler for them, especially as they think about new technology like AI," wrote Instagram head Adam Mosseri and Meta chief AI officer Alexandr Wang in a joint blog post, as reported by The Hill. The message underscores Meta’s attempt to position itself as a responsible steward of teen safety in the digital age.

One key detail: while parents can block all AI character chats, Meta’s main AI assistant will remain accessible to teens, albeit with default, age-appropriate protections in place. Even if parents turn off AI character conversations entirely, their children will still be able to interact with Meta’s AI assistant for educational information and general inquiries. The company maintains that this assistant is designed to avoid discussions of suicide, self-harm, or disordered eating, and will direct teens to appropriate resources if such topics arise.

The new controls are part of a broader, ongoing effort by Meta to address rising concerns about the impact of AI and social media on young people. On October 14, 2025, Meta also announced that all teen accounts on Instagram would be restricted to seeing only PG-13 content by default. Teens won’t be able to alter these settings without parental permission, a move intended to limit exposure to potentially harmful material. These PG-13 restrictions will also apply to AI chats, ensuring that content related to sex, drugs, or dangerous stunts is filtered out.

Meta’s focus on age-appropriate content is not just a matter of policy, but also a response to recent controversies. Earlier this year, the company faced backlash after a policy document surfaced suggesting that its AI chatbots might engage children in conversations that were romantic or sensual in nature. The company quickly removed those examples, describing them as erroneous, but the incident fueled ongoing criticism from advocacy groups and lawmakers concerned about insufficient safeguards for minors.

According to a recent study by Common Sense Media, a nonprofit that researches digital media’s effects on children, more than 70% of teens have used AI companions, and half use them regularly. This widespread adoption has raised alarms, especially in light of lawsuits and tragic incidents. For instance, in August, the family of a California teenager sued OpenAI, alleging that ChatGPT encouraged their son to take his own life. The teen’s father, Matthew Raine, testified before a Senate panel last month, joining other parents in urging lawmakers to set stricter guardrails on AI technology aimed at children.

The regulatory landscape is shifting in response. Just this week, California Governor Gavin Newsom signed a bill requiring chatbot developers to implement protocols that prevent their models from discussing suicide or self-harm with children, and to remind young users repeatedly that they are not conversing with a human being. However, Newsom vetoed a separate measure that would have barred developers from making their chatbots available to children unless they could guarantee that the bots would not discuss harmful topics. He argued that such broad restrictions could "unintentionally lead to a total ban" on children’s chatbot use, highlighting the complexity of legislating tech safety without stifling innovation or access.

Meta’s new parental controls are, in part, a response to these external pressures. Josh Golin, executive director of the children’s advocacy group Fairplay, voiced skepticism following Meta’s announcement. "From my perspective, these announcements are about two things. They’re about forestalling legislation that Meta doesn’t want to see, and they’re about reassuring parents who are understandably concerned about what’s happening on Instagram," Golin told the Associated Press. This sentiment is echoed by other advocacy organizations that question whether self-regulation by tech companies can truly keep pace with the evolving risks posed by AI and social media.

Despite such doubts, Meta asserts that its AI characters are designed to engage teens only on age-appropriate topics such as education, sports, and hobbies, while steering clear of romance or any other unsuitable content. The company has also promised that if a teen attempts to raise a sensitive subject, the chatbot will not engage and will instead direct the user to professional resources. Parents, for their part, will be able to access summaries of the subjects their teens discuss with AI, though the conversations themselves will remain private.

Notably, the new controls will launch only in English and only on Instagram, with Meta signaling plans to expand the features to other platforms and languages. The company says these measures are just the beginning of a broader effort to make AI safer and more transparent for young users. Whether they will be enough to satisfy concerned parents, advocacy groups, and lawmakers remains to be seen.

As AI becomes an ever more integral part of social media and daily life, the challenge of protecting children in digital spaces is growing more complex. With more than two-thirds of teens already using AI companions and new features rolling out at a rapid pace, the stakes for getting it right have never been higher. Meta’s latest move is a sign that the company is listening—but for many, the question is whether it’s listening enough.

For now, the tech world, policymakers, and families alike will be watching closely as these new tools arrive, hoping that the promise of safer, more transparent AI interactions for teens can become a reality rather than just another headline.