OpenAI has unveiled a suite of parental controls for ChatGPT, marking a significant shift in how the company approaches youth safety on its popular artificial intelligence platform. The move comes in the wake of mounting concerns about the risks AI chatbots pose to teenagers, particularly after the tragic death of 16-year-old Adam Raine in California earlier this year. His parents, Matthew and Maria Raine, filed a wrongful-death lawsuit in August, alleging that ChatGPT had supplied Adam with information about suicide methods and reinforced his most harmful ideas, ultimately contributing to his death. The case has thrust the company and the broader AI industry into the spotlight, sparking urgent calls for stronger safeguards.
On September 29, 2025, OpenAI publicly announced the new controls, which are designed to give parents more oversight and provide teenagers with a safer, more age-appropriate experience on ChatGPT. According to The Associated Press, these features allow parents and teens to link their accounts through an invitation sent by email or text. Once linked, parents gain access to a control panel where they can adjust a variety of settings—ranging from reducing exposure to graphic or sensitive content to setting quiet hours that restrict chatbot access during certain times of day.
Parents are also able to disable specific features such as voice mode, image generation, and the AI’s memory function; turning off memory prevents conversations from being saved or used to personalize future responses. Notably, the controls let parents opt out of having their child’s chats contribute to the training of ChatGPT’s AI models. Teens themselves cannot disable these protections, though they retain the ability to unlink their account at any time—a move that triggers an immediate notification to the parent.
OpenAI’s new system goes a step further by introducing a notification protocol for potential mental health crises. If the company’s systems detect signs that a teen may be in distress or at risk of self-harm, a specially trained review team will assess the situation. In rare instances of acute risk, parents will be notified via email, text, and push alert—unless they have opted out. As OpenAI stated in its blog post, “No system is perfect, and we know we might sometimes raise an alarm when there isn’t real danger, but we think it’s better to act and alert a parent so they can step in than to stay silent.”
The company’s efforts follow a period of intense scrutiny. According to Ars Technica, after the Raine family lawsuit was filed, OpenAI responded by promising to “do better” in helping people in distress, and quickly began rolling out a series of safety updates. Sensitive user conversations were routed to a reasoning model with stricter safeguards, and the company began experimenting with age prediction to automatically apply teen-specific settings when a user’s age was uncertain. CEO Sam Altman has said, “If there is doubt, we’ll play it safe and default to the under-18 experience,” and in some cases or countries, OpenAI may require age verification through an ID, despite privacy tradeoffs.
Still, the new controls have not quelled all criticism. Jay Edelson, the attorney representing the Raine family, acknowledged that while some of OpenAI’s changes are helpful, they come “far too late.” In Edelson’s words, “What ChatGPT did to Adam was validate his suicidal thoughts, isolate him from his family, and help him build the noose—in the words of ChatGPT, ‘I know what you’re asking, and I won’t look away from it.’ This wasn’t ‘violent roleplay,’ and it wasn’t a ‘workaround.’ It was how ChatGPT was built.”
OpenAI’s leadership, for its part, insists that the new parental controls are only the beginning of a broader effort to make the platform safer for young users. Nick Turley, Head of ChatGPT, wrote on LinkedIn, “Today we’re rolling out Parental Controls in ChatGPT. Parents can link their account with their teen’s, set the controls that make sense for their family, and access a new resource hub with tips for getting started. It’s a way to make ChatGPT more useful for families while respecting that parents know best how their teens should use AI.” He added, “This is just the first step. We’ll keep adding more options and resources so families everywhere can feel confident about how ChatGPT fits into their home.”
To support families, OpenAI has also launched a parent resource page. This hub provides guidance on using ChatGPT in education and daily life, conversation starters, and safety information. The company says it will update the resource center with expert advice over time. OpenAI has worked closely with Common Sense Media—a nonprofit that rates technology and entertainment for parents—as well as policymakers, including the Attorneys General of California and Delaware, to shape these new frameworks.
Robbie Torney, Senior Director of AI Programs at Common Sense Media, praised the controls as “a good starting point,” but stressed that they are “just one piece of the puzzle” for keeping teens safe online. “They work best when combined with ongoing conversations about responsible AI use, clear family rules about technology, and active involvement in understanding what their teen is doing online,” Torney said in a statement released with OpenAI on Monday.
Despite these measures, experts and critics caution that technical guardrails alone are not enough. As The New York Times reported, Adam Raine had learned to bypass ChatGPT’s safeguards by framing his requests as fictional scenarios, such as saying he wanted information for a story. Researchers have repeatedly shown how easy it can be to circumvent age-verification and content filters set by chatbot companies. OpenAI itself acknowledges these limitations, warning that “guardrails help, but they’re not foolproof and can be bypassed if someone is intentionally trying to get around them.”
OpenAI is developing an age prediction system to automatically apply teen-appropriate settings, even when users attempt to obscure their age. The company is also working on escalation procedures to reach law enforcement or emergency services if a parent cannot be contacted in a crisis. However, parents will not have access to their child’s chat transcripts, as OpenAI seeks to balance privacy with safety—a delicate line that continues to draw debate.
For now, the rollout of parental controls marks a new chapter in the ongoing conversation about youth, technology, and mental health. The stakes are high, as more teenagers turn to AI platforms for help with schoolwork, daily life, and even emotional support. The hope is that these tools, combined with family engagement and public oversight, can help prevent future tragedies while allowing young people to safely explore and learn in the digital age.
If you or someone you know is struggling, the National Suicide Prevention Lifeline can be reached by calling or texting 988, or by visiting 988lifeline.org. The American Foundation for Suicide Prevention also offers grief support and resources for those living with loss.