Technology
30 September 2025

OpenAI Unveils New ChatGPT Parental Controls Amid Safety Concerns

Parents can now set restrictions and receive alerts for teens using ChatGPT, as OpenAI responds to tragedies and federal scrutiny with new safety measures.

On September 29, 2025, OpenAI rolled out a suite of new parental controls for its widely used ChatGPT platform, responding to mounting concerns about the safety of teenagers engaging with artificial intelligence chatbots. The move, which allows parents to more closely supervise their teens' interactions with ChatGPT, comes after a series of tragic incidents and legal actions that have put the company under the microscope.

According to reporting from Axios and other outlets, the new features are designed specifically for minors aged 13 to 17. Parents can now invite their teens to connect accounts, enabling a layer of oversight that had long been missing from the popular AI tool. Once connected, parents can adjust a range of settings: they can enforce stricter content filters, disable the chatbot's memory, set time restrictions (sometimes called "blackout hours"), and block image generation. For families worried about privacy, the controls also let parents opt their child out of having conversations used as training data for future AI models.
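For readers who want a concrete picture, the parent-adjustable settings boil down to a handful of toggles plus a daily time window. The Python sketch below is a minimal, hypothetical model of those settings; every name and field here is an assumption for illustration, not OpenAI's actual configuration or API.

```python
from dataclasses import dataclass
from datetime import time
from typing import Optional

# Hypothetical model of the parent-adjustable settings described above.
# Field names are illustrative; OpenAI has not published a schema.
@dataclass
class TeenAccountSettings:
    stricter_content_filters: bool = True    # tighter content rules
    memory_enabled: bool = False             # parents can disable chat memory
    image_generation_enabled: bool = False   # parents can block image tools
    training_opt_out: bool = True            # keep chats out of training data
    blackout_start: Optional[time] = None    # start of "blackout hours"
    blackout_end: Optional[time] = None      # end of "blackout hours"

    def in_blackout(self, now: time) -> bool:
        """Return True if `now` falls inside the parent-set blackout window."""
        if self.blackout_start is None or self.blackout_end is None:
            return False  # no window configured
        if self.blackout_start <= self.blackout_end:
            return self.blackout_start <= now < self.blackout_end
        # Window wraps past midnight, e.g. 22:00 to 07:00.
        return now >= self.blackout_start or now < self.blackout_end

# Example: a school-night curfew from 10 p.m. to 7 a.m.
settings = TeenAccountSettings(blackout_start=time(22, 0), blackout_end=time(7, 0))
assert settings.in_blackout(time(23, 30))
assert not settings.in_blackout(time(8, 0))
```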

The timing of this launch was no accident. OpenAI has faced intensifying public pressure, particularly after a high-profile lawsuit filed by the parents of a 16-year-old California boy who died by suicide. The family alleged that ChatGPT had provided their son with information about methods of self-harm, sparking a national conversation about the responsibilities of tech companies to protect vulnerable users. This case was just one of several incidents this year in which the chatbot was linked to harmful outcomes, including another suicide and a violent crime, as noted by Axios.

In response to these tragedies, OpenAI has not only introduced parental controls but also implemented a system for detecting signs of acute distress during a teen's chat sessions. If a conversation is flagged as posing a serious safety risk, the message is routed to a human reviewer. Should the reviewer determine that the situation is dangerous, OpenAI will notify the parent via email, SMS, and a push notification in the app. As the company explained in its Monday announcement, "If our systems detect potential harm, a small team of specially trained people reviews the situation. If there are signs of acute distress, we will contact parents by email, text message and push alert on their phone, unless they have opted out."
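The escalation path OpenAI describes (automated flag, then human review, then multi-channel alerts unless the parent opted out) can be summarized in a few lines of Python. This is a sketch of the flow as reported, with hypothetical function and field names; OpenAI's internal interfaces are not public.

```python
from dataclasses import dataclass

@dataclass
class ParentContact:
    email: str
    phone: str
    opted_out: bool = False  # parents may opt out of distress alerts

# Stub notifiers standing in for real delivery channels.
def send_email(address: str, note: str) -> None:
    print(f"[email -> {address}] {note}")

def send_sms(number: str, note: str) -> None:
    print(f"[sms -> {number}] {note}")

def send_push(number: str, note: str) -> None:
    print(f"[push -> {number}] {note}")

def handle_flagged_chat(conversation_id: str, reviewer_confirms_risk: bool,
                        parent: ParentContact) -> None:
    """Escalate a machine-flagged conversation: human review, then alerts."""
    if not reviewer_confirms_risk:
        return  # the trained reviewer judged the flag a false positive
    if parent.opted_out:
        return  # honor the opt-out described in the announcement
    note = f"Possible signs of acute distress in conversation {conversation_id}"
    # Per the announcement, alerts go out on all three channels.
    send_email(parent.email, note)
    send_sms(parent.phone, note)
    send_push(parent.phone, note)
```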

But there are important caveats. Teens can unlink their accounts at any time, which immediately ends parental oversight. And while parents receive alerts about risky activity, they cannot view the chat transcripts themselves, preserving a degree of privacy for the teen. OpenAI says it is working to improve its age-prediction technology to better restrict access for children under 13, but acknowledges that age-gating any online service is a persistent challenge. For now, ChatGPT is not intended for users under 13, though the technical barriers to enforce that rule remain a work in progress.

Lauren Jonas, OpenAI's head of youth wellbeing, explained the company's approach: "We wanted to balance teen privacy, but also give parents enough content so they could take an action and do something and have a conversation with their teen." This sentiment was echoed in OpenAI's own statements, which emphasized the need for ongoing dialogue between parents and teens about healthy AI use. "We recommend parents talk with their teens about healthy AI use and what that looks like for their family," the company said.

The scope of the new controls is broad. Content automatically restricted on a teenager's linked account includes graphic content; viral challenges; sexual, romantic, or violent role-play; and "extreme beauty ideals." These measures are part of a larger shift in how the chatbot responds to younger users. OpenAI stated earlier this month that users identified as being under 18 will be directed to a version of ChatGPT governed by "age-appropriate" content rules. "The way ChatGPT responds to a 15-year-old should look different than the way it responds to an adult," the company said, highlighting its commitment to differentiated experiences based on age.

Still, the company is clear-eyed about the limitations of its technology. "Guardrails help, but they’re not foolproof and can be bypassed if someone is intentionally trying to get around them," OpenAI admitted. Parental controls and automatic content limits only function if users are signed in, and people can still use ChatGPT without creating an account or providing their age. OpenAI has said it is developing an age-prediction system that will proactively restrict more sensitive answers, although this system is still months away from deployment. There is also the possibility that users may eventually be required to upload identification to verify their age, but no concrete timeline has been given for this feature.
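That caveat, that the controls bind only signed-in accounts, implies a simple decision tree for which rules govern a given session. The sketch below illustrates that logic under stated assumptions: the policy names and the `predicted_minor` signal are invented for illustration, and the age-prediction system itself is, per OpenAI, still in development.

```python
from enum import Enum, auto

class Policy(Enum):
    DEFAULT = auto()             # the standard experience
    TEEN = auto()                # "age-appropriate" content rules
    TEEN_WITH_CONTROLS = auto()  # teen rules plus parent-set restrictions

def select_policy(signed_in: bool, linked_to_parent: bool,
                  predicted_minor: bool) -> Policy:
    """Pick which content policy governs a session (illustrative only)."""
    if not signed_in:
        # Parental controls and automatic limits apply only to signed-in
        # users; anonymous sessions fall through to the default rules,
        # which is why OpenAI calls the guardrails "not foolproof."
        return Policy.DEFAULT
    if linked_to_parent:
        return Policy.TEEN_WITH_CONTROLS
    if predicted_minor:
        # Where the forthcoming age-prediction system flags a likely
        # minor, the account would be steered to the teen experience.
        return Policy.TEEN
    return Policy.DEFAULT
```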

OpenAI's efforts to address teen safety have drawn the attention of federal regulators. The Federal Trade Commission has begun an inquiry into several social media and artificial intelligence companies, including OpenAI, about the potential harms to teens and children who use chatbots as companions. The issue of child safety in the context of AI is quickly becoming a national policy concern, with lawmakers and advocacy groups calling for more robust oversight.

The launch of these parental controls coincided with a scheduled Senate Judiciary Committee hearing on the potential harms of AI, underscoring the political urgency of the matter. OpenAI CEO Sam Altman has previously weighed in on the need for differentiated chatbot behavior based on the user's age. In a September 16 blog post, Altman wrote that a chatbot for teenagers should not flirt and should censor discussions of suicide, whereas a version for adults could be more open. "Treat our adult users like adults is how we talk about this internally, extending freedom as far as possible without causing harm or undermining anyone else’s freedom," Altman said. He further clarified, "ChatGPT by default should not provide instructions about how to commit suicide, but if an adult user is asking for help writing a fictional story that depicts a suicide, the model should help with that request."

While the new controls represent a significant step forward, the company admits that no technical solution is perfect. OpenAI says it will continue to "thoughtfully iterate and improve over time," aiming to strike the right balance between user privacy, parental oversight, and teen safety. For now, the company encourages parents to stay engaged and informed, as the conversation around AI and youth safety continues to evolve rapidly.

OpenAI's move marks a pivotal moment in the ongoing debate over how to protect young users in an increasingly digital world—where the boundaries between human and machine advice can sometimes blur in dangerous ways.