OpenAI, the company behind the widely used artificial intelligence chatbot ChatGPT, is rolling out a sweeping new age verification system designed to protect minors from inappropriate or potentially harmful content. The announcement, made on September 17, 2025, marks a significant shift in how AI platforms approach safety, privacy, and user freedom—especially for younger users who are increasingly turning to AI for information, entertainment, and even emotional support.
According to Delo.ua, OpenAI’s new system will check users’ ages and restrict access to certain types of content for those under 18. The move is part of a broader three-pronged strategy: maintaining the confidentiality of conversations, ensuring user protection, and implementing robust age verification. The company will require users to confirm their age through a trusted third-party integration that verifies age based on a mobile phone number, creating a safer environment for teens while still allowing adults broad freedom within the platform’s safety boundaries.
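Neither OpenAI nor Delo.ua has published the technical details of that integration, but a phone-number-based check of this kind typically follows a simple request-response pattern. The sketch below is purely illustrative: the provider, the endpoint URL, and the response field (`is_over_18`) are invented for the example.

```python
import requests  # third-party HTTP client (pip install requests)

# Placeholder endpoint: the real provider OpenAI uses has not been named.
AGE_CHECK_URL = "https://age-provider.example.com/v1/verify"

def verify_age_bracket(phone_number: str, api_key: str) -> str:
    """Ask a (hypothetical) third-party provider whether the phone number's
    registered owner is an adult. Returns 'adult', 'minor', or 'unknown'."""
    resp = requests.post(
        AGE_CHECK_URL,
        json={"phone": phone_number},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    # Providers typically return a boolean attribute rather than a birth
    # date, which limits how much personal data the platform ever sees.
    if data.get("is_over_18") is True:
        return "adult"
    if data.get("is_over_18") is False:
        return "minor"
    return "unknown"  # no record on file: fall back to the stricter tier
```

A design like this keeps the platform from ever handling a date of birth directly; it only learns which side of the 18-year line a user falls on.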
For users aged 13 to 17, ChatGPT will now offer a version with stricter rules. These younger users will be blocked from engaging in flirtatious conversations or accessing discussions about suicide and self-harm. Adults, meanwhile, will retain the ability to explore sensitive or creative topics within the platform’s safety guidelines. OpenAI says it is guided by the principle of "treating adults as adults," but the safety of minors takes precedence when the two priorities clash.
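In engineering terms, this amounts to a tiered rule set keyed on the user’s age bracket. A minimal sketch of that idea follows; the category labels are invented for illustration and are not OpenAI’s internal taxonomy.

```python
# Hypothetical content-policy tiers keyed on verified age bracket.
BLOCKED_FOR_MINORS = {"flirtatious_conversation", "self_harm_discussion"}
BLOCKED_FOR_EVERYONE = {"instructions_for_violence"}

def is_allowed(category: str, age_bracket: str) -> bool:
    """Apply the stricter rule set when the user is verified as 13-17."""
    if category in BLOCKED_FOR_EVERYONE:
        return False
    if age_bracket == "minor" and category in BLOCKED_FOR_MINORS:
        return False
    return True  # adults keep access within the platform-wide rules
```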
The age verification process itself is part of a larger set of safety features. As reported by Delo.ua and other sources, OpenAI is also preparing to introduce parental controls by the end of September. These controls will allow parents to manage their children’s access to ChatGPT, including disabling memory and chat history functions and setting "dark hours"—specific periods when teens cannot use the chatbot. If ChatGPT detects signs of emotional crisis in a minor, parents will receive notifications. In rare cases where parents cannot be reached promptly, OpenAI may contact law enforcement authorities.
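How such controls might be represented is easy to imagine. Below is a hypothetical settings object mirroring the announced features; the field names, defaults, and the 22:00 to 07:00 window are assumptions made for the example, not published details.

```python
from dataclasses import dataclass
from datetime import datetime, time

@dataclass
class ParentalControls:
    """Hypothetical per-teen settings mirroring the announced features."""
    memory_enabled: bool = False          # parents may disable memory
    history_enabled: bool = False         # ...and chat history
    dark_hours_start: time = time(22, 0)  # example window: 22:00-07:00
    dark_hours_end: time = time(7, 0)
    notify_on_distress: bool = True       # alert parents on signs of crisis

    def in_dark_hours(self, now: datetime) -> bool:
        """True if the teen's access should be blocked at this moment.
        Handles windows that cross midnight (e.g. 22:00 to 07:00)."""
        t = now.time()
        if self.dark_hours_start <= self.dark_hours_end:
            return self.dark_hours_start <= t < self.dark_hours_end
        return t >= self.dark_hours_start or t < self.dark_hours_end
```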
OpenAI CEO Sam Altman has acknowledged the complexity of balancing privacy, freedom, and safety. He stated that the company is putting the safety of teenagers above the privacy of adults, a stance that is not without controversy. "We recognize that sometimes these principles conflict, and decisions are difficult," Altman said, according to Delo.ua. "We make them after consulting with experts, trying to find a balance between safety, freedom, and confidentiality, and strive to be transparent in our intentions."
The new system is not just about restricting access—it’s also about responding to real-world events. The push for tighter controls follows a tragic incident in which a 16-year-old boy died by suicide after prolonged conversations with ChatGPT. The system reportedly registered hundreds of messages indicating self-harm, but failed to intervene effectively. This event sparked widespread criticism of AI safety mechanisms and prompted OpenAI to accelerate its efforts to protect vulnerable users.
OpenAI’s approach is part of a growing trend among major tech platforms to implement age restrictions and parental controls. YouTube Kids, Instagram’s Teen Accounts, and TikTok have already established similar measures. Enforcing these rules remains a challenge, however: the BBC has reported that 22% of children admit to entering a false date of birth to access adult content, highlighting the limitations of age verification systems.
To address this, OpenAI’s automated age detection system aims to identify users under 18 and redirect them to a special version of ChatGPT with limited functionality. The company is considering requiring users over 18 to verify their age with official documents in the future—a move that has sparked debate about privacy and the reliability of age estimation technology. Research shows that while age detection systems can be up to 96% accurate in controlled settings, their effectiveness drops to 54% in real-world situations where users may deliberately try to deceive the system.
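Those accuracy figures point to the central design decision: when the classifier is unsure, it is safer to route the user to the restricted experience than to risk misclassifying a minor as an adult. A hedged sketch of that routing logic, with the confidence threshold chosen arbitrarily:

```python
def route_user(predicted_adult_prob: float, verified_bracket: str | None = None) -> str:
    """Choose which ChatGPT experience a user gets.

    predicted_adult_prob: a (hypothetical) age-estimation model's probability
    that the user is 18 or older.
    verified_bracket: 'adult' or 'minor' if the user completed phone- or
    document-based verification, else None.
    """
    # Hard verification always overrides the statistical guess.
    if verified_bracket == "adult":
        return "standard"
    if verified_bracket == "minor":
        return "restricted"
    # With real-world accuracy reportedly as low as 54% against users who
    # try to game the system, the safe default is the restricted tier:
    # only a high-confidence adult prediction unlocks the standard one.
    ADULT_THRESHOLD = 0.95  # illustrative value, not a published figure
    return "standard" if predicted_adult_prob >= ADULT_THRESHOLD else "restricted"
```

The asymmetry is deliberate: misrouting an adult costs them a verification step, while misrouting a minor exposes them to content the system is supposed to block.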
Despite these challenges, OpenAI says it is committed to improving its safeguards, even if that means compromising on some user confidentiality. The company’s leadership recognizes that conversations with AI are becoming increasingly personal and that this new kind of interaction demands a heightened sense of responsibility. As Delo.ua notes, OpenAI wants information shared with ChatGPT to remain confidential, even from its own employees, except in cases involving serious risks such as threats to life or plans to harm others.
For parents, the upcoming controls will offer concrete tools for oversight: disabling chat history, setting usage hours, and receiving alerts when the system detects emotional distress. As noted above, escalation to the authorities is reserved for exceptional circumstances in which a child appears to be in danger and parents cannot be reached quickly.
OpenAI’s safety push also coincides with rapid product development. The company launched GPT-5, its latest flagship model, in August 2025, an upgrade that arrived amid heightened scrutiny of AI’s role in society and growing expectations that technology companies take meaningful steps to protect their youngest users.
The debate over privacy versus safety is far from settled. Some experts and privacy advocates worry that more stringent verification could erode the confidentiality that users expect when engaging with AI. Others argue that the risks to minors far outweigh the potential drawbacks, especially in light of recent tragedies and the unique vulnerabilities of young people online.
Ultimately, OpenAI’s new measures reflect a broader reckoning within the tech industry. As AI becomes more deeply woven into everyday life, the responsibility to protect users—especially children and teens—has never been greater. The company’s willingness to prioritize safety, even at the cost of some privacy and freedom, signals a new era in AI governance. Whether these measures will be enough to prevent future harm remains to be seen, but they represent a decisive step toward a safer digital future for all users.