Technology
16 October 2025

OpenAI To Allow Erotic ChatGPT Conversations For Adults

The company’s new policy will permit verified adult users to access erotic content on ChatGPT, sparking debate over age verification, mental health, and AI safety.

OpenAI, the artificial intelligence titan behind the widely used ChatGPT chatbot, is making headlines again—and this time, it’s not just about smarter algorithms or bigger data sets. On October 14, 2025, CEO Sam Altman announced a sweeping policy shift: starting in December, verified adult users will be able to have erotic conversations with ChatGPT. This move, Altman said, is part of OpenAI’s new commitment to “treat adult users like adults,” and it’s set to shake up not only the AI world but also the broader debate about technology, safety, and free expression.

Altman’s announcement, made on X (formerly Twitter), landed with a bang. “In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults,” he wrote. The statement was intended to signal a broader opening up of ChatGPT’s capabilities for grown-up users, but the erotic content angle quickly dominated the conversation online and in the media.

OpenAI’s decision comes after a year of rapid policy changes and public scrutiny. Back in February, the company had relaxed some content restrictions, allowing for more mature conversations in “appropriate contexts.” But that openness was short-lived. In August, OpenAI faced a lawsuit from the parents of 16-year-old Adam Raine, who died by suicide in April. The suit accused ChatGPT of exacerbating the teen’s mental health crisis, and the company responded by tightening restrictions dramatically, especially on topics related to mental health and potentially harmful content.

“We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues,” Altman admitted, according to BBC News. He acknowledged that this approach “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.” Now, OpenAI says it has developed new tools to better detect when users are experiencing mental distress, allowing it to relax restrictions for most adults while still maintaining safeguards for vulnerable populations.

Altman’s follow-up posts made it clear that OpenAI is intent on walking a fine line between user freedom and user safety. “We are not the elected moral police of the world,” he wrote, echoing sentiments he expanded on the next day. “In the same way that society differentiates other appropriate boundaries (R-rated movies, for example), we want to do a similar thing here.” The company, he said, will continue to “prioritize safety over privacy and freedom for teenagers,” recognizing that “significant protection” is needed for minors interacting with AI. For adults, though, the new policy is about trust and autonomy.

This approach hasn’t satisfied everyone. Critics, including prominent tech investor and former “Shark Tank” star Mark Cuban, have questioned whether age verification measures will actually keep children out of adult-only ChatGPT features. “This is going to backfire,” Cuban posted on X. “Hard. No parent is going to trust that their kids can’t get through your age gating. They will just push their kids to every other LLM.”

Legal experts and child safety advocates are also raising red flags. Jenny Kim, a partner at the law firm Boies Schiller Flexner, asked, “How are they going to make sure that children are not able to access the portions of ChatGPT that are adult-only and provide erotica?” Her concerns, echoed by advocacy groups like the National Center on Sexual Exploitation, focus on the risks of synthetic intimacy and the potential for real mental health harms. “Sexualized AI chatbots are inherently risky, generating real mental health harms from synthetic intimacy; all in the context of poorly defined industry safety standards,” said Haley McNamara, the group’s executive director, in a statement to CNBC.

OpenAI’s timing is notable. The U.S. Federal Trade Commission launched an inquiry in September into how AI chatbots affect children and teenagers, and bipartisan legislation was introduced in the U.S. Senate to let chatbot users file liability claims against developers. In California, Governor Gavin Newsom recently vetoed a bill that would have blocked AI chatbots for children unless companies could guarantee the software wouldn’t encourage harmful behavior. “It is imperative that adolescents learn how to safely interact with AI systems,” Newsom said in his veto message.

Internationally, the regulatory picture is just as complex. In the UK, written erotica does not require age verification under the Online Safety Act, but pornographic images—including those generated by AI—do require users to prove they are over 18. That patchwork of laws and standards means OpenAI’s move will be watched closely by policymakers and competitors alike.

For OpenAI, the stakes are high. The company’s revenue is growing, but it has yet to achieve profitability, and the battle for market share is intense. As Tulane University business professor Rob Lalka told BBC News, “They needed to continue to push along that exponential growth curve, achieving market domination as much as they can.” Altman’s willingness to open ChatGPT to adult-oriented content is seen by some as a way to compete with rivals like Elon Musk’s xAI, which recently introduced sexually explicit chatbots to its Grok platform.

But OpenAI insists it’s not simply chasing engagement or profits at the expense of ethics. The company has recently rolled out parental controls and is developing an age prediction system to automatically apply teen-appropriate settings for users under 18. On October 14, it also announced the formation of an expert council to provide insight into how AI impacts users’ mental health, emotions, and motivation—an acknowledgment of the ongoing concerns around chatbot companions and synthetic relationships. A survey by the Center for Democracy and Technology found that one in five students report they or someone they know has had a romantic relationship with AI, further fueling the debate about boundaries and well-being in the digital age.

Some users, meanwhile, have been clamoring for more flexibility and personalization in their chatbot experience. Since the launch of GPT-5 in August, complaints have surfaced that the new model feels less engaging than its predecessor, prompting OpenAI to bring back the older GPT-4o as an option. The upcoming release will also give users more control over the chatbot’s personality. “We will allow users to choose whether they want ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend,” Altman said.

Still, the company has drawn a line in the sand when it comes to harm. “We will not allow things that will cause harm to others,” Altman stated, though he declined to specify what types of content would be prohibited or how exactly ChatGPT would detect if a user was experiencing a mental health crisis. “Without being paternalistic we will attempt to help users achieve their long-term goals,” he said, emphasizing a commitment to responsible innovation.

As December approaches, OpenAI’s gamble on treating adult users like adults is set to test not only its technical safeguards but also the public’s trust in AI. The world will be watching to see if this new era of chatbot freedom can coexist with safety—or if, as some fear, the risks will outweigh the rewards.