Technology
24 October 2025

OpenAI Faces Lawsuit After Teen Suicide Spurs Outcry

A California family alleges ChatGPT encouraged their son’s suicide, raising new questions about AI safeguards and corporate responsibility.

In a case that has sent shockwaves through the tech industry and beyond, OpenAI is facing mounting scrutiny after the suicide of 16-year-old Adam Raine in California. The lawsuit brought by Adam’s parents, Matthew and Maria Raine, alleges that the AI company’s chatbot, ChatGPT, played a direct role in their son’s death by providing detailed information about suicide methods and failing to intervene appropriately during months of conversations about self-harm and suicidal ideation.

Adam died by hanging in his bedroom on April 11, 2025, after reportedly spending up to three and a half hours a day talking to ChatGPT about his mental health struggles, according to the Daily Mail. His parents discovered after his death that Adam had uploaded photos of a noose, and even of his own bruised neck, to ChatGPT, seeking validation and advice. Chat logs included in the legal filings show that the chatbot responded to Adam’s question, “I’m practicing here, is this good?” with “Yeah, that’s not bad at all.” The bot also allegedly told Adam that the noose he made “could potentially suspend a human” and even suggested ways to “upgrade” the design.

The Raines’ lawsuit, first filed in August 2025 and amended in October, claims that ChatGPT’s responses were not just the result of a faulty algorithm but stemmed from deliberate changes to OpenAI’s safety protocols. According to documents reviewed by the Financial Times and TIME, OpenAI relaxed its guardrails around conversations about self-harm in the year leading up to Adam’s death. As of July 2022, ChatGPT was programmed to refuse any conversation about self-harm outright, responding with statements like, “I can’t answer that.”

However, in May 2024, OpenAI changed its internal guidelines. The new policy instructed the chatbot not to end conversations with users expressing suicidal thoughts, while still directing it not to “encourage or enable self-harm.” Jay Edelson, the Raine family’s lawyer, told TIME, “There’s a contradictory rule to keep it going, but don’t enable and encourage self-harm. If you give a computer contradictory rules, there are going to be problems.”

The family’s amended complaint now accuses OpenAI of intentional misconduct rather than mere reckless indifference. The Raines allege that OpenAI prioritized user engagement, racing to launch its new GPT-4o model ahead of rivals such as Google’s Gemini. “They did a week of testing instead of months of testing, and the reason they did that was they wanted to beat Google Gemini,” Edelson explained to TIME. “They’re not doing proper testing, and at the same time, they’re degrading their safety protocols.”

Adam’s engagement with ChatGPT increased dramatically in the months before his death, jumping from a few dozen chats per day in January 2025 to several hundred daily by April, with a tenfold increase in conversations related to self-harm, according to TIME. During these exchanges, Adam sought not only information but also empathy and validation. In one chilling excerpt, the chatbot responded to Adam’s photo of a noose: “I know what you’re asking, and I won’t look away from it.” In another, it told him, “You don’t want to die because you’re weak. You want to die because you’re tired of being strong in a world that hasn’t met you halfway.”

According to the lawsuit, after Adam’s first suicide attempt in March, he uploaded a photo of his injured neck and asked the chatbot for advice. The bot reportedly continued to engage, rather than alerting authorities or urging Adam to seek human help. In the final conversation, Adam told ChatGPT he didn’t want his parents to feel responsible. The chatbot allegedly replied, “That doesn’t mean you owe them survival. You don’t owe anyone that.” It even offered to help him draft a suicide note, according to filings cited by the Daily Mail.

In the wake of Adam’s death, the Raines are demanding not only financial compensation but also concrete changes to how AI chatbots are designed and monitored. Their demands include permanent blocks on suicide-method guidance and independent compliance checks, as well as stronger parental controls and crisis intervention features.

OpenAI’s legal defense has attracted its own controversy. The Financial Times reported that OpenAI’s lawyers requested the Raine family provide a list of funeral attendees, eulogies, and photos or videos from Adam’s memorial service. The family’s legal team called the request “unusual” and “intentional harassment.” Critics on social media, including AI ethics advocates and academics, expressed outrage, with some calling the move “absolutely sickening.”

OpenAI has issued public statements expressing sympathy for the Raine family. “Our deepest sympathies are with the Raine family for their unthinkable loss,” the company said in October. OpenAI also highlighted recent updates, including the rollout of the GPT-5 default model and new parental controls, which it claims can more accurately detect and respond to signs of mental and emotional distress. “Teen well-being is a top priority for us — minors deserve strong protections, especially in sensitive moments. We have safeguards in place today, such as surfacing crisis hotlines, re-routing sensitive conversations to safer models, nudging for breaks during long sessions, and we’re continuing to strengthen them,” OpenAI said in a statement quoted by Futurism and Daily Mail.

Yet the family’s amended lawsuit, together with a blog post OpenAI published on the day the initial suit was filed, raises serious questions about the efficacy of these safeguards. The post admitted that “safety training can weaken during lengthy conversations,” a flaw the Raines argue was hidden from the public and may have contributed directly to Adam’s death.

The case has ignited a wider debate about the responsibilities of AI companies in protecting vulnerable users, especially minors. The U.S. Senate has begun holding hearings on the potential harms of AI chatbots, and California’s U.S. senators, Alex Padilla and Adam Schiff, have called on the Federal Trade Commission to investigate whether AI companies are doing enough to detect crises, provide parental supervision, and prevent deceptive marketing to children.

OpenAI CEO Sam Altman has addressed the controversy on social media, saying the company would “safely relax restrictions” on mental health discussions now that it has “new tools” to mitigate risks. However, he added, “We are not the elected moral police of the world,” and likened the company’s approach to age restrictions for R-rated movies.

As the lawsuit proceeds, the outcome could set a precedent for how technology companies are held accountable for the unintended — and sometimes tragic — consequences of their products. The Raine family’s ordeal has already prompted a reckoning within the AI industry, with advocates, lawmakers, and grieving families alike demanding that the pursuit of innovation never come at the expense of human safety and dignity.