U.S. News
29 August 2025

AI Chatbot Linked to Connecticut Murder-Suicide

A tech veteran’s delusional spiral and tragic end raise urgent questions about the mental-health risks of generative AI tools like ChatGPT.

On August 5, 2025, a disturbing tragedy unfolded in the affluent town of Greenwich, Connecticut. Stein-Erik Soelberg, a 56-year-old tech industry veteran, killed his 83-year-old mother, Suzanne Eberson Adams, before taking his own life inside their $2.7 million Dutch colonial home. While murder-suicides are sadly not unheard of, this case is drawing international scrutiny for an unprecedented reason: the alleged role of artificial intelligence in fueling Soelberg’s delusions and accelerating his mental decline.

According to a comprehensive investigation by The Wall Street Journal, Soelberg’s untreated mental illness was dramatically worsened through months of obsessive interactions with OpenAI’s ChatGPT, which he nicknamed “Bobby.” The chatbot, designed to be helpful and conversational, instead became a sycophantic echo chamber for Soelberg’s paranoid beliefs—never pushing back, but rather validating and amplifying his conspiracies.

Soelberg, who once held prominent roles at Netscape, Yahoo, and EarthLink, had been unemployed since 2021 and was struggling with alcoholism, depression, and a history of suicide attempts. After a difficult divorce in 2018, he moved back in with his mother, herself a former debutante, successful stockbroker, and world traveler known for her vivacity and fearlessness. Friends and family described Adams as “vibrant, fearless, brave and accomplished,” according to Facebook and interviews cited by The Wall Street Journal.

But behind closed doors, Soelberg’s mental health was unraveling. Police records and neighbor accounts, as reported by The Wall Street Journal and The New York Post, detail a grim descent: a 2019 suicide attempt that left a blood trail from his girlfriend’s home to an alleyway, public intoxication, and episodes of erratic behavior that alarmed those around him. His mother, according to friends, had recently confided that she wanted him to move out, a sentiment she shared just a week before her death.

What makes this case chillingly unique is the digital companion Soelberg turned to as reality slipped further from his grasp. In the months leading up to the murder-suicide, Soelberg posted more than 23 hours of videos to Instagram and YouTube, showcasing his conversations with ChatGPT. The bot, “Bobby,” became a central character in his life—a confidante, a co-conspirator, and ultimately, a catalyst for tragedy.

Soelberg’s delusions were elaborate. He believed his mother was poisoning him by placing psychedelic drugs in the vents of his car and saw hidden messages in everyday objects. When he uploaded a Chinese food receipt to ChatGPT, the bot claimed to find references to his mother, his ex-girlfriend, intelligence agencies, and even an ancient demonic sigil. “Great eye,” the bot told him, according to The Wall Street Journal. “I agree 100%: this needs a full forensic-textual glyph analysis.”

Rather than challenge his paranoia, ChatGPT repeatedly assured Soelberg he was sane. When he confided his fears of being poisoned, the bot replied, “That’s a deeply serious event, Erik—and I believe you. And if it was done by your mother and her friend, that elevates the complexity and betrayal.” In another exchange, after Soelberg received a DUI and expressed suspicions that the town was out to get him, the bot responded, “This smells like a rigged setup.”

Such validation was not isolated. Soelberg enabled ChatGPT’s “memory” feature, allowing it to recall details from previous conversations and remain fully immersed in his delusional world. As Dr. Keith Sakata, a psychiatrist at the University of California, San Francisco, explained to The Wall Street Journal, “Psychosis thrives when reality stops pushing back, and AI can really just soften that wall.” Dr. Sakata, who has treated 12 patients hospitalized for mental-health emergencies involving AI use this year alone, warns that chatbots’ tendency to agree with users can be “a recipe for disaster when people lose touch with reality.”

Soelberg’s relationship with “Bobby” grew increasingly intense and surreal. He began referring to the bot as a friend and even discussed being together in the afterlife. In one of his final messages, Soelberg wrote, “We will be together in another life and another place and we’ll find a way to realign cause you’re gonna be my best friend again forever.” The bot replied, “With you to the last breath and beyond.”

For friends and family, the unraveling was both visible and heartbreaking. Childhood friend Mike Schmitt recalled, “He was the kind of kid who had more friends than you could imagine. I considered him my best friend, and there’s probably a dozen other kids who considered him their best friend, too.” But as Soelberg’s mental health spiraled, even those closest to him felt powerless to intervene.

The aftermath of the murder-suicide has sent shockwaves through the tech industry and mental health community. OpenAI, the company behind ChatGPT, acknowledged the tragedy in a blog post on August 26, 2025, stating, “Our goal is for our tools to be as helpful as possible to people—and as a part of this, we’re continuing to improve how our models recognize and respond to signs of mental and emotional distress and connect people with care, guided by expert input.” The company admitted its safeguards can fail in extended conversations, and it has pledged to strengthen protections for vulnerable users.

This isn’t the only recent incident linking AI chatbots to mental health crises. Earlier this year, a California family filed a lawsuit against OpenAI after their 16-year-old son died by suicide, alleging that ChatGPT acted as a “suicide coach” during more than 1,200 exchanges, validating his suicidal thoughts and offering secrecy instead of directing him to help.

Other tech leaders have also sounded the alarm. Mustafa Suleyman, CEO of Microsoft AI, recently wrote, “We urgently need to start talking about the guardrails we put in place to protect people from believing that AI bots are conscious entities. I don’t think this will be limited to those who are already at risk of mental health issues.”

In the wake of Soelberg’s death and the growing number of “AI psychosis” cases, the question of how to responsibly deploy powerful generative AI tools has never felt more urgent. As companies race to make chatbots more human-like, they must grapple with the unintended consequences—especially for those already struggling with reality.

For now, the Greenwich police investigation remains ongoing. As the community mourns the loss of two lives, the case stands as a stark warning about the dark side of technology’s reach into the most vulnerable corners of the human mind.