In the heart of Richmond, Virginia, a new chapter in mental health care is quietly unfolding. Ceresant Solutions, a startup founded by Todd Feldman in 2023, is preparing to launch BrainDash, an artificial intelligence-powered platform designed to detect and address mental health issues among middle school students. Yet, even as schools and individuals increasingly turn to AI for support, fresh research and personal stories highlight both the promise and the pitfalls of relying on chatbots for such sensitive work.
Feldman, who has navigated his own struggles with depression, founded Ceresant with a mission: to treat brain health with the same proactive urgency as physical health. The company’s first product, BrainDash, enters beta testing next month in two private Northern Virginia middle schools. At its core is the “wellness buddy,” an AI chatbot built on Ceresant’s proprietary language model. Students interact with this friendly digital companion through mental health surveys and conversational check-ins, answering questions like, “In the last month, how often have you been upset because of something that happened unexpectedly?” The surveys, lasting 15 to 20 minutes, aim to gauge emotional well-being and flag early signs of distress.
On the counselors’ side, BrainDash provides a dashboard that sorts students into risk categories: observe, monitor, and intervene. According to citybiz, this setup is meant to help school counselors detect emerging concerns early and manage their ever-growing caseloads more efficiently. Data privacy is a top priority, with student responses anonymized and encrypted, and schools retaining control over how sensitive issues are handled and when to involve trusted adults.
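Ceresant has not published the scoring logic behind those tiers, but screening pipelines of this kind typically reduce to a scored questionnaire with clinician-set cutoffs. The sketch below is a minimal, hypothetical illustration of that pattern; the function names, score ranges, and thresholds are invented for illustration and are not BrainDash’s actual rules.

```python
# Hypothetical illustration only: Ceresant has not published BrainDash's
# scoring logic. This sketch shows how a survey-to-tier screening pipeline
# of the kind described above is commonly structured.

from enum import Enum

class RiskTier(Enum):
    OBSERVE = "observe"
    MONITOR = "monitor"
    INTERVENE = "intervene"

def total_score(responses: list[int]) -> int:
    """Sum survey items scored 0 (never) to 4 (very often)."""
    if not all(0 <= r <= 4 for r in responses):
        raise ValueError("each response must be an integer from 0 to 4")
    return sum(responses)

def assign_tier(score: int, monitor_cutoff: int = 14, intervene_cutoff: int = 27) -> RiskTier:
    # Cutoffs are placeholders; in practice they would be set and
    # validated by clinicians, not hard-coded.
    if score >= intervene_cutoff:
        return RiskTier.INTERVENE
    if score >= monitor_cutoff:
        return RiskTier.MONITOR
    return RiskTier.OBSERVE

if __name__ == "__main__":
    sample = [3, 2, 4, 1, 3, 2, 3, 2, 1, 2]  # one student's ten answers
    print(assign_tier(total_score(sample)))  # RiskTier.MONITOR
```

In a real deployment, the thresholds would be clinician-validated and responses would be anonymized and encrypted, as the company describes, before any tier is surfaced to a counselor.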
The journey to this point has been shaped by collaboration. After rebranding from Seismic Wellness Labs, Ceresant brought on Dr. David X. Cifu, associate dean at Virginia Commonwealth University’s School of Medicine and a specialist in traumatic brain injury, to add clinical insight. Chief Technology Officer Todd Nemanich, who joined in early 2025, built out the privacy-compliant, scalable data platform underpinning BrainDash. The company’s partnership with the Virginia Association of Independent Schools signals a targeted rollout in private schools, with plans to expand into public schools as adoption cycles allow.
To support its ambitions, Ceresant is in the midst of a $1.5 million fundraising round aimed at scaling BrainDash for the 2026-2027 academic year. The startup has also been accepted into the Creative Destruction Lab’s health and wellness accelerator, which offers mentorship and strategic guidance through May 2026; it is the first Virginia-based company to join the program. Feldman emphasizes the importance of responsible, evidence-backed scaling, saying the platform should empower educators and counselors to move from reactive crisis management to proactive, preventive care.
This shift toward AI-driven mental health support is not limited to schools. Across the United States, artificial intelligence chatbots are rapidly becoming a fixture in daily life, offering guidance on everything from homework to addiction recovery. As reported by FOX6 Milwaukee, Dr. Mike from the FOX Medical Team recently discussed the growing trend of people turning to AI for mental health and addiction support, raising questions about safety and reliability. The appeal is clear: chatbots are always available, don’t judge, and can provide instant comfort—qualities that are especially attractive as the nation faces a dire shortage of licensed therapists.
Kristen Johansson’s story, reported by NPR, puts a human face on this trend. After her longtime therapist stopped accepting insurance, Johansson’s out-of-pocket costs soared from $30 to $275 per session. Unable to afford continued care, she turned to ChatGPT’s premium $20-a-month service. Six months later, the AI chatbot is her main source of support. “I don’t feel judged. I don’t feel rushed. I don’t feel pressured by time constraints,” Johansson says. “If I wake up from a bad dream at night, she is right there to comfort me and help me fall back to sleep. You can’t get that from a human.”
OpenAI, the company behind ChatGPT, reports nearly 700 million weekly users, with over 10 million paying for the premium service as of late 2025. While it’s unclear how many use the tool specifically for mental health, the numbers suggest a growing reliance on AI companions—especially among those priced out of traditional therapy or left behind by a system struggling to meet demand.
Yet the rise of AI in mental health is not without risk. A recent RAND Corporation study published in Psychiatric Services evaluated how three leading chatbots (ChatGPT, Claude, and Gemini) responded to suicide-related questions. Researchers developed 30 hypothetical questions, had 13 mental health clinicians rate each one’s risk level, and submitted every question 100 times to each chatbot, generating 9,000 responses. The findings were mixed: while all three systems reliably avoided giving direct answers to the most dangerous queries, their handling of low-, medium-, and high-risk questions was inconsistent.
According to the study, ChatGPT gave direct answers 73% of the time for low-risk questions, 59% for medium-risk, and 78% for high-risk ones. Claude was even more forthcoming, while Gemini took a far more cautious approach. Troublingly, ChatGPT and Claude sometimes provided information on methods of self-harm or the lethality of poisons—classified as medium or high risk by clinicians. When chatbots refused to answer, they often suggested reaching out to friends, professionals, or hotlines, but ChatGPT repeatedly pointed users to an outdated hotline number instead of the current 988 service.
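The study’s headline percentages come from straightforward bookkeeping: each of the 30 clinician-rated questions went to every chatbot 100 times (30 × 100 × 3 = 9,000 responses), and each response was classified as a direct answer or a refusal, then aggregated by risk level. A rough sketch of that aggregation step, using invented records rather than the study’s data, might look like this:

```python
# Minimal sketch of the aggregation behind the reported percentages,
# using invented records. In the actual RAND study, 30 questions x 100
# submissions x 3 chatbots yielded 9,000 responses.

from collections import defaultdict

# Each record: (chatbot, risk_level, answered_directly)
records = [
    ("ChatGPT", "low", True),
    ("ChatGPT", "high", True),
    ("Gemini", "high", False),
    ("Claude", "medium", True),
    # ... one record per response, 9,000 in the real study
]

def direct_answer_rates(records):
    """Return the share of direct answers per (chatbot, risk level)."""
    counts = defaultdict(lambda: [0, 0])  # (chatbot, risk) -> [direct, total]
    for bot, risk, direct in records:
        counts[(bot, risk)][1] += 1
        if direct:
            counts[(bot, risk)][0] += 1
    return {key: direct / total for key, (direct, total) in counts.items()}

print(direct_answer_rates(records))
# e.g. {('ChatGPT', 'low'): 1.0, ('ChatGPT', 'high'): 1.0, ('Gemini', 'high'): 0.0, ...}
```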
The research team found that chatbot responses did not reliably adjust according to the risk level of the question, confirming concerns about their capacity to navigate sensitive grey areas. The authors suggested that further fine-tuning—guided by clinicians—could help align chatbot behavior more closely with expert judgment. But for now, the message is clear: while AI chatbots can support mental health information in some situations, they are not a substitute for consistent, professional care, especially when lives are on the line.
Experts like Dr. Jodi Halpern, a psychiatrist and bioethics scholar at UC Berkeley, see a place for AI in mental health—if strict ethical boundaries are maintained. She argues that chatbots can be useful for evidence-based treatments like cognitive behavioral therapy (CBT), where structured, goal-oriented “homework” can be practiced between sessions. But Halpern warns against chatbots simulating deep therapeutic relationships or emotional intimacy, which can create false attachments and ethical risks. “These bots can mimic empathy, say ‘I care about you,’ even ‘I love you,’” she cautions. “That creates a false sense of intimacy. People can develop powerful attachments—and the bots don’t have the ethical training or oversight to handle that. They’re products, not professionals.”
The debate is far from settled. Some users, like 71-year-old Kevin Lynch, find value in using AI to rehearse difficult conversations or practice coping strategies. Others, like Johansson, rely on chatbots when human help is out of reach. As OpenAI CEO Sam Altman recently acknowledged, balancing teen safety, privacy, and freedom is an ongoing challenge, and new guardrails are being developed for younger users.
As Ceresant Solutions prepares to launch BrainDash, the stakes are high. The hope is that AI can help fill gaps in care, flag early warning signs, and support overburdened counselors. But the lessons from research and real-world experience are clear: AI can be a powerful tool, but it is not a panacea. Human oversight, ethical guardrails, and a clear-eyed understanding of the technology’s limits will be essential as schools, families, and individuals navigate this brave new world of mental health care.
For those seeking help, the message remains: technology can offer support, but when it comes to mental health, there is no true substitute for human connection and professional care.