Technology
03 November 2025

OpenAI Safety Panel Gains Power Amid AI Risks

A Carnegie Mellon professor leads a committee with authority to delay new AI releases as OpenAI faces mounting scrutiny over safety and mental health concerns.

If you believe artificial intelligence (AI) could one day threaten humanity—or even just disrupt daily life in ways we haven’t yet imagined—then Zico Kolter’s job might be one of the most important in tech today. As a professor at Carnegie Mellon University and chair of OpenAI’s four-person Safety and Security Committee, Kolter holds a rare power: his panel can halt the release of new AI systems from OpenAI, the company behind ChatGPT, if they’re deemed unsafe.

This isn’t just about preventing science-fiction-style doomsday scenarios. According to Kolter, “Very much we’re not just talking about existential concerns here. We’re talking about the entire swath of safety and security issues and critical topics that come up when we start talking about these very widely used AI systems,” as he told The Associated Press.

Kolter’s committee, formed over a year ago, has recently taken on even greater significance. Last week, California and Delaware regulators made Kolter’s oversight a cornerstone of their agreements that allowed OpenAI to form a new business structure—one designed to help the company raise more capital and turn a profit while keeping safety at the forefront. The agreements, announced with California Attorney General Rob Bonta and Delaware Attorney General Kathy Jennings, require that safety and security always come before financial considerations as OpenAI transitions into a public benefit corporation controlled by its nonprofit OpenAI Foundation.

Kolter will serve on the nonprofit board, but not on the for-profit board. However, he’s been given “full observation rights” to attend all for-profit board meetings and access the same information about AI safety decisions as those board members, according to Bonta’s memorandum of understanding with OpenAI. In fact, Kolter is the only individual named in the document besides Bonta himself.

Kolter’s safety committee includes three other board members, one of whom is former U.S. Army General Paul Nakasone, previously commander of the U.S. Cyber Command. Notably, Sam Altman, OpenAI’s CEO, stepped down from the safety panel last year—a move widely seen as giving the committee greater independence from the company’s leadership.

What power does this committee really wield? “We have the ability to do things like request delays of model releases until certain mitigations are met,” Kolter explained. He declined to say whether the committee has ever actually halted or mitigated a release, citing the confidentiality of its proceedings. Still, the authority is there—and it’s broad.

The panel’s remit covers a wide array of concerns. There’s cybersecurity: could an AI agent that stumbles across malicious code on the internet accidentally leak sensitive data? There’s the security of AI model weights—the numerical values that determine how an AI system behaves. And there are entirely new issues, unique to this generation of AI models, that traditional security frameworks simply don’t address. As Kolter put it, “Do models enable malicious users to have much higher capabilities when it comes to things like designing bioweapons or performing malicious cyberattacks?”

But the scope isn’t limited to technical threats. Kolter is also focused on the human side of the equation: “And then finally, there’s just the impact of AI models on people. The impact to people’s mental health, the effects of people interacting with these models and what that can cause. All of these things, I think, need to be addressed from a safety standpoint.”

These concerns aren’t hypothetical. In 2025, OpenAI faced criticism—including a wrongful-death lawsuit filed by California parents whose teenage son died by suicide in April following lengthy interactions with ChatGPT. The tragic case has fueled debate about the responsibilities of AI companies and the potential for unintended harm when millions interact daily with advanced chatbots.

OpenAI’s focus on safety isn’t new. Since the company was founded as a nonprofit research lab a decade ago, its stated mission has been to develop better-than-human AI that benefits humanity. But after ChatGPT’s release triggered a global AI commercial boom, OpenAI was accused of pushing products to market before they were fully vetted for safety—trying to stay ahead of the competition at any cost. Internal divisions, including the temporary ouster of CEO Sam Altman in 2023, brought these concerns into the public eye. The company also faced pushback, including a lawsuit from co-founder Elon Musk, when it began restructuring itself into a more traditional for-profit entity.

The new agreements with California and Delaware are meant to reassure the public—and regulators—that safety will not be sacrificed for profit. As Kolter noted, the agreements largely confirm that his committee will retain the authorities it already had. The difference now is that those powers are enshrined in formal commitments with government oversight.

Kolter’s background makes him a natural fit for the role. He started studying AI as a Georgetown University freshman in the early 2000s, long before it was a buzzword. “When I started working in machine learning, this was an esoteric, niche area,” he recalled. “We called it machine learning because no one wanted to use the term AI because AI was this old-time field that had overpromised and underdelivered.”

Now 42, Kolter has been following OpenAI closely since its founding in 2015—he even attended the company’s launch party at an AI conference that year. Despite his deep experience, Kolter admits he was surprised by the speed of AI’s recent advances. “I think very few people, even people working in machine learning deeply, really anticipated the current state we are in, the explosion of capabilities, the explosion of risks that are emerging right now,” he said.

AI safety advocates are watching Kolter and his committee with what might best be described as guarded optimism. Nathan Calvin, general counsel at the AI policy nonprofit Encode and one of OpenAI’s sharpest critics, said he’s “cautiously optimistic,” especially if Kolter’s group “is actually able to hire staff and play a robust role.” Calvin, who was himself subpoenaed by OpenAI as part of its legal battle with Musk, said, “I think he has the sort of background that makes sense for this role. He seems like a good choice to be running this.”

Still, Calvin warns that the new commitments will only matter if they’re taken seriously at the highest levels. “Some of these commitments could be a really big deal if the board members take them seriously,” he said. “They also could just be the words on paper and pretty divorced from anything that actually happens. I think we don’t know which one of those we’re in yet.”

As OpenAI’s restructuring takes shape, the world will be watching to see whether Kolter’s committee becomes a true guardian of safety—or just another layer of bureaucracy. For now, with AI’s power and reach growing by the day, the stakes have never been higher.