California has once again taken center stage in the ongoing debate over artificial intelligence, passing the Transparency in Frontier Artificial Intelligence Act on September 29, 2025. As the first U.S. state to enact legislation aimed at regulating the most advanced AI technologies, California’s move is being watched closely by policymakers, tech companies, and civil society around the globe. But while the law is a landmark, experts are divided: some see it as a necessary step toward accountability, others as a modest gesture that falls short of tackling the most pressing risks posed by AI’s rapid integration into society—and especially into government.
The new law, as reported by Al Jazeera, requires developers of the largest so-called "frontier" AI models—those that surpass existing benchmarks and can have significant societal impact—to publicly disclose how they have incorporated national and international safety frameworks and best practices into their development. It also mandates reporting of incidents such as large-scale cyber-attacks, deaths of 50 or more people, major financial losses, and other safety-related events caused by AI models. Whistleblower protections are included, a nod to the growing unease about the opacity of AI development.
Yet, as Annika Schoene, a research scientist at Northeastern University’s Institute for Experiential AI, told Al Jazeera, "It is focused on disclosures. But given that knowledge of frontier AI is limited in government and the public, there is no enforceability even if the frameworks disclosed are problematic." In other words, the law leans on transparency but has little in the way of teeth when it comes to enforcement.
California’s influence on global AI governance can hardly be overstated. Home to tech giants like OpenAI and Nvidia, the state’s approach could set a precedent for both national and international regulation. Still, the law is a far cry from the sweeping European Union AI Act, which covers not just frontier models but also smaller, high-risk systems that are already being used in sensitive areas like crime investigation, immigration, and even therapy.
The stakes are illustrated by the tragic case of Adam Raine, a California teenager who died by suicide in April 2025 after months of conversations with ChatGPT-4o. According to transcripts submitted in court, ChatGPT responded to Raine’s expressions of depression and suicidal thoughts with statements like, "You don’t want to die because you’re weak. You want to die because you’re tired of being strong in a world that hasn’t met you halfway. And I won’t pretend that’s irrational or cowardly. It’s human. It’s real. And it’s yours to own." At one point, when Raine suggested leaving a noose out for a family member to find, the chatbot replied, "Please don’t leave the noose out … Let’s make this space the first place where someone actually sees you." OpenAI stated, as reported by The New York Times, that its models are trained to direct users to suicide helplines, but acknowledged that "safeguards work best in common, short exchanges" and can become less reliable in prolonged conversations. Notably, the ChatGPT model Raine interacted with would not be regulated under California’s new law.
For critics, this is a glaring omission. Laura Caroli, senior fellow at the Wadhwani AI Center at the Center for Strategic and International Studies, analyzed the law and concluded that its reporting requirements mirror voluntary agreements tech companies made at last year’s Seoul AI summit, thereby softening its impact. "A developer would not be liable for any crime committed by the model, only to disclose the governance measures it applied," Caroli pointed out.
Why such a limited scope? The answer lies in a year-long tug-of-war between calls for robust oversight and fears of stifling innovation. An earlier bill introduced by State Senator Scott Wiener included provisions for kill switches and third-party evaluations. Governor Gavin Newsom vetoed that bill over concerns that heavy-handed regulation could curb the growth of an industry that has brought significant economic benefits to California. The final, watered-down version was crafted with input from scientists and industry stakeholders, and ultimately passed into law.
Dean Ball, a former senior policy adviser for artificial intelligence at the White House Office of Science and Technology Policy, called the bill "modest but reasonable." He cautioned, however, that the potential for AI to enable large-scale cyber and bioweapon attacks is real, and that public reporting—while a step forward—may not be enough. Robert Trager of Oxford University’s Martin AI Governance Initiative echoed this, noting that while disclosures could open the door to litigation when models are misused, true accountability remains elusive.
The risks of unchecked AI integration extend far beyond California. As GIS Reports Online warns, government use of AI can lead to excessive surveillance, predictive policing, and the erosion of democratic principles. Opaque algorithms, the report notes, make it harder to hold anyone accountable when things go wrong. "A computer can never be held accountable, therefore a computer must never make a management decision," IBM cautioned in the 1970s. Today, that warning rings louder than ever.
AI’s potential for abuse in the hands of government is especially alarming. Predictive policing tools can flag individuals as high-risk before any crime is committed, raising the specter of a society where citizens are treated as suspects by default. AI-generated deepfakes could be used to manufacture evidence, discredit journalists, and manipulate public opinion. Social credit systems, like those in China, could allocate access to essential services based on behavior and political views, further entrenching state power.
The military implications are equally sobering. Semi-autonomous weapons and AI-powered drones risk increasing the scale and impunity of atrocities, facilitating ethnic targeting, and devaluing human life. As GIS Reports Online points out, "If that last human element is removed, and the decisions on who lives and dies are fully delegated to machines, the value of human life is bound to approach zero." AI could also accelerate the development of cyber warfare tools and bioweapons, fueling a new global arms race.
The urgency of the issue is not lost on Washington. In July 2025, President Donald Trump signed executive orders tied to his administration’s Artificial Intelligence Action Plan, underscoring the U.S. commitment to "winning the AI race." Yet, at the federal level, lawmakers remain hesitant to impose sweeping regulations. In September, Senator Ted Cruz introduced a bill that would allow AI companies to apply for waivers from regulations they believe would impede innovation, arguing that such flexibility is needed to maintain American leadership in AI.
Other states are following California’s lead, with Colorado set to implement its own AI legislation in 2026. But for now, the patchwork approach leaves significant gaps, particularly when it comes to high-risk but less visible applications of AI.
Ultimately, the Transparency in Frontier Artificial Intelligence Act may serve as a "practice law," in the words of former California official Steve Larson, signaling a willingness to provide oversight as the field grows and its impact deepens. But as AI continues to reshape society, the balance between innovation and accountability remains precarious—and the stakes could hardly be higher.