California and New York are on the cusp of making history as the first U.S. states to enact sweeping laws aimed at reining in the potential dangers of frontier artificial intelligence (AI) models. With mounting concerns about the risks posed by advanced AI systems, lawmakers in both states have set their sights on new regulatory frameworks designed to prevent catastrophic harm—think dozens of deaths or billions of dollars in damages—caused by the most powerful AI technologies yet developed.
In California, the legislative spotlight shines brightly on Senate Bill 53 (SB 53), a measure that has already cleared both the state Assembly and Senate and now awaits a final decision on Governor Gavin Newsom’s desk. If signed into law, SB 53 would require the largest AI developers—those building the so-called “frontier models” at the cutting edge of the field—to adopt rigorous transparency and safety frameworks. This includes publishing detailed risk management frameworks, issuing transparency reports, and reporting safety incidents to state authorities. The bill also mandates whistleblower protections and threatens monetary penalties for companies that fail to live up to their commitments.
Frontier AI models, such as OpenAI’s GPT-5 and Google’s Gemini Ultra, have become household names in the tech world. These systems are capable of astonishing feats, processing vast troves of data and performing tasks that, just a few years ago, would have seemed like science fiction. But with great power comes great responsibility—and, increasingly, great risk. The specter of AI models being misused to create large-scale weapons systems, orchestrate cyberattacks, or even commit criminal acts has prompted lawmakers to act before disaster strikes.
According to Stateline, California’s bill would require large developers to implement and publicly disclose the safety protocols they use to mitigate risks, specifically those that could lead to incidents causing 50 or more deaths or over $1 billion in damages. Developers would also need to create a frontier AI framework outlining best practices and publish transparency reports detailing the risk assessments conducted during model development.
New York, meanwhile, is hot on California’s heels. State lawmakers there approved a similar measure in June, with Democratic Governor Kathy Hochul holding the final say as the year draws to a close. New York’s bill sets its own threshold: it compels developers to adopt safety policies designed to prevent critical harm—defined as the death or serious injury of more than 100 people or at least $1 billion in damages—caused by frontier AI models, whether through weaponization or criminal misuse.
These legislative moves come after a period of intense debate and reflection. Just last year, California Governor Gavin Newsom vetoed a stricter AI regulation bill, warning that it would apply “stringent standards to even the most basic functions” of large AI systems and might stifle innovation. He also cautioned that small models could be “equally or even more dangerous.” In the wake of that veto, the Joint California Policy Working Group on AI Frontier Models spent the following year crafting a report that stressed the importance of empirical research, policy analysis, and finding a balance between technological benefits and risks.
SB 53, which is seen as a more measured successor to last year’s failed bill, has nonetheless sparked fierce opposition from some quarters of the tech industry. Paul Lekas, senior vice president of global public policy at the Software & Information Industry Association, criticized the measure in a statement to Stateline, arguing, “The bill remains untethered to measurable standards, and its vague disclosure and reporting mandates create a new layer of operational burdens.” Lekas contended that the legislation risks stifling innovation without meaningfully improving safety—a refrain echoed by industry groups in both California and New York.
NetChoice, a trade association representing digital giants like Amazon, Google, and Meta, weighed in as well, sending a letter to Governor Hochul in June urging her to veto New York’s bill. Patrick Hedger, NetChoice’s director of policy, warned, “While the goal of ensuring the safe development of artificial intelligence is laudable, this legislation is constructed in a way that would unfortunately undermine its very purpose, harming innovation, economic competitiveness, and the development of solutions to some of our most pressing problems, without effectively improving public safety.”
Yet, not all voices in the tech sector are opposed. In a significant development on September 10, 2025, Anthropic—a leading AI lab—formally endorsed California’s SB 53. The company described the bill as a “trust but verify” approach to AI safety, signaling a willingness to support state-level oversight even as it continues to advocate for federal regulation. Anthropic’s endorsement is seen as a rare win for the legislation, which has faced strong resistance from groups such as the Consumer Technology Association and the Chamber of Progress.
According to TIME, Anthropic’s support lent momentum to SB 53 as it headed toward its final legislative votes. The bill’s requirements—mandating safety reports, risk frameworks, and whistleblower protections to address catastrophic misuse scenarios, including bioweapons and large-scale cyberattacks—are viewed by supporters as essential guardrails for a technology that is still rapidly evolving.
But why all the urgency? Recent research by NewsGuard Technologies has shed light on the vulnerabilities of modern AI systems. Over the past year, as chatbots have gained the ability to scour the internet for answers, their likelihood of parroting false information—including Russian disinformation—has increased, NewsGuard claims. In its study, six out of ten leading AI models repeated a false claim about the Moldovan Parliament speaker, demonstrating how easily malign actors can influence AI outputs by seeding the web with misleading content. NewsGuard asserts that chatbots now repeat false information more than one third of the time, up from 18% a year ago—though critics argue that the study’s small sample size may exaggerate the problem.
The risk isn’t limited to misinformation. Researchers at Palisade have demonstrated that autonomous AI agents can be weaponized, showing how a compromised USB cable could deliver an AI agent capable of sifting through a victim’s files to identify and steal valuable information. This proof-of-concept illustrates the scalability of AI-driven hacking, potentially exposing far more people to scams, extortion, or data theft than ever before.
Against this backdrop, the push for regulation is gaining traction. SB 53 and its New York counterpart are designed to force transparency and accountability on the part of AI developers—no easy feat, given the breakneck pace of AI innovation and the complex interplay of technological, economic, and legal factors. Supporters argue that without such measures, the public remains at risk from both intentional misuse and unintended consequences of frontier AI models.
Still, the debate is far from settled. Industry groups warn that overly prescriptive rules could hamstring America’s competitive edge in AI, while consumer advocates and some technologists insist that robust safeguards are overdue. The bills’ fate now rests with the governors of California and New York, whose decisions could set a precedent for the rest of the country—and perhaps the world.
As the AI landscape continues to shift, one thing is clear: the conversation about safety, transparency, and innovation is only just beginning. The outcome of these landmark bills may well shape the future of artificial intelligence for years to come.