As the Trump administration moves to roll back federal regulations on artificial intelligence (AI), states across the U.S. are stepping up to safeguard their residents by drafting and implementing their own policies on this rapidly evolving technology. The effort comes after President Trump signed an executive order revoking Biden-era initiatives aimed at ensuring the safe use of AI, including guidelines to mitigate risks associated with AI bias and civil rights violations.
In 2024, 31 states adopted resolutions or enacted legislation related to artificial intelligence, and so far in 2025 nearly every state has introduced AI-related bills. Colorado became the first state to establish comprehensive AI regulations in 2024, setting a precedent that others are now following.
This year, Virginia followed suit, passing comprehensive anti-discrimination legislation meant to hold companies accountable for AI bias in areas such as hiring, housing, and healthcare. If signed into law by Republican Governor Glenn Youngkin, the measure would take effect in July 2026 and require businesses using high-risk AI systems to assess those risks and clearly document how they intend to deploy the technology.
Amid ongoing debates about the implications of AI, many states are also focusing on curbing the spread of deepfakes—digitally manipulated images and videos that can deceive viewers, particularly during elections. States like Montana and South Dakota are working on legal frameworks to discourage the use of political deepfakes, while others, such as Hawaii and New Mexico, are drafting measures with civil and criminal penalties for distributing non-consensual explicit deepfake media.
Legislators in various states, including California and Arkansas, have also introduced bills to regulate the role of AI in healthcare and insurance decisions. Recently, the Utah legislature approved a bill aimed at protecting mental health patients from potentially harmful interactions with AI chatbots, marking a significant step towards ensuring ethical AI deployment in sensitive healthcare contexts.
Serena Oduro, a senior policy analyst at Data & Society, said states will need to take on a larger role in AI regulation as federal oversight recedes. "If we continue with the road that Trump is on, I think states will have to step up because they’re going to need to protect their constituents," Oduro remarked, noting the widespread fear surrounding unregulated AI technology.
Legislators are not motivated solely by consumer protection; they also see the promise of innovation that AI can bring. In Washington state, Republican Representative Michael Keaton has introduced legislation to create a grant program for small businesses that use AI on statewide projects, including wildfire tracking and cybersecurity. The Washington state House approved the grant program earlier this month; it emphasizes ethical AI use and requires risk assessments from applicants. Keaton emphasized the dual focus on innovation and responsibility, stating, "We’re driving for innovation and we’re trying to get the monies appropriated to be able to take advantage of that innovation, but we want to do it in a smart way."
The rush to regulate AI has produced a complex, fragmented landscape of laws across the states, posing challenges for AI developers trying to navigate the varying requirements. "I think the industry is struggling to figure out how to comply with all of these laws were they to pass," said Paul Lekas, senior vice president and head of global public policy and government affairs at the Software & Information Industry Association, pointing to differing state definitions of AI and the distinct rules each state imposes on developers, distributors, and consumers.
The patchwork of legislation puts developers in a difficult position: while trying to innovate, they must also manage compliance obligations that vary from state to state and could stall their projects if not navigated carefully. Meanwhile, the national conversation around AI is evolving rapidly, driven by growing public awareness and the increasing accessibility of generative AI tools, such as OpenAI's ChatGPT, which have brought the technology into everyday discourse.
Overall, as states continue to forge ahead with their respective AI policies, the balance between innovation and regulation will shape the future of artificial intelligence in America. The outcome of these state-level initiatives will not only influence how AI is integrated into various sectors but also determine the trajectory of consumer rights and ethical standards in a technology that is poised to be deeply intertwined with everyday life.