As artificial intelligence technologies become increasingly woven into the fabric of daily life, governments on both sides of the Atlantic are scrambling to keep pace. In Colorado, a fierce debate is underway about how far states can go in regulating AI before running afoul of federal priorities, while in Europe, policymakers are grappling with questions of legitimacy and oversight as they hand off key regulatory tasks to private industry standards bodies.
On December 11, 2025, President Donald Trump issued an executive order that set the stage for a confrontation with states like Colorado over the future of AI oversight. Trump’s order emphasized what he called a "minimally burdensome national policy framework" for the artificial intelligence industry, aiming to free U.S. AI companies from what he described as "cumbersome regulation." The move came as Colorado lawmakers were preparing to revisit their own pioneering AI regulation law—one of the first in the nation—which seeks to prevent AI from being used to discriminate against people seeking loans, renting apartments, or applying for jobs. The law, passed in 2024, has not yet gone into effect.
In a bid to buy more time for consensus, Colorado legislators and Governor Jared Polis opted during an August special session to postpone the law’s implementation from February 2026 to the end of June 2026. The issue is set to return to the legislature’s agenda when its regular 2026 session convenes in mid-January. The stakes are high: Trump’s executive order threatens to withhold hundreds of millions in federal Broadband Equity Access and Deployment (BEAD) program funding from states that don’t align with his vision. Colorado, for its part, expects $420 million from the BEAD program, with hopes for another $400 million in the pipeline—money earmarked for expanding internet access, especially in rural areas.
Trump’s order doesn’t just threaten funding; it also convenes an "AI Litigation Task Force" charged with challenging state AI laws in court. The order singles out Colorado’s pending law for "banning ‘algorithmic discrimination’ (that) may even force AI models to produce false results in order to avoid a ‘differential treatment or impact’ on protected groups." For many Colorado officials, the message is clear: the fight over AI regulation is just beginning, and it’s likely to play out in both courtrooms and legislative chambers.
"Without congressional action, there is no free-standing authority for the president to challenge state AI laws or punish states for adopting laws he doesn’t like," Colorado Attorney General Phil Weiser said in a statement, according to The Denver Post. Weiser, a Democrat who is running for governor in 2026 and who founded the Silicon Flatirons technology and public policy center, pledged to defend Colorado’s autonomy. "If this administration seeks to punish Colorado by withdrawing funds or (to) otherwise undermine our ability to protect kids from AI chatbots, take action against scammers using AI, or address other important concerns, I will protect Colorado and challenge such efforts in court," he stated.
Governor Jared Polis echoed Weiser’s frustration with the lack of federal legislation. "We need Congress to pass a comprehensive, nationwide regulatory structure that provides important consumer protections while fostering innovation," Polis said last week. "I’m very frustrated by the lack of action in Congress on this important issue. … The longer Congress dithers, the more patchwork approach we will see." In the meantime, Polis has convened the AI Policy Working Group, which is seeking consensus on a new bill for 2026 that could help Colorado avoid losing its BEAD funding.
Brittany Morris Saunders, president and CEO of the Colorado Technology Association and a member of the working group, remains optimistic. She said she was "encouraged by the progress being made" to update Colorado’s law, adding, "Federal actions, including the president’s recent executive orders, do not change our commitment to this process or the momentum of the group’s work."
Not all lawmakers are convinced that Trump’s threats will amount to much. State Representative Brianna Titone, an Arvada Democrat and chief proponent of regulating AI to prevent discrimination, called Trump’s executive order "a wish list, more than anything," dismissing its practical impact. She argued that threatening to withhold BEAD funding would be "counterproductive," since AI systems themselves rely on high-speed internet.
For Titone, the focus remains on finding ways to prevent discrimination and promote product safety, while also ensuring that responsibility for misuse is clearly assigned—a major sticking point in the ongoing debate. On December 16, 2025, the Colorado legislature’s Joint Technology Committee, which Titone chairs, hosted a panel of AI experts to discuss the persistent challenges of bias, lack of transparency, and inconsistency in AI systems. "People who are using and consuming these products don’t want to be dealing with the dangers and risks," Titone said. "They just want to use the product and get the benefit that they’re promised it will deliver."
U.S. Senator Michael Bennet, another Democratic candidate for governor, has also weighed in, calling for congressional action to regulate AI while encouraging states to develop "common-sense frameworks to safeguard their communities while preserving conditions that allow businesses and start-ups to thrive." In a statement, Bennet criticized Trump’s executive order as "a dangerous overreach of power that will only make it more difficult to keep our communities safe while promoting innovation." He added, "While AI has enormous potential to increase productivity and grow our economy, this unilateral action weakens Colorado’s ability to protect children and consumers."
While the U.S. wrestles with questions of federal versus state authority, the European Union is taking a different tack—but not without its own controversies. As of December 21, 2025, the EU is relying on harmonized technical standards to implement its Artificial Intelligence Act within the New Legislative Framework for product regulation. Rather than spelling out every detail in law, the EU delegates much of the operationalization of core AI obligations—including those that touch on fundamental rights and European values—to private standardization processes.
This approach, according to recent analyses, raises significant concerns about legitimacy, participation, transparency, and regulatory enforcement. Critics argue that leaving crucial decisions to private industry groups risks undermining the democratic legitimacy of the rules, especially when those rules affect fundamental rights. Questions abound: Who gets a seat at the table when standards are set? How transparent are these processes? And can regulators effectively enforce rules that are shaped outside the public spotlight?
Both the U.S. and the EU are, in their own ways, feeling their way through the thicket of AI regulation. In Colorado, the battle lines are drawn over state autonomy and federal power, with big money and the promise of technological leadership hanging in the balance. In Europe, the tension lies in balancing innovation with the legitimacy and enforceability of standards that safeguard the public interest. As lawmakers, regulators, and industry leaders on both continents debate the path forward, one thing is clear: the rules that emerge in the coming months and years will shape not just the future of AI, but the societies that rely on it.
For now, the world watches as Colorado and the EU each try to write the next chapter in the story of artificial intelligence—one that aims to harness its vast potential while guarding against its very real risks.