President Donald Trump is poised to reshape the regulatory landscape for artificial intelligence (AI) in the United States with an executive order that would establish a single, nationwide rulebook, overriding the patchwork of state-level laws currently in place. The move, announced in a series of posts on Truth Social on December 9, 2025, and reiterated at a business roundtable the following day, has ignited a heated debate across political, industry, and advocacy circles about the future of AI oversight, states’ rights, and the balance between innovation and consumer protection.
Trump’s rationale is clear: he believes that allowing all 50 states to set their own AI rules and approval processes would stifle innovation and jeopardize America’s global lead in artificial intelligence. "There must be only One Rulebook if we are going to continue to lead in AI. We are beating ALL COUNTRIES at this point in the race, but that won’t last long if we are going to have 50 States, many of them bad actors, involved in RULES and the APPROVAL PROCESS. THERE CAN BE NO DOUBT ABOUT THIS!" Trump declared on Truth Social, warning, "AI WILL BE DESTROYED IN ITS INFANCY!"
According to Reuters, the White House has been mulling this executive order for weeks, considering not only legal challenges against state laws but also the possibility of withholding federal funding from states that refuse to comply. The executive order is expected to be issued during the week of December 9, 2025, after legislative attempts to preempt state-level AI regulations, including provisions tied to the "One Big, Beautiful Bill" and the National Defense Authorization Act, failed in Congress, as reported by The Daily Signal.
Trump’s administration frames the move as an essential step for American competitiveness. During the roundtable, Trump emphasized, "We’re leading artificial intelligence by a lot. We’re writing rules and regulations. We want to keep it at the federal level so it’s simple for the companies." He pointed to America’s leadership in building data centers and chip manufacturing facilities as evidence of the nation’s dominant position in AI infrastructure.
Industry leaders and analysts are split on the implications. Lisa Martin, a technology analyst at Lisa Martin Media, told Cybernews that a national rulebook could enhance oversight by creating consistent standards for safety, transparency, and accountability. "A strong federal framework could set a baseline, giving every user, regardless of location, the same rights and protections," she said. Michael Bell, founder and CEO of Suzu Labs, echoed this sentiment, arguing that a unified standard would eliminate the friction caused by companies having to navigate 50 different state rulebooks. "Companies can build once and deploy everywhere. For users, this means faster access to AI-powered products and clearer expectations about how those tools work," Bell said.
But not everyone is convinced. Critics, including some state officials and policy experts, warn that the executive order could undermine robust state-level protections, especially those aimed at safeguarding children and vulnerable groups. Several states, such as Texas and Colorado, have passed or are poised to enact laws addressing issues like algorithmic discrimination, deepfakes, and the use of chatbots by minors. Texas, in particular, has enacted the Securing Children Online through Parental Empowerment (SCOPE) Act and the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), both designed to shield minors from harmful content and synthetic pornography.
David Dunmoyer, associate vice president of campaigns at the Texas Public Policy Foundation, described the state’s approach as the result of extensive stakeholder engagement. "We had an AI council. We heard from government agencies and how they’re using AI. We had multiple hearings in the [state] house, in the [state] senate, hearing from the private sector, hearing from industry," Dunmoyer told The Daily Signal. "On top of that, we had a year plus long stakeholder process, specifically for that one bill, where we brought in over 250 industry leaders to hash out and build consensus around what a responsible AI bill would look like."
Daniel Cochrane, senior research associate at the Center for Technology and the Human Person at The Heritage Foundation, highlighted the role of states as "Americans’ first line of defense against Big Tech." He noted, "While Congress dithers—failing year after year to enact commonsense standards to protect kids from predatory social media platforms, secure privacy, and address emerging risks from generative AI—states are filling the void." Cochrane recommended a cooperative approach: "Both the federal government and states need to work together. That will require a transparent, deliberative process to ensure AI promotes flourishing for all Americans."
However, the Trump administration’s proposed order would shift much of the regulatory power to federal agencies like the Department of Justice and the Federal Trade Commission, potentially at the expense of more stringent state laws. David Sacks, the White House AI czar, has publicly downplayed concerns, stating that the national AI rule would not affect generally applicable state laws on child safety or local infrastructure, nor would it touch federal copyright law. He dismissed some state laws on algorithmic discrimination as "ideological meddling," urging that AI models "strive for the truth and be ideologically unbiased."
The political battle lines are sharply drawn. Some Republicans, including Florida Governor Ron DeSantis and Representative Marjorie Taylor Greene, have voiced strong opposition to federal preemption. DeSantis argued on X that Congress’s approach amounted to "an AI amnesty" that would block states from acting for a decade. Greene insisted, "There should not be a moratorium on states rights for AI. States must retain the right to regulate and make laws on AI and anything else for the benefit of their state. Federalism must be preserved."
Experts such as Adnan Masood, chief AI architect at UST, warn that an executive order alone cannot permanently displace state authority and predict a prolonged period of litigation. "When Washington centralizes control without guardrails, the loudest and largest industry voices tend to dominate – an outcome that rarely favors smaller innovators or user protections," Masood told Cybernews. Andrew Gamino-Cheong, CTO and co-founder of Trustible, cautioned that the national AI rule could create short-term uncertainty for users, as lawsuits and appeals over the order’s legality are likely to stretch on for years.
Policy experts remain divided on the merits of a federal moratorium. Ted Bolema, senior fellow with the Mackinac Center for Public Policy, supports the move, arguing that a unified framework would "avoid the patchwork problem and make it possible for AI innovators to innovate with a clearer set of rules." Conversely, Lance Christensen, vice president of government affairs and education policy at the California Policy Center, called for a baseline federal framework that still allows states to tailor laws to their unique needs, warning that otherwise, "major AI corporations could harness particular states’ regulations to crush their competition in other states."
As the executive order looms, the debate over AI regulation in the United States has become a microcosm of broader tensions between federal authority, states’ rights, and the powerful interests of the tech industry. Whether the national rulebook will foster innovation or erode critical safeguards remains to be seen, but one thing is certain: the coming months will be pivotal for the future of AI governance in America.