
On March 20, the Trump White House released a document that reads less like a policy blueprint and more like a formal notice to the states: your jurisdiction over artificial intelligence is ending. The National Policy Framework for Artificial Intelligence instructs Congress to preempt any state AI law that imposes what the administration calls an "undue burden," replacing fifty regulatory regimes with a single, federally defined minimum standard.[1] The framework lands after more than a year of escalating federal pressure on state-level AI governance - and it reveals an administration that has moved from signals to demands.
The new framework is not a sudden pivot. It is the third and most concrete act of a deliberate campaign. The Trump administration first made its position clear in July 2025, when its AI Action Plan called state regulatory regimes a source of dangerous "fragmentation."[2] Then in December 2025, Trump signed an executive order establishing a DOJ AI Litigation Task Force to challenge state AI laws in federal court on Dormant Commerce Clause grounds, and directed the Commerce Department to publish a comprehensive review of state AI laws by March 11, 2026.[2] The March 20 framework is the direct legislative follow-through that order anticipated: a set of draft principles that White House AI czar David Sacks and White House science advisor Michael Kratsios are now expected to turn into actual legislation.
What makes this escalation significant is the tool being deployed. Executive orders cannot preempt state law - only Congress or the courts can do that.[2] By releasing a formal legislative blueprint, the administration is acknowledging the limits of executive action and placing its bet on Capitol Hill instead.
The document is organized around seven pillars. The first six cover child safety, community protection, intellectual property, free speech, innovation, and workforce development. The seventh - and operative - section calls on Congress to establish a federal preemption framework: prohibiting states from regulating AI development on the grounds that it is "an inherently interstate phenomenon with key foreign policy and national security implications," barring states from penalizing AI developers for third-party misuse of their models, and preventing state laws from burdening AI-assisted activity that would be legal without AI involvement.[1]
The administration is careful to carve out exceptions: states may retain authority over child safety enforcement, local zoning of data centers, and their own procurement of AI for public services like law enforcement and education.[1] Those carve-outs are tactically designed. They map almost precisely onto the bipartisan concerns that have historically united AI-skeptical Republicans - Sen. Marsha Blackburn chief among them - with Democrats wary of industry capture. Whether they are sufficient to hold a coalition together in Congress is a separate question.
On intellectual property, the framework stakes out a notable position: the administration "believes that training of AI models on copyrighted material does not violate copyright laws" but declines to seek a legislative resolution, explicitly leaving the question to the courts.[1] That framing was welcomed by AI Progress, the industry coalition including Amazon, Anthropic, Cohere, Google, Meta, Microsoft, Midjourney, and OpenAI.[3] It is also, conveniently, a way of avoiding a vote on the single most contentious question in AI governance without appearing to dodge it.
If enacted, the framework would effectively nullify the most ambitious state-level AI legislation passed to date. Colorado's AI Act - which requires impact assessments and transparency from developers of high-risk AI systems, and which Sacks explicitly called "probably the most excessive" - would be among the first casualties.[2] California's AB 2013, requiring training data disclosures, and New York City's Local Law 144 on algorithmic hiring tools would both be at risk.[2] The December executive order had already signaled these as targets; the legislative framework would make their elimination permanent rather than dependent on litigation outcomes.
The practical effect on the AI industry would be substantial. Rather than building compliance infrastructure for Colorado's February 2026 requirements, California's disclosure rules, and dozens of emerging state frameworks simultaneously, developers would face a single federal baseline - one written by an administration that has explicitly committed to a "minimally burdensome" standard.[1] The White House is not merely arbitrating between regulatory regimes. It is choosing the most permissive one and nationalizing it.
House Republican leaders endorsed the framework almost immediately, pledging to work "across the aisle" on legislation.[3] The Senate is more complicated. Blackburn, whose support would be essential in a chamber with no votes to spare, called the framework a "roadmap" while carefully reserving the right to strengthen it.[3] Democrats have been sharper: Rep. Josh Gottheimer of New Jersey said the framework "fails to address key issues, including strong accountability for AI companies, under the guise of protecting children, communities, and creators," and warned that Americans "need protection" from a sector that would become "the Wild West" under federal minimalism.[3]
There is also a structural problem that even supporters acknowledge: this is a midterm election year. Passing sweeping federal legislation in that window requires a level of bipartisan momentum that the framework, as released, has not yet generated. Its child-safety provisions are designed to be the tent-pole - the provision that makes the broader package politically viable - but safety advocates note that the document says nothing about the risks they consider most severe: autonomous AI agents operating without meaningful human oversight, or the large-scale displacement of workers.[3]
Beneath the policy mechanics, the White House is making a specific wager: that the regulatory risk to American AI competitiveness is greater than the risk of under-regulating a transformative technology. That is a defensible position in the abstract, but it requires trusting that industry-led standards and sector-specific agencies - the framework explicitly opposes creating any new AI regulatory body[1] - will catch the problems that state-level experimentation was beginning to surface.
Colorado delayed its AI Act's enforcement from February to June 2026; Utah quietly narrowed its own AI legislation in 2025.[2] The administration will cite those retreats as evidence that states themselves recognize the limits of their approach. Critics will argue they are the product of federal coercion, not regulatory wisdom. Both readings are partly accurate. The question the framework ultimately poses - who should govern AI, and toward what ends - is one that Congress has so far preferred not to answer. The Trump administration has just made it harder to postpone.
[1] White House, National Policy Framework for Artificial Intelligence: Legislative Recommendations (March 20, 2026)
[2] Paul Hastings LLP, "President Trump Signs Executive Order Challenging State AI Laws" (December 16, 2025)
[3] PBS NewsHour / Associated Press, "White House urges Congress to take a light touch on AI regulations in new legislative blueprint" (March 20, 2026)