
In the absence of a comprehensive federal AI law, American states have spent three years building their own. The results are uneven, overlapping, and accelerating. In the first three months of 2026 alone, legislatures in Washington, Utah, Oregon, Virginia, and Florida have passed or nearly passed significant AI-related measures - covering everything from chatbot safety protocols to deepfake liability to human oversight requirements in health insurance. At least 78 chatbot bills have been introduced across state legislatures this year.[1]
The burst of activity is not coincidence. It is the predictable consequence of congressional paralysis. With no federal floor to stand on, states have acted as laboratories of democracy - and the experiments are starting to yield laws that real people and companies must comply with. But a new federal counteroffensive is now underway, and its success or failure will determine whether those experiments survive.
The most productive state session of early 2026 was Utah's. Despite running just under seven weeks - one of the shortest sessions in the country - Utah lawmakers sent nine AI-related bills to Gov. Spencer Cox before adjourning.[2] The package spans four domains: AI and digital devices in schools, deepfake protections, human oversight in health insurance authorization, and age verification for harmful online content.
The deepfake provisions are among the most substantive. HB 276 creates two distinct legal frameworks: the Digital Voyeurism Prevention Act, establishing civil liability for non-consensual intimate imagery and mandating notice-and-takedown procedures, and the Digital Content Provenance Standards Act, requiring large online platforms to detect, disclose, and preserve provenance data in distributed content.[2] SB 256 extends existing defamation law explicitly to AI-generated content, closing a gap that courts had been navigating inconsistently.[2]
Washington closed its session on March 12 with three AI bills given final approval in a single evening. HB 1170 requires AI operators to disclose to users when content has been developed or modified by AI.[3] HB 2225, a chatbot safety bill, imposes self-harm protocols and disclosure requirements on companion AI platforms and extends specific protections for minors.[3] A third bill, SB 5395, regulates the use of AI in health insurance decisions.[3]
Oregon passed SB 1546 in early March - the first chatbot bill to clear a state legislature in 2026. Sponsored by Sen. Lisa Reynolds, it requires companion AI platforms to disclose their non-human nature, establishes protocols for interactions involving suicidal ideation or self-harm, and includes a private right of action with statutory damages.[4]
Virginia sent three bills to Gov. Abigail Spanberger before its March 14 adjournment: HB 580 on AI fraud and abuse, HB 797 establishing a framework for independent verification organizations to assess AI systems, and SB 245 requiring social media platforms to exercise reasonable care to protect minors from heightened harm.[5]
Florida's story is the exception that proves the rule. Gov. Ron DeSantis's AI Bill of Rights - SB 482, which would create consumer rights around AI use, impose companion chatbot parental controls, and restrict government contracting with AI companies that sell personal data - passed the Senate 35-2 on March 4.[6] It then stalled in the House and appears unlikely to pass before the legislature adjourns, illustrating that even popular proposals face structural barriers in divided chambers.
State AI Legislation Tracker: March 2026

| State | Bills Passed | Key Focus Areas | Status |
|---|---|---|---|
| Utah | 9 | AI transparency, liability, consumer protection | Passed legislature, awaiting governor |
| Virginia | 3 | Algorithmic accountability, deepfakes, hiring bias | Passed legislature, awaiting governor |
| Washington | 3 | AI safety standards, automated decision systems | Passed legislature, awaiting governor |
| Florida | 1 | AI Bill of Rights, transparency requirements | Passed Senate 35-2, stalled in House |
| Texas | 1 | Frontier model oversight, incident reporting | In committee |
| California | 0 | Broad AI safety (SB 1047) | Vetoed 2024; new bills in early drafting |
The Trump administration watched the state-level surge and decided to stop it. On December 11, 2025, President Trump signed Executive Order 14365, "Ensuring a National Policy Framework for Artificial Intelligence," declaring it U.S. policy to achieve "global AI dominance through a minimally burdensome national policy framework for AI."[7] The order did not itself invalidate any state law. Instead, it set a sequenced process in motion.
On January 9, 2026, Attorney General Pam Bondi established the Department of Justice's AI Litigation Task Force, directing it to challenge state AI laws "on grounds that such laws unconstitutionally regulate interstate commerce, are preempted by existing Federal regulations, or are otherwise unlawful."[8] The task force has not yet filed any lawsuits - the executive order contemplates that the Commerce Department must first identify specific targets.
That identification deadline was March 11, 2026. The Commerce Secretary was required to publish an assessment of state AI laws deemed "onerous," with particular focus on laws that "require AI models to alter their truthful outputs" or compel disclosures that may implicate First Amendment concerns.[9] The executive order specifically names Colorado's AI Act - which requires reasonable care to prevent "algorithmic discrimination" in high-risk AI systems and is scheduled to take effect June 30, 2026 - as the initial target, but the potential scope is far wider.[9]
A separate March 11 deadline required the FTC to issue a policy statement on whether its Section 5 authority over unfair and deceptive practices preempts state laws that require AI developers to adjust model outputs to mitigate bias - a theory legal analysts have described as untested.[9]
Together, the Commerce report and FTC statement are designed to feed a third prong: DOJ litigation. Commerce identifies targets, FTC reframes the legal authority, and DOJ challenges in federal court. The three-pronged structure is explicit in the executive order's design.[9]
The administration's preemption theory faces significant legal obstacles. Express preemption - in which federal law explicitly displaces state law - is not available here, because no comprehensive federal AI statute exists. The more viable path is conflict preemption: arguing that specific state laws stand as an obstacle to federal AI objectives. But courts scrutinize conflict preemption claims carefully, and AI litigation at the appellate level routinely takes two to four years to resolve.[9]
Critically, state laws already in effect remain fully enforceable until a court grants an injunction. California's AI transparency requirements, Texas's AI governance framework, and every bill signed this session continue to apply to companies operating in those states regardless of what the DOJ task force files. The compliance picture for companies is therefore more uncertain, not less: the question is no longer "which of fifty state laws applies to me?" but "which of my existing obligations might be voided, and on what timeline?"
The administration has one additional lever beyond litigation: the Broadband Equity, Access, and Deployment (BEAD) program. The executive order conditions approximately $21 billion in remaining broadband infrastructure funds on states not maintaining "onerous" AI laws - a financial pressure that could prompt some states to self-moderate even before federal courts weigh in.[9]
The federalism argument the administration is making has legitimate substance. AI systems are inherently interstate - a model trained in one state, deployed on servers in another, and used by consumers in a third does not map cleanly onto state regulatory boxes. Conflicting compliance requirements across fifty jurisdictions impose real costs, particularly on smaller companies and startups that lack the legal infrastructure to track the divergence.
But the counter-argument is equally serious: states filled the regulatory vacuum precisely because Congress failed to act. The Colorado AI Act, the most comprehensive state law and the administration's named target, was developed over multiple years with extensive industry and civil society input. Preempting it without replacing it with equivalent federal protections does not solve the underlying problem - it removes existing guardrails without building new ones.
That is the core tension. The administration is not proposing a federal AI law to replace the state patchwork. It is proposing to eliminate the patchwork and leave the field largely unregulated. Whether that outcome serves the public interest - or simply serves the industry groups that lobbied against state-level accountability requirements - is the political question at the center of the next several years of AI governance.
[1] Troutman Pepper Locke: Proposed State AI Law Update: March 9, 2026 (78 chatbot bills introduced across state legislatures)
[2] Transparency Coalition: At session's end, Utah legislators send nine AI bills to governor's desk
[3] Transparency Coalition: AI Legislative Update, March 13, 2026
[4] Transparency Coalition: Oregon lawmakers pass major chatbot bill
[5] Transparency Coalition: AI Legislative Update, March 13, 2026
[6] Transparency Coalition: AI Legislative Update, March 6, 2026
[7] Executive Order 14365: Ensuring a National Policy Framework for Artificial Intelligence, The White House (Dec. 11, 2025)
[8] DOJ Memorandum: AI Litigation Task Force, Attorney General Pam Bondi (Jan. 9, 2026)
[9] Baker Botts: March 2026 Federal Deadlines That Will Reshape the AI Regulatory Landscape