On the morning of March 9, Anthropic filed two federal lawsuits against the Trump administration - one in the Northern District of California, one in the federal appeals court in Washington, D.C. - asking courts to declare unlawful and block the Pentagon's designation of the company as a "Supply-Chain Risk to National Security."[1] The move was the culmination of a weeks-long standoff that has produced one of the most consequential legal confrontations in the short history of AI governance: a sitting U.S. administration using national security law against a domestic company to punish it for the content of its published safety policy.
The dispute is, at its core, a First Amendment case dressed in the language of national security. And embedded within it is a paradox that neither side has been eager to explain: the Pentagon moved to designate Anthropic a threat to the national supply chain while simultaneously deploying Claude - Anthropic's flagship model - in active military operations against Iran.[2]
Anthropic's relationship with the Pentagon had, until recently, been mutually beneficial. In July 2025, the company signed a $200 million contract with the Department of War, and Claude became, reportedly, the only frontier AI model operating on the department's classified systems.[3] Through its partnership with Palantir Technologies, Claude was integrated into the Maven Smart System, described by Bloomberg as the Pentagon's most sophisticated AI-enabled targeting platform, and was used to identify targets in the planning of military strikes on Iran.[4]
Anthropic's usage policy had always contained two restrictions: Claude would not be used to enable lethal autonomous weapons systems operating without human oversight, and it would not be used for mass surveillance of American citizens.[1] The company says the Pentagon had, until recently, accepted those terms.
That changed on February 24, when Secretary of War Pete Hegseth met with Anthropic CEO Dario Amodei and delivered a formal ultimatum: remove all usage restrictions and agree to a policy permitting "all lawful use" of the technology - with no carve-outs. The Pentagon sent what it described as its "last and final offer" and demanded a response by 5:01 p.m. on February 27.[5] Amodei refused, stating that the company "cannot in good conscience" agree, and that the Pentagon had made "virtually no progress" on preventing Claude's use for mass surveillance or fully autonomous weapons.[3]
President Trump responded by ordering every federal agency to "IMMEDIATELY CEASE all use of Anthropic's technology." Hours later, Hegseth designated Anthropic a supply-chain risk and directed that "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." A six-month transition period was granted to the Pentagon itself.
Then, less than 24 hours after issuing the designation, the Department used Claude to conduct strikes on Iran.[2]
The 48-page complaint, filed in San Francisco, advances three distinct legal claims. First, that the designation constitutes unlawful retaliation for protected speech under the First Amendment: Anthropic's usage policy is an expression of its views on AI safety, and the government cannot weaponize its regulatory power to suppress that expression.[1] "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," the complaint states directly.[1]
Second, the suit argues the actions exceed the Pentagon's statutory authority - that "no federal statute authorizes the actions taken here."[6] Supply-chain risk designations are a tool created by Congress for use against foreign adversaries and companies with foreign adversary ties. Applying them to a domestic company with no such ties, the complaint argues, is an improvised use of law that has no legislative basis.
Third, Anthropic claims a violation of Fifth Amendment due process: the designation was made without adequate notice, without a hearing, and without any meaningful opportunity to respond before the economic damage was inflicted.[1]
The government has countered that private contractors cannot dictate the terms on which the military uses technology in wartime, and that all proposed uses would be "lawful."[7] That framing, notably, sidesteps the First Amendment question entirely.
The supply-chain risk label has historically been reserved for companies from foreign adversary nations; Huawei is the paradigm case. Applying it to Anthropic makes it the first domestic U.S. firm ever to receive that designation publicly.[7]
The consequences extend far beyond the Pentagon. The designation requires every defense contractor, supplier, and partner to certify that they are not using Anthropic's models. For a company whose technology is embedded across commercial and government applications, the practical effect is a sweeping economic embargo - enforced not by Anthropic's competitors winning on merit, but by the state removing the company from the market by decree.
Dean Ball, a senior fellow at the Foundation for American Innovation and former White House AI policy adviser who helped write the Trump administration's AI Action Plan in 2025, was unusually direct in his condemnation. Ball described the designation as "attempted corporate murder" and said that Hegseth had "announced a desire to kill Anthropic." Writing on Substack before the lawsuits were filed, he warned that "even if Hegseth backs down from his threats against Anthropic, great damage has been done," and drew explicit comparisons to the business environment in China - noting, pointedly, that DeepSeek had not been designated a supply-chain risk while a domestic American firm had been.[8]
More than three dozen AI professionals from OpenAI and Google, including Google DeepMind Chief Scientist Jeff Dean, filed an amicus brief with the court on the day the lawsuits were submitted, arguing the designation poses an existential threat to the U.S. AI industry's willingness to engage with the government.[7]
The most striking feature of this confrontation is the contradiction the government has yet to explain. Claude is, by the Pentagon's own account, its most widely deployed frontier AI model and the only frontier model on its classified systems.[1] It is reportedly being used - today - in the most operationally significant military missions the U.S. is running. The Pentagon has praised Claude's capabilities as "exquisite."[1]
And yet the same department that apparently cannot operate without Claude has declared the company that built it a threat to national security.
The logical tension is severe. Either Claude is safe and capable enough to guide lethal military operations - in which case the "supply chain risk" designation is a political act, not a security judgment - or the Pentagon is knowingly running a national security risk in live combat by relying on a vendor it has officially blacklisted. Neither reading reflects well on the administration's stated rationale.
What the facts suggest is something simpler: the designation was a coercive instrument, not a considered security assessment. The government wanted Anthropic to capitulate. Anthropic did not. The "supply chain risk" label was the price.
Anthropic is seeking declaratory and injunctive relief - meaning it wants a court to formally declare the designation unlawful and issue an order blocking its enforcement. The First Amendment retaliation claim is the strongest ground, but also the most novel: courts have not squarely addressed whether a company's published usage policy constitutes protected speech that the government cannot punish through regulatory action of this kind.
The case will test whether the judiciary is willing to scrutinize the national security justifications of an administration that has shown a consistent pattern of deploying such justifications against domestic political opponents. It will also determine whether the supply-chain risk statute - a tool designed for the Huaweis of the world - can be stretched to cover a San Francisco AI company whose crime was publishing a terms-of-service document the Secretary of War found inconvenient.
Whatever the outcome, the chilling effect Ball warned about is already underway. Every AI company with government contracts now understands the implicit message: maintain safety policies the government dislikes, and the machinery of national security law may be turned against you. That is a signal with consequences that will outlast any court ruling.