On March 9, Anthropic filed two federal lawsuits challenging the Pentagon's unprecedented decision to designate it a supply chain risk. On March 11, the company announced the Anthropic Institute. The two-day gap between those events is the most important thing to understand about both of them.
The Institute is real research infrastructure - not a press release dressed up as policy. It consolidates three internal teams: the Frontier Red Team, which probes the outer limits of Anthropic's own models for dangerous capabilities; Societal Impacts, which studies how AI is actually being used in the world; and Economic Research, which tracks AI's effect on labor markets and the broader economy.[1] These teams existed before. What's new is their consolidation under a single roof, their elevation to a named external-facing unit, and their mandate to publish findings for researchers and the public - not just inform internal decisions.
The Institute is led by Jack Clark, one of Anthropic's seven co-founders, who has taken a new title: Head of Public Benefit.[1] Clark was previously Anthropic's Head of Policy, and before that helped build OpenAI's communications and policy functions. His move into this role is a signal about where Anthropic believes the leverage points are. "AI progress continues to accelerate and the stakes are getting higher," Clark wrote on X, "so I've changed my role at Anthropic to spend more time creating information for the world about the challenges of powerful AI."[2]
The founding hires reflect the Institute's interdisciplinary ambitions. Matt Botvinick - a Resident Fellow at Yale Law School, former Senior Director of Research at Google DeepMind, and former professor of Neural Computation at Princeton - joins to lead research on AI and the rule of law.[1] Anton Korinek, a University of Virginia economics professor on leave, will lead the economic research track studying how transformative AI reshapes labor markets.[1] Zoë Hitzig, a researcher who previously studied AI's social and economic impacts at OpenAI, joins to bring economic analysis to model training and development.[3]
Botvinick's hire is the most pointed. The Institute is explicitly working on "how powerful AI will interact with the legal system" - and Anthropic is currently litigating in two federal courts. Whether or not that work is kept entirely separate from the lawsuits, the optics of hiring a Yale law fellow to lead AI-and-rule-of-law research during an active constitutional dispute are impossible to miss.
The dispute with the Department of War began when Anthropic refused to waive the safeguards it had built into its July 2025 classified-network contract - specifically the prohibitions on mass domestic surveillance and fully autonomous weapons targeting.[4] The Pentagon responded by designating Anthropic a supply chain risk on March 3, the first such designation ever applied to an American company. President Trump had already directed all federal agencies to phase out Anthropic's technology.[4]
The designation is more than symbolic. Under DFARS 252.239-7018, any government contractor using a designated supply chain risk must actively mitigate that risk - which in practice means reconsidering or terminating use of Anthropic's products.[5] The commercial exposure is real. At a court hearing on March 10, Anthropic's attorney told U.S. District Judge Rita F. Lin that more than 100 enterprise customers had contacted the company to express doubt about continuing their work with Anthropic, and that a financial services firm had paused negotiations over a $50 million contract. Anthropic's CFO estimated that harm to 2026 revenue could range from hundreds of millions to billions of dollars.[6]
Against that backdrop, the Anthropic Institute is not merely a research initiative. It is Anthropic's argument, institutionalized. The company's position in the lawsuit is essentially that its safety restrictions are protected speech and legitimate policy judgment. The Institute is the standing infrastructure for making that argument to the public, to policymakers, and to courts - continuously, and with credentialed outside researchers attached to it.
The Institute announcement came packaged with a separate expansion of Anthropic's public policy team. Sarah Heck, who leads external affairs, will now also head the public policy function. Anthropic confirmed it is opening its first Washington, D.C. office this spring and expanding its global policy presence.[1]
The timing of the D.C. office is, again, not incidental. The company is defending itself in a D.C. federal court while simultaneously building the lobbying and research infrastructure needed to influence how Congress and regulators think about AI safety obligations. The Institute provides the intellectual content; the expanded policy team provides the delivery mechanism.
What Anthropic is wagering is that the credibility of its safety positions depends on being seen as a genuine research institution - not just a company with terms of service and a PR department. The Institute's stated mission is candid disclosure: to tell the world what Anthropic is learning as it builds frontier systems, including findings that may be uncomfortable.[1]
"The Institute has a unique vantage point: it has access to information that only the builders of frontier AI systems possess. It will use this to its full advantage, reporting candidly about what we're learning about the shape of the technology we're making."[1]
That is a meaningful promise - and one that will be tested quickly. The Institute is also working on forecasting AI progress, a domain where Anthropic's predictions are inevitably self-serving given the company's commercial interest in the perception that transformative AI is imminent. How independently the Institute operates from Anthropic's business and legal strategy will determine whether it earns the credibility it is designed to project.
For now, what is clear is that Anthropic has chosen to fight on two fronts simultaneously: in federal court, and in the court of public opinion. The Pentagon fight forced the question of whether AI safety restrictions are a legitimate corporate value or a political liability. The Anthropic Institute is Anthropic's answer - staffed with credentialed outsiders, and headquartered, soon, in the city where the question will ultimately be decided.
[1] Anthropic, "Introducing The Anthropic Institute," March 11, 2026.
[2] Jack Clark, post on X announcing his role change, March 2026.
[3] Anthropic, "Introducing The Anthropic Institute," March 11, 2026 (Zoë Hitzig hire detail).
[4] Mayer Brown, "Anthropic Supply Chain Risk Designation Takes Effect," March 10, 2026.
[5] Mayer Brown, client alert on DFARS 252.239-7018 supply chain risk contractor obligations, March 10, 2026.
[6] Bloomberg Law, "Anthropic Tells Judge Billions at Stake If US Shuns AI Tool," March 11, 2026 (citing March 10 court hearing, U.S. District Court, N.D. Cal.).