Average transparency scores for major AI developers fell from 58 to 40 in a single year, reversing two years of measured progress. The companies building the most consequential models have decided, collectively, that the public does not need to know how they work.
The Stanford AI Index 2026 documents record investment, rapid capability gains, and a narrowing U.S.-China model gap. It also documents an 89 percent collapse in AI scholar immigration, the dismantling of the government's only frontier model evaluation body, and a generation of entry-level workers being displaced before they form. The U.S. may still be winning. Whether anyone in power is paying attention to the score is a different question.
Claude Mythos Preview can autonomously find and exploit zero-day vulnerabilities in every major operating system and browser. Rather than shelve it, Anthropic has handed it to a coalition of 50-plus firms under Project Glasswing. The strategy is defensible. Whether it holds depends on who else is building the same thing - and Washington's posture toward the company that built it.
Employment for workers aged 22 to 25 in AI-exposed occupations has fallen 16 percent since ChatGPT's release, while older workers in the same fields have held steady or grown. The entry-level job is disappearing not through mass layoffs but through a quiet failure to hire - and the long-run consequences for the talent pipeline have not yet been priced in.
The Maven Smart System, built by Palantir and integrated with Anthropic's Claude, compressed the US targeting cycle from hours to seconds during Operation Epic Fury. Understanding how that pipeline actually works - and what it cannot do - is essential to evaluating the accountability questions the campaign has raised.
A Harvard Business School working paper analyzing nearly all U.S. job postings from 2019 to 2025 is the most rigorous accounting yet of generative AI's labor market impact. The headline numbers are striking - but three separate research teams find reasons for both alarm and restraint.
The White House has released a sweeping legislative blueprint that would strip states of authority to regulate AI development, handing the industry a single, minimally burdensome federal standard. The move is the culmination of a year-long campaign to consolidate AI governance in Washington - but getting Congress to actually pass it is another matter.
A prompt injection hidden in a GitHub README was enough to compromise Snowflake's Cortex coding agent, bypass its human-approval system, escape its sandbox, and wipe a victim's entire Snowflake database. The attack, now patched, exposes structural vulnerabilities common to agentic AI systems far beyond Snowflake.
In the absence of federal AI legislation, states have spent three years building their own frameworks - and the results are now colliding with a coordinated White House counteroffensive. From Utah's nine-bill sprint to the DOJ's new AI Litigation Task Force, the battle over who governs artificial intelligence in America is entering its most consequential phase.
Two days after suing the Defense Department over its "supply chain risk" designation, Anthropic launched a new research institute led by co-founder Jack Clark. The timing is not accidental: the company is building its public-benefit argument into an institution precisely as the federal government tries to dismantle its credibility.
The Trump administration is drafting rules that would require a U.S. government license for virtually every overseas sale of advanced AI chips, regardless of the buyer's location. The tiered framework - covering deployments from under 1,000 chips to installations of 200,000 or more - marks a fundamental break from the Biden era's ally-exemption model, and raises questions about whether chip access is becoming a trade lever as much as a security tool.
Anthropic filed two federal lawsuits on March 9 against the Department of War and more than a dozen other agencies after being designated a "supply chain risk" - a label previously reserved for foreign adversaries. The company's refusal to strip safety guardrails from Claude has set up a constitutional confrontation that cuts to the core of how the U.S. government treats its own AI industry.
On February 3, 2026, $285 billion of market capitalization vanished from software and financial stocks in a single session. The trigger was an AI agent announcement. The governance response has barely begun.
Anthropic has published a detailed sabotage risk report for Claude Opus 4.6 - its first under the new RSP v3.0 Risk Report framework - concluding the model poses "very low but not negligible" risk of autonomous actions that could contribute to catastrophic outcomes. The document is notable both for what it finds and for the candor with which it describes the limits of its own methods.
Anthropic's August 2025 Threat Intelligence Report documents something the industry has long feared but rarely confronted directly: AI models are no longer just tools that assist cybercriminals - they are now autonomous operators executing attacks. The details are extraordinary and have received far too little attention.
A Chinese state-sponsored group used Claude to execute a largely autonomous cyberattack on 30 critical organizations - with human operators present for just 20 minutes. This was not a warning shot. It was a proof of concept.