The Daily Signal — May 8, 2026
Trial bombshells, enterprise land grabs, and a research warning that AI-automated alignment might make safety work less safe
The Musk v. Altman trial dominated the week with testimony that turned OpenAI's founding mythology inside out. Elsewhere, the two leading AI labs raced each other into enterprise services, Meta's countdown to its largest 2026 layoff wave ticked closer, and two notable papers landed on arXiv - one showing AI accelerating hard mathematics, the other warning that using AI to automate safety research could quietly make things worse.
Industry
Brockman's testimony puts a number on OpenAI's compute bill: $50B in 2026, up from $30M in 2017
Greg Brockman's second day on the stand in the Musk v. Altman trial produced the week's starkest data point: OpenAI expects to spend roughly $50 billion on computing power this year, compared to $30 million when the company launched. Brockman also revealed that Musk demanded a 51% controlling stake and the CEO title during the nonprofit era, and that when founders refused, Musk told the room he could start a rival "in one tweet." Week two concluded with testimony from Shivon Zilis, a former OpenAI board member and mother of four of Musk's children.
Industry
OpenAI and Anthropic debut rival enterprise JVs on the same day, targeting private-equity portfolios
In near-simultaneous moves, Anthropic announced a $1.5 billion joint venture backed by Blackstone, Goldman Sachs, and Hellman & Friedman on May 4, while Bloomberg reported hours earlier that OpenAI was raising $4 billion for its own "Deployment Company" at a $10 billion valuation. Both ventures adopt Palantir's forward-deployed-engineer playbook - embedding teams inside portfolio companies - and signal that the frontier labs now want a direct cut of enterprise implementation revenue, not just API fees.
Labor
Meta's May 20 layoff wave arrives in days: 8,000 jobs cut, 6,000 roles scrapped, with more planned for H2
With the first wave of Meta's 2026 restructuring now less than two weeks away, the scope is coming into focus. The company is eliminating approximately 8,000 positions, canceling 6,000 open requisitions, and reorganizing surviving teams into AI-focused "pods." The cuts are structural, not performance-based, and are funded by a $115–135 billion AI infrastructure budget - a 73% increase over 2025. A second wave is planned for H2. Executives, meanwhile, are eligible for stock options worth up to $921 million each if Meta hits a $9 trillion market cap by 2031.
Industry
Microsoft's AI business crosses $37B annualized run rate as Copilot surpasses 20 million paid seats
Microsoft's fiscal Q3 results, reported April 29, put its AI business run rate above $37 billion and confirmed Copilot has cleared 20 million paid seats, up from 15 million in January. Azure revenue grew 40% year-over-year, with management attributing roughly half of that growth to AI services. The company has committed to $190 billion in capital expenditure for calendar year 2026, nearly all of it AI infrastructure.
Research
Google researchers introduce AI Co-Mathematician, score 48% on FrontierMath Tier 4 — a new record
A paper posted to arXiv on May 7 by a large Google team introduces an agentic "AI co-mathematician" - a stateful workbench that helps researchers ideate, search the literature, run computational experiments, and construct proofs. In early tests it helped solve previously open problems and surfaced overlooked references. On FrontierMath Tier 4, a benchmark of research-level problems designed to resist memorization, the system scored 48%, the highest mark recorded by any AI evaluated on the benchmark.
Research
New paper: using AI agents to automate alignment research may produce "compelling but catastrophically misleading" safety assessments
A preprint by Bowkis, Buhl, Pfau, and Geoffrey Irving - posted May 7 - argues that the leading proposal for aligning superintelligent AI (using AI agents to do alignment research) contains a structural flaw: alignment involves tasks with fuzzy, hard-to-verify evaluation criteria, so agent-generated errors will concentrate precisely where human reviewers are least likely to catch them. The authors warn that even non-scheming agents could, through correlated mistakes and opaque reasoning, produce safety assessments that appear sound but are catastrophically wrong.