
On April 8, Elon Musk posted a list to X - seven models currently in simultaneous training on Colossus 2, the supercluster that moved under SpaceX's umbrella when SpaceX acquired xAI in a $1.25 trillion share-exchange deal announced February 2.[1] The list ended with a five-word caption: "Some catching up to do."
That framing is worth interrogating. The company currently trailing in Musk's own assessment is running the largest publicly confirmed training infrastructure in the industry, burning through an estimated 400 megawatts of dedicated power, built on roughly 550,000 NVIDIA Blackwell GPUs at an estimated hardware cost of $18 billion.[2] If that is catching up, what does leading look like?
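Those headline figures at least hang together. A quick sanity check - the per-GPU numbers below are derived from the article's own totals, not independently sourced:

```python
# Sanity-check the reported Colossus 2 figures against each other.
gpus = 550_000            # reported Blackwell GPU count
hardware_cost = 18e9      # reported hardware cost, USD
power_watts = 400e6       # reported dedicated power, 400 MW

cost_per_gpu = hardware_cost / gpus    # ~$32,700 per GPU
watts_per_gpu = power_watts / gpus     # ~727 W per GPU, all-in

print(f"Implied cost per GPU:  ${cost_per_gpu:,.0f}")
print(f"Implied power per GPU: {watts_per_gpu:.0f} W")
```

Roughly $33,000 per GPU is plausible for Blackwell-class hardware, and about 727 W per GPU suggests the 400 MW figure reflects average draw across the fleet rather than every accelerator running at peak simultaneously.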
Colossus 2 is training the following seven models concurrently: Grok Imagine V2 (image and video generation), two variants of a 1-trillion-parameter model, two variants at 1.5 trillion parameters, one 6-trillion-parameter model, and one 10-trillion-parameter model.[1] No other lab has publicly confirmed a training run at either the 6T or 10T scale. GPT-5.4, currently the leading model on independent benchmarks from Artificial Analysis, does not have a publicly disclosed parameter count; nor do Claude Opus 4.7 and Gemini 3.1 Pro.
The cluster operates under the "MACROHARD" brand - a name with a more complicated history than first appears. xAI filed the trademark in August 2025 and had it painted on the building's roof by October, but the project's formal public launch came on March 11, 2026, when Musk announced it as a joint Tesla-xAI initiative.[8] Also called "Digital Optimus," MACROHARD pairs xAI's Grok as a high-level reasoning layer with a Tesla-built AI agent that processes real-time screen video and executes keyboard and mouse actions - an architecture Musk compared to Kahneman's dual-process theory, with the Tesla agent handling fast reaction (System 1) and Grok handling deliberate reasoning (System 2). The stated ambition: a system capable of emulating the functions of entire software companies. The legal complication: Tesla shareholders are actively suing Musk for breach of fiduciary duty over the founding of xAI, and an announcement in which xAI's technology is explicitly the "brain" directing Tesla hardware materially strengthens that case.[9]
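None of MACROHARD's internals are public, but the claimed division of labor maps onto a familiar control-loop pattern. The sketch below is a guess at the shape, not a description of the actual system; every name in it is invented for illustration.

```python
# Illustrative sketch of a Kahneman-style dual-process agent loop.
# All class and method names are hypothetical - this is not xAI or Tesla code.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str       # e.g. "click", "type", "wait"
    payload: dict

class FastActor:
    """System 1: consumes screen frames, emits low-latency UI actions."""
    def react(self, frame: bytes, plan_step: str) -> Action:
        # A real implementation would run a vision-action model here.
        return Action(kind="click", payload={"target": plan_step})

class SlowPlanner:
    """System 2: deliberates over the goal and revises the high-level plan."""
    def plan(self, goal: str, history: list) -> list:
        # A real implementation would call a reasoning model (Grok, per Musk).
        return [f"step toward: {goal}"]

def run(goal: str, frames):
    planner, actor, history = SlowPlanner(), FastActor(), []
    steps = planner.plan(goal, history)
    for frame in frames:
        if not steps:                    # replan only when the plan runs out
            steps = planner.plan(goal, history)
        action = actor.react(frame, steps.pop(0))  # fast path runs every frame
        history.append(action)
    return history
```

The point of the split is latency: the System 1 model has to keep pace with the screen's frame rate, while the System 2 model is consulted only at plan boundaries, where a slower and more expensive call is affordable.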
The 10T model is the headline figure, but the 6T variant may be the more immediately relevant product. Community reporting and xAI's own roadmap signals suggest the 6T model - almost certainly a Mixture-of-Experts architecture - is the leading candidate to become Grok 5, the next public release.[3] A Q2 2026 public beta is the most widely cited estimate, though xAI has made no official announcement.
The history of frontier AI is littered with impressive-sounding numbers that did not survive contact with benchmarks. Parameter count, in isolation, tells you almost nothing about what a model can do. What matters more: how many parameters are active per token at inference time (a function of how an MoE routing mechanism allocates experts), the quality and composition of the training data, and the amount of inference-time compute available during deployment.
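To make the active-parameter point concrete, here is a minimal top-k routing sketch. The expert count, routing width, and layer sizes are arbitrary illustrative choices, not figures disclosed for any xAI model:

```python
import numpy as np

# Minimal top-k MoE routing sketch. Expert count, top-k, and layer sizes
# are illustrative assumptions, not disclosed figures for any xAI model.
n_experts, top_k, d_model, d_ff = 64, 2, 8192, 32768

def route(x, gate_w):
    """Select the top_k experts for one token from its gating logits."""
    logits = x @ gate_w                    # shape: (n_experts,)
    return np.argsort(logits)[-top_k:]     # indices of the chosen experts

x = np.random.randn(d_model)               # one token's hidden state
gate_w = np.random.randn(d_model, n_experts)
chosen = route(x, gate_w)

# Each expert is a two-matrix FFN block: d_model x d_ff in, d_ff x d_model out.
params_per_expert = 2 * d_model * d_ff
total = n_experts * params_per_expert       # what the headline count counts
active = top_k * params_per_expert          # what this token actually touches

print(f"experts chosen      : {chosen}")
print(f"total expert params : {total/1e9:.1f}B")
print(f"active per token    : {active/1e9:.1f}B ({active/total:.1%})")
```

At top-2 routing over 64 experts, each token touches about 3% of the expert parameters. Scale the same geometry to a 10T-parameter model and the per-token compute could correspond to only a few hundred billion active parameters - which is why the headline count alone predicts so little.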
xAI's current flagship, Grok 4.20 (launched in beta February 17, 2026, with full release on March 3), illustrates both the ceiling and the floor.[4] On the Artificial Analysis Intelligence Index - an independent composite benchmark covering ten evaluations including Humanity's Last Exam, GPQA Diamond, and SciCode - Grok 4.20 scores 49 out of 100. That places it 7th on the Artificial Analysis homepage leaderboard snapshot and 12th of 133 models on the full index, behind a three-way tie at the top (Claude Opus 4.7, Gemini 3.1 Pro Preview, and GPT-5.4, all at 57), Muse Spark and Claude Sonnet 4.6 (both at 52), and GLM-5.1 (51).[5] Where Grok 4.20 does lead is on raw output speed - 168 to 176 tokens per second, among the three fastest models overall on the Artificial Analysis leaderboard and ahead of every model in the top intelligence tier - and on non-hallucination rate, where an independently verified 83% puts it ahead of all three models tied at the summit of the intelligence index.[4]
The gap between parameter scale and benchmark performance is not new, but it is newly visible at this level of investment. A 10T parameter model that underperforms a lean, well-tuned 400B MoE system on real-world coding or reasoning tasks would represent one of the most expensive lessons in the industry's short history.
Raw benchmark competition is not the only reason Colossus 2 exists at this scale. The acquisition of xAI by SpaceX, announced February 2, was explicitly justified in part by the ambition to build "orbital data centers" - compute infrastructure deployed in space, delivered via Starlink, to serve global AI workloads.[6] That goal requires a ground-based training operation large enough to generate the models those orbital nodes would serve.
There is also a near-term revenue angle that has received less attention than the raw compute announcements. Reports emerged this week that xAI is renting tens of thousands of Colossus 2 GPUs to Cursor, the AI coding startup, to train Cursor's Composer 2.5 model.[7] Back-of-envelope estimates put the potential value of a 50,000-GPU arrangement at $75 million to $200 million per month - which implies effective rates of roughly $2 to $5.50 per GPU-hour, the committed-volume end of prevailing rental rates that run from $2.50 to $18 per GPU-hour at list. If the deal scales, that revenue could meaningfully offset Colossus 2's operating costs while xAI's own frontier models complete training. It is an unusual posture: building the world's largest private AI cluster and then renting idle capacity to a competitor to reach breakeven.
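The arithmetic behind that range is worth writing out, because it is sensitive to the effective rate. A minimal calculation, assuming full utilization (the GPU count and list rates come from the reporting; everything else is assumption):

```python
# Gross monthly revenue for a hypothetical 50,000-GPU rental arrangement.
# GPU count and rate range come from the reporting; utilization is assumed.
gpus = 50_000
hours_per_month = 24 * 30   # 720

def monthly_gross(rate_per_gpu_hour, utilization=1.0):
    return gpus * hours_per_month * rate_per_gpu_hour * utilization

for rate in (2.50, 5.50, 18.00):
    print(f"${rate:>5.2f}/GPU-hr -> ${monthly_gross(rate)/1e6:,.0f}M/month")
# $2.50 -> $90M, $5.50 -> $198M, $18.00 -> $648M
```

Even at the bottom of the list-rate range, 50,000 GPUs running continuously would gross about $90 million a month, so the reported $75-200 million estimate bakes in meaningful discounting, idle time, or both.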
Musk has stated publicly that his estimate of Grok 5 achieving AGI stands at 10% - a number calibrated to generate attention while technically remaining a minority probability.[3] More concretely, the 10T model would need to demonstrate that raw scale, when combined with Colossus 2's training data density and multi-agent inference architecture, produces qualitatively new capabilities rather than incremental benchmark improvements.
The precedent is not encouraging. The scaling hypothesis - that more parameters reliably produce more capable models - has held broadly but not universally, and the industry has spent the last two years chasing inference-time compute and data quality as complementary or even superior levers. OpenAI's GPT-5.4 and Anthropic's Claude Opus 4.7 both compete at the frontier without publicly claiming parameter counts anywhere near 10T. If either outperforms xAI's 10T model on independent benchmarks at a fraction of the inference cost, the case for Colossus 2's capital intensity becomes very hard to make.
The more defensible argument for the scale is systemic rather than model-by-model: a single cluster training seven models simultaneously, from a 1T workhorse to a 10T frontier system, gives xAI the ability to serve a wider range of deployment economics than any lab running a single flagship. That diversity, combined with the GPU rental business, may matter more to the company's financial survival than whether the 10T model tops any particular leaderboard.
The question is not whether 10 trillion parameters represents an engineering achievement - it clearly does. The question is whether it represents a business strategy, and on that count, xAI has not yet made its case.
[1] Elon Musk on X: "SpaceXAI Colossus 2 now has 7 models in training" (April 7, 2026)
[2] NextBigFuture: XAI Training 10 Trillion Parameter Model (April 9, 2026)
[3] FelloAI: Grok 5 Release Date and All We Know So Far (April 2026)
[4] Artificial Analysis: Grok 4.20 0309 v2 - speed, hallucination rate, and benchmark data
[5] Artificial Analysis Intelligence Index v4.0: Frontier model rankings (April 2026)
[6] CNBC: Musk's xAI, SpaceX deal confirmed, valued at $1.25 trillion (February 3, 2026)
[7] Business Insider: Elon Musk's xAI plans to supply AI computing power to Cursor (April 2026)
[8] Technology.org: Elon Musk Launches "Macrohard" - A Tesla-xAI Hybrid Built to Clone Entire Software Companies (March 12, 2026)
[9] Electrek: Musk confirms xAI-Tesla joint "Digital Optimus" project - after saying Tesla didn't need xAI (March 11, 2026)