Yann LeCun has spent years insisting that the AI industry is climbing the wrong mountain. On March 9, he raised $1.03 billion to prove it.
AMI Labs - short for Advanced Machine Intelligence - announced a seed round of that size at a $3.5 billion pre-money valuation, reportedly one of the largest seed rounds in European venture history.[1] The Paris-headquartered lab, co-founded by LeCun and Alexandre LeBrun, founder of the digital health company Nabla, is building what it calls world models: AI systems designed to understand physical reality, reason about it, and plan within it - rather than predict the next token in a sequence.
The distinction matters. It is also, at this stage, largely theoretical.
LeCun's case against large language models is well-established and deliberately provocative. He has argued for years that LLMs are fundamentally limited by their architecture: they are trained to predict text, and text alone cannot encode the kind of grounded, causal, sensorimotor understanding that underlies genuine intelligence. Hallucination, he contends, is not an engineering bug to be patched - it is a structural consequence of models that lack any internal model of reality against which to check their outputs.
LeBrun arrived at the same conclusion from the clinic. As co-founder and former CEO of Nabla, which deploys AI to medical professionals, he found that LLM hallucinations were not a nuisance but a potential liability - one with life-threatening stakes.[1] "AMI Labs is a very ambitious project, because it starts with fundamental research," LeBrun told TechCrunch. "It's not your typical applied AI startup that can release a product in three months, have revenue in six months, and make $10 million in [annual recurring revenue] in 12 months."[1]
The technical foundation for AMI Labs predates the company by three years. In a June 2022 position paper, "A Path Towards Autonomous Machine Intelligence," LeCun proposed JEPA - the Joint Embedding Predictive Architecture - as an alternative to generative models.[2] Where a generative model like a language model or diffusion model learns to reconstruct or produce outputs in observable space (text, pixels), JEPA learns to make predictions in an abstract representation space, deliberately discarding unpredictable details about the world in favor of structured, higher-level representations.
The practical implication is significant. Real-world sensor data - video, audio, proprioceptive signals from robots - is largely unpredictable at the pixel or sample level. A generative model trained to reproduce such data must either hallucinate the unpredictable parts or fail. A JEPA-based world model, by operating in representation space rather than output space, sidesteps the problem: it only needs to predict what is predictable, encoding uncertainty rather than papering over it.
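The contrast between the two objectives can be made concrete in a few lines of numpy. The sketch below is purely illustrative - the shapes, the fixed linear-tanh encoder, and the identity "predictor" are toy assumptions of mine, not AMI's or Meta's actual JEPA training setup. It computes a generative loss in pixel space, where injected noise sets an irreducible floor, and a JEPA-style loss in the encoder's representation space, where much of that noise has been discarded.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    """Toy encoder: a fixed linear map plus tanh, standing in for a learned network."""
    return np.tanh(x @ W)

# Hypothetical sizes: 64-"pixel" frames, 8-dimensional representations.
W = rng.normal(scale=0.05, size=(64, 8))

frame_t = rng.normal(size=64)               # observed frame at time t
noise = rng.normal(scale=0.5, size=64)      # unpredictable detail in the world
frame_next = frame_t + noise                # next frame = predictable signal + noise

# Generative objective: reconstruct the next frame pixel by pixel.
# The noise term is irreducible, so this loss has a floor set by the noise variance.
pred_pixels = frame_t                       # identity predictor, for illustration only
generative_loss = np.mean((pred_pixels - frame_next) ** 2)

# JEPA-style objective: predict the *representation* of the next frame.
# The encoder compresses away much of the pixel-level noise, so the prediction
# target is a smoother, more predictable object.
z_pred = encoder(frame_t, W)                # predictor output in latent space
z_next = encoder(frame_next, W)             # target representation
jepa_loss = np.mean((z_pred - z_next) ** 2)

print(f"pixel-space loss:          {generative_loss:.3f}")
print(f"representation-space loss: {jepa_loss:.3f}")
```

In this toy setup the representation-space loss comes out well below the pixel-space loss, which is the whole point: the encoder is free to throw away exactly the details that cannot be predicted, so the model is never penalized for declining to hallucinate them.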
AMI Labs' stated goal is to build systems that "(1) understand the real world, (2) have persistent memory, (3) can reason and plan, (4) are controllable and safe" - a direct translation of LeCun's 2022 cognitive architecture into a commercial research agenda.[3] Target applications include industrial process control, automation, wearable devices, robotics, and healthcare.
The founding team is notably deep for a lab at this stage. LeCun serves as chairman; LeBrun as CEO. Laurent Solly, formerly Meta's VP for Europe, is COO. The scientific leadership includes Saining Xie as chief research and innovation officer, Pascale Fung as chief science officer, and Michael Rabbat - a longtime Meta AI Research scientist - as VP of world models.[1]
The investor list reflects the breadth of the bet. The round was co-led by Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions, and includes strategic backing from Nvidia, Samsung, Toyota Ventures, and Temasek, as well as individual investors Tim Berners-Lee, Jim Breyer, Mark Cuban, and Eric Schmidt.[1] The mix of compute providers (Nvidia), industrial conglomerates (Samsung, Toyota), and state-backed capital (Singapore's Temasek, French public investment bank Bpifrance) signals an unusually explicit alignment between AMI's research agenda and the needs of industries where reliability is non-negotiable.
Operations are planned across four cities: Paris (headquarters), New York (where LeCun holds a professorship at NYU), Montreal (Rabbat's base), and Singapore (both for talent and Asian market access).[1]
AMI Labs has no near-term revenue plans. LeBrun is explicit: this is foundational research with a multi-year horizon before commercial applications become viable. Nabla is named as the first disclosed partner for early model access, but no product timeline has been given.
The honest counterargument is that the gap between JEPA as a theoretical architecture and JEPA as a deployed intelligence system is enormous - and largely uncharted. LeCun's 2022 paper is a position paper, not a proof of concept. I-JEPA and V-JEPA, the most mature implementations, have shown promise on vision tasks but remain far from the kind of general world-modeling AMI is pursuing. Meanwhile, frontier LLM labs are not standing still: multimodal models are increasingly trained on video and sensor data, and the line between "language model" and "world model" is blurring in practice, whatever the theoretical distinctions.
LeBrun appears aware of the risk that "world models" becomes a marketing label before it becomes a working paradigm. "My prediction is that 'world models' will be the next buzzword," he told TechCrunch. "In six months, every company will call itself a world model to raise funding."[1]
The implication is that AMI Labs intends to be the real thing rather than the marketing appropriation of the idea. Whether a billion dollars and LeCun's intellectual authority can bridge the gap between a 2022 position paper and a production-grade AI system that genuinely understands the world remains, for now, an open question - and perhaps the most consequential one in AI research.