====== Yann LeCun AMI Labs ======

**Advanced Machine Intelligence Labs (AMI Labs)** is a Paris-based AI startup co-founded by Turing Award winner **Yann LeCun** after his departure from Meta. Announced on March 9, 2026, AMI Labs raised **$1.03 billion in seed funding** at a **$3.5 billion pre-money valuation**, Europe's largest seed round on record. The company is building **world models** based on LeCun's Joint Embedding Predictive Architecture (JEPA), targeting industrial, robotic, and healthcare applications where the limitations of large language models are most consequential. ((Source: [[https://techcrunch.com/2026/03/09/yann-lecuns-ami-labs-raises-1-03-billion-to-build-world-models/|TechCrunch — AMI Labs Raises $1.03B]]))

===== Founding and Team =====

LeCun co-founded AMI Labs with **Alexandre LeBrun**, who previously founded Wit.ai (acquired by Facebook in 2015) and later served as CEO of Nabla, a digital health startup. Both founders reached the same conclusion: LLMs hallucinate, and that hallucination problem represents a hard ceiling, especially in safety-critical domains such as healthcare. ((Source: [[https://techcrunch.com/2026/03/09/yann-lecuns-ami-labs-raises-1-03-billion-to-build-world-models/|TechCrunch — AMI Labs]]))

The research team includes:

* **Saining Xie**, computer vision researcher
* **Pascale Fung**, NLP and multilingual AI expert
* **Michael Rabbat**, distributed optimization researcher

AMI Labs' first partner is **Nabla**, LeBrun's digital health company, signaling healthcare as an early application domain.

===== The World Models Vision =====

LeCun has long argued that the current path of large language models, built on next-token prediction, is a **dead end for achieving human-level AI**. While LLMs excel at mimicking language patterns, they lack a fundamental understanding of the physical world.
((Source: [[https://howaiworks.ai/blog/le-world-model-jepa-architecture|HowAIWorks — LeWorldModel JEPA Breakthrough]]))

World models, by contrast, learn internal representations of how the physical world works:

* How objects move, interact, and behave
* Cause-and-effect relationships in physical environments
* Spatial reasoning and physical intuition

This approach aims to produce AI that can plan, reason about consequences, and interact with the real world, capabilities that text-based models fundamentally lack.

===== JEPA Architecture =====

AMI Labs' technical foundation is the **Joint Embedding Predictive Architecture (JEPA)**, a self-supervised learning framework developed by LeCun and his team at Meta AI:

* **Core principle:** instead of predicting raw pixels or tokens, JEPA predicts abstract representations in a learned embedding space
* **Key advantage:** avoids the "collapse problem," in which world models produce trivial or degenerate predictions
* **V-JEPA:** a video-focused variant that learns from video data to understand temporal dynamics and physical interactions

==== LeWorldModel (LeWM) ====

On March 24, 2026, LeCun's team published **LeWorldModel (LeWM)**, the first end-to-end JEPA system trained from **raw pixels** that successfully solves the collapse problem in world models. ((Source: [[https://howaiworks.ai/blog/le-world-model-jepa-architecture|HowAIWorks — LeWorldModel JEPA Breakthrough]])) ((Source: [[https://www.marktechpost.com/2026/03/23/yann-lecuns-new-leworldmodel-lewm-research-targets-jepa-collapse-in-pixel-based-predictive-world-modeling/|MarkTechPost — LeWorldModel Research]])) This represents a significant technical milestone, as previous attempts at building world models from pixels suffered from representation collapse.
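For intuition, the core JEPA idea described above, predicting in a learned embedding space rather than in pixel space, can be sketched in a few lines of NumPy. This is a toy illustration under stated assumptions (linear "encoders" and an exponential-moving-average target as the anti-collapse mechanism); it is not AMI Labs' or Meta's implementation, and every name in it (''W_context'', ''jepa_loss'', ''ema_update'') is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a flattened 8x8 image patch and a 16-d embedding space.
D_IN, D_EMB = 64, 16

W_context = rng.normal(scale=0.1, size=(D_IN, D_EMB))  # online (trained) encoder
W_target = W_context.copy()                            # slow EMA copy; no gradients
W_pred = np.eye(D_EMB)                                 # predictor in embedding space

def jepa_loss(x_context, x_target):
    """Predict the target patch's *embedding* from the context patch.

    The loss lives in representation space, never in pixel space: the
    model is not asked to reconstruct raw pixels.
    """
    z_ctx = x_context @ W_context      # embed the visible context
    z_tgt = x_target @ W_target        # embed the masked target (treated as constant)
    z_hat = z_ctx @ W_pred             # predict the target's embedding
    return float(np.mean((z_hat - z_tgt) ** 2))

def ema_update(momentum=0.99):
    """Move the target encoder slowly toward the online encoder.

    A slow, gradient-free target is one standard guard against collapse,
    i.e. both encoders degenerating to the same constant output.
    """
    global W_target
    W_target = momentum * W_target + (1.0 - momentum) * W_context

x_context = rng.normal(size=(1, D_IN))  # visible region of an image
x_target = rng.normal(size=(1, D_IN))   # masked region to be predicted
loss = jepa_loss(x_context, x_target)
ema_update()
print(f"embedding-space loss: {loss:.4f}")
```

The design point the sketch tries to convey: because the loss compares embeddings rather than pixels, something must stop the trivial solution where every patch maps to the same vector. The EMA target used here is one common trick; LeWM's reported contribution is solving that collapse problem end-to-end from raw pixels.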
===== How This Differs from LLMs =====

^ Aspect ^ Large Language Models ^ World Models (JEPA) ^
| Training data | Text tokens | Visual/physical data, video, sensor input |
| Prediction target | Next token | Abstract representations of future states |
| World understanding | Statistical patterns in language | Physical causality and spatial reasoning |
| Hallucination | Inherent to architecture | Mitigated by grounding in physical reality |
| Applications | Text generation, coding, chat | Robotics, healthcare, autonomous systems |

===== Funding Details =====

The $1.03 billion seed round was co-led by:

* **Cathay Innovation**
* **Greycroft**
* **Hiro Capital**
* **HV Capital**
* **Bezos Expeditions**

Additional participation came from **NVIDIA** and numerous venture firms. ((Source: [[https://www.hpcwire.com/aiwire/2026/03/11/yann-lecuns-ami-secures-1b-seed-to-develop-ai-world-models/|HPCwire — AMI Secures $1B Seed]]))

===== LeCun's Criticism of Current AI =====

LeCun has been a vocal critic of the LLM paradigm, arguing that:

* Next-token prediction cannot lead to genuine understanding
* LLMs are "stochastic parrots" that manipulate symbols without comprehension
* Autoregressive generation is fundamentally limited for planning and reasoning
* True intelligence requires learning from sensory experience, not just text

AMI Labs is his attempt to prove this thesis by building an alternative path to advanced AI. CEO LeBrun has been explicit that AMI Labs is **fundamental research with no near-term product or revenue**, potentially a 5-10 year endeavor. ((Source: [[https://techcrunch.com/2026/03/09/yann-lecuns-ami-labs-raises-1-03-billion-to-build-world-models/|TechCrunch — AMI Labs]])) LeBrun has also predicted: "My prediction is that 'world models' will be the next buzzword. In six months, every company will call itself a world model to raise funding."
((Source: [[https://techcrunch.com/2026/03/09/yann-lecuns-ami-labs-raises-1-03-billion-to-build-world-models/|TechCrunch — AMI Labs]]))

===== See Also =====

* [[world_models|World Models]]
* [[jepa|Joint Embedding Predictive Architecture]]
* [[yann_lecun|Yann LeCun]]

===== References =====