Yann LeCun Raises $1 Billion to Build AI Beyond Large Language Models

16 Mar 2026, 1:31 pm GMT

AMI Labs — Key Numbers

  • Funding raised: $1.03B
  • Valuation: $3.5B
  • JEPA proposed: 2022


Yann LeCun, Turing Award winner and Chief AI Scientist at Meta, has long been one of the most prominent critics of the large language model paradigm. Now he is putting serious capital behind that conviction. Advanced Machine Intelligence (AMI) Labs, his new venture, has closed a $1.03 billion funding round at a $3.5 billion valuation with a mandate to build AI that learns from the physical world, not from predicting the next word or pixel.

AMI Labs has identified automakers, self-driving car companies, and operators of complex physical systems as its primary commercial targets. But the ambition is broader: to replace the dominant architecture of modern AI with something it argues is fundamentally more capable.

There is this herd effect where everyone in Silicon Valley has to work on the same thing. It does not leave much room for other approaches that may be much more promising in the long term.

— Yann LeCun, The New York Times, January 2026


The Problem with Predicting Tokens

Today's leading AI systems are autoregressive: they predict the next token (a word, a pixel patch, a slice of audio) based on everything that came before. For language, this works well. But in longer reasoning tasks or in the physical world, each prediction samples from a probability distribution, so a single early error compounds exponentially into unreliable outputs. Hallucinations are not a tuning problem; they are an architectural inevitability. As LeCun wrote on X in September 2024:

Pure Auto-Regressive LLMs are a dead end on the way towards human-level AI.

Generative models also waste enormous compute reconstructing irrelevant surface detail: predicting the exact path of a falling leaf rather than understanding the physics behind it.
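The compounding-error argument can be made concrete with a back-of-the-envelope calculation. If each autoregressive step is independently correct with some probability, the chance that a long generation contains no error at all decays exponentially with length. The per-step error rate below is a made-up illustrative figure, not a measured property of any model:

```python
# Illustrative only: assume each generation step is independently correct
# with probability (1 - per_step_error). The probability that n consecutive
# steps are all correct then decays exponentially in n.
def p_all_correct(per_step_error: float, n_steps: int) -> float:
    return (1.0 - per_step_error) ** n_steps

# With a hypothetical 1% per-step error rate, reliability collapses
# as the sequence grows.
for n in (10, 100, 1000):
    print(n, round(p_all_correct(0.01, n), 3))
```

Even a seemingly small per-step error rate leaves a 100-step generation error-free only about a third of the time, which is the intuition behind calling the problem architectural rather than a matter of tuning.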


JEPA: a different approach


AMI Labs is built on the Joint Embedding Predictive Architecture (JEPA), which LeCun proposed in 2022. Rather than reconstructing raw inputs, JEPA maps observations into compressed abstract representations (embeddings of hundreds or thousands of features, versus millions of pixels) and predicts future states in that abstract space. The system learns the rules of physics, spatial reasoning, and cause and effect without wasting compute on background noise.
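The shapes involved make the idea tangible. The toy NumPy sketch below is not AMI Labs' code: the encoder is a fixed random projection standing in for a learned network, and the frame and embedding sizes are arbitrary. What it shows is that the predictor operates entirely in the small latent space and the training signal is a distance between embeddings, never a pixel reconstruction:

```python
import numpy as np

rng = np.random.default_rng(0)

PIXELS = 64 * 64 * 3   # size of a toy raw observation ("frame")
LATENT = 256           # size of the abstract embedding

# Stand-in encoder: a fixed random projection. A real JEPA encoder is a
# learned network; this only illustrates the dimensionalities involved.
W_enc = rng.normal(size=(LATENT, PIXELS)) / np.sqrt(PIXELS)

def encode(frame: np.ndarray) -> np.ndarray:
    return W_enc @ frame

# The predictor maps LATENT -> LATENT, never touching the
# PIXELS-dimensional output a generative model would have to produce.
W_pred = rng.normal(size=(LATENT, LATENT)) / np.sqrt(LATENT)

def predict_next(z: np.ndarray) -> np.ndarray:
    return W_pred @ z

frame_t = rng.normal(size=PIXELS)    # observation at time t
frame_t1 = rng.normal(size=PIXELS)   # observation at time t+1

z_t, z_t1 = encode(frame_t), encode(frame_t1)

# JEPA-style training signal: distance between the predicted and actual
# embeddings of the next observation (no pixel reconstruction).
latent_loss = np.mean((predict_next(z_t) - z_t1) ** 2)
print(z_t.shape, float(latent_loss))
```

The contrast with a generative model is the output side: here every prediction is a 256-dimensional vector rather than a 12,288-pixel frame, and irrelevant surface detail is discarded by the encoder before prediction ever happens.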

JEPA architecture: key iterations

  • V-JEPA: Physical world modelling learned purely from video, no labels or text required.
  • V-JEPA 2: Zero-shot deployment on physical robots for pick-and-place tasks with minimal training data.
  • C-JEPA: Designed to model causal relationships between objects in a scene.

How AMI Labs fits in the market

AMI Labs enters a well-funded world model space, but with a sharply different philosophy from its competitors.

Google DeepMind — Genie 3

A fully generative autoregressive video model creating real-time 3D environments frame-by-frame from text prompts. Already used by Waymo for self-driving car training data.

World Labs (Fei-Fei Li)

Raised $1 billion to build explicit 3D scene representations using 3D Gaussian Splatting. Product "Marble" generates photorealistic, editable 3D environments exportable to Unreal Engine.


Where rivals build simulation environments for training AI, AMI Labs targets inference-time intelligence: the decision-making layer that operates in the real world. Its bet is that abstract embeddings are far more compute-efficient than pixel-level prediction, unlocking AI systems capable of running on edge devices and resource-constrained hardware. The company also plans to open-source significant portions of its codebase to build a surrounding developer ecosystem.
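The efficiency claim comes down to simple arithmetic on output size. Using illustrative numbers (a 256×256 RGB frame and a hypothetical 1,024-dimensional state vector, neither drawn from AMI Labs' actual systems):

```python
# Rough arithmetic behind the efficiency claim (illustrative numbers only).
pixel_outputs = 256 * 256 * 3   # values a generative model predicts per RGB frame
embedding_outputs = 1024        # values a hypothetical abstract state vector needs

ratio = pixel_outputs // embedding_outputs
print(f"{ratio}x fewer output dimensions per prediction")
```

A two-orders-of-magnitude reduction in output dimensionality per prediction step is the kind of gap that makes edge-device deployment plausible in principle, though real-world savings depend on the full architecture, not just output size.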


Sara Srifi

Sara is a Software Engineering and Business student with a passion for astronomy, cultural studies, and human-centered storytelling. She explores the quiet intersections between science, identity, and imagination, reflecting on how space, art, and society shape the way we understand ourselves and the world around us. Her writing draws on curiosity and lived experience to bridge disciplines and spark dialogue across cultures.