Yann LeCun, the Turing Award winner whose lab at Meta helped build the GPU-fueled LLM machine, just walked away from it. He didn't retire. He didn't fade. He started a new company and said out loud: we've been optimizing the wrong problem.
That's not ego protection. That's credibility.
What changed
For three years, while Meta poured tens of billions into scaling language models, LeCun watched the returns flatten. Llama 4 was supposed to be the inflection point. Instead, its benchmark numbers drew accusations of gaming and its real-world performance was middling. He walked away not because he lacked conviction, but because he paid attention to what the data was actually saying.
His diagnosis: predicting the next token in language space isn't how intelligence works. A four-year-old takes in more raw data through vision alone than GPT-4 saw in its entire training run, yet that child learns to navigate the physical world. Our LLMs can pass the bar exam but can't reliably tell you whether a thrown ball will clear a fence.
The implication: we've been solving the wrong problem at massive scale.
The funder's dilemma
Here's what makes this important for founders and investors: LeCun isn't alone. Ilya Sutskever left OpenAI making a similar call. Gary Marcus has been saying it for years. The question isn't whether they're right; it's how to position yourself while the entire industry slowly, collectively gets less wrong.
LeCun's answer is world models—systems that learn to predict and simulate physical reality, not language. Instead of tokens, predict future world states. Instead of chatbots, build systems that understand causality, physics, consequence.
Theoretically sound. Practically? Still fuzzy.
His JEPA architecture learns correlations in representation space, not causal relationships. Marcus, his longtime critic, notes this correctly: predicting patterns is not the same as understanding causes. A system trained only on footage of balls rising would learn that going up is the natural law; it wouldn't understand gravity. Same correlation problem, new wrapper.
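To make the distinction concrete, here is a minimal, purely illustrative sketch in PyTorch of the two training objectives side by side: next-token prediction over a discrete vocabulary versus JEPA-style prediction of a future observation's representation. The layer sizes, module names, and the frozen target encoder are assumptions for illustration, not LeCun's actual architecture or code.

```python
# Illustrative sketch only; sizes and modules are assumptions, not JEPA itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

# --- LLM-style objective: predict the next token in a discrete vocabulary ---
vocab_size, d_model = 1000, 64
token_embed = nn.Embedding(vocab_size, d_model)
next_token_head = nn.Linear(d_model, vocab_size)

tokens = torch.randint(0, vocab_size, (8, 16))      # a batch of token sequences
context = token_embed(tokens[:, :-1]).mean(dim=1)   # crude summary of the context
token_loss = F.cross_entropy(next_token_head(context), tokens[:, -1])

# --- JEPA-style objective: predict the representation of a future observation ---
obs_dim, latent_dim = 128, 32
encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
target_encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
predictor = nn.Linear(latent_dim, latent_dim)

obs_now = torch.randn(8, obs_dim)    # current observation, e.g. a video frame
obs_next = torch.randn(8, obs_dim)   # the observation a moment later
z_now = encoder(obs_now)
with torch.no_grad():                # target branch gets no gradients
    z_next = target_encoder(obs_next)
jepa_loss = F.mse_loss(predictor(z_now), z_next)

print(f"token-space loss: {token_loss.item():.3f}, "
      f"representation-space loss: {jepa_loss.item():.3f}")
```

Note that nothing in the second loss references a physical law. It only rewards agreement between a predicted and an observed representation, which is exactly where Marcus's objection lands.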
What founders should actually watch
The real lesson isn't which architecture wins. It's that capital allocation is broken and about to correct.
Hundreds of billions flowed into scaling LLMs because the returns looked obvious and fast: chips, cloud, closed APIs. The infrastructure calcified. Investors became trapped in the installed base. When the problem shifted from "scale faster" to "solve something different," the entire system had inertia.
Now LeCun, reportedly with €500 million in backing and a partnership with Meta, is betting that world models will gain traction faster than skeptics expect. Maybe he's right. Maybe the robotics industry, tired of neural networks that fail in novel environments, will actually deploy these systems. Maybe autonomous vehicles finally move because predicting physical futures beats reactive pattern-matching.
Or maybe it takes a decade, world models stay stuck in research, and LLMs compound their current dominance.
For founders: this is the opening. When the paradigm itself is uncertain, the cost of hedging drops. Build toward physical understanding, not linguistic sophistication. Robotics, manufacturing, autonomous systems: these are the verticals that would benefit first from working world models and that bigger LLMs won't solve on their own. That's your wedge.
The adaptability play
What separates LeCun's move from ego-driven pivots: he didn't blame market conditions or bad luck. He said, in effect: "I was wrong about where to allocate effort, and here's why."
That transparency, that willingness to course-correct in public without shame, changes how people bet on him.
The founders who win in 2026-2027 won't be the ones married to LLM scaling or world model purity. They'll be the ones who notice when reality diverges from the plan and move—fast, openly, without defensiveness.
LeCun just did that at scale.
The question isn't whether he's right about world models. It's whether his willingness to change publicly, with evidence, keeps him first-mover on whatever intelligence actually looks like next.