The LeCun Pivot: Why the Smartest Researcher in AI Just Changed His Mind—Publicly

Yann LeCun, the Turing Award winner who helped build the GPU-fueled LLM machine, just walked away from it. He didn't retire. He didn't fade. He started a new company and said out loud: we've been optimizing the wrong problem.

That's not ego protection. That's credibility.

What changed

For three years, while Meta poured hundreds of billions into scaling language models, LeCun watched the returns flatten. Llama 4 was supposed to be the inflection point. Instead, the benchmarks were manipulated and the real-world performance was middling. Not because he lacked conviction—because he paid attention to what the data was actually saying.

His diagnosis: predicting the next token in language space isn't how intelligence works. By age four, a child has processed more visual data than went into all of GPT-4's training combined. Yet that child learns to navigate the physical world. Our LLMs can pass the bar exam but can't figure out if a ball will clear a fence.

The implication: we've been solving the wrong problem at massive scale.

The funder's dilemma

Here's what makes this important for founders and investors: LeCun isn't alone. Ilya Sutskever left OpenAI making the same call. Gary Marcus has been saying it for years. The question isn't whether they're right—it's how to position when the entire industry is collectively getting less wrong, but slowly.

LeCun's answer is world models—systems that learn to predict and simulate physical reality, not language. Instead of tokens, predict future world states. Instead of chatbots, build systems that understand causality, physics, consequence.

Theoretically sound. Practically? Still fuzzy.

His JEPA architecture learns correlations in representation space, not causal relationships. Marcus, his longtime critic, correctly notes this: prediction of patterns is not understanding of causes. A system trained only on balls going up would learn that "up" is the natural law. It wouldn't understand gravity. Same correlation problem, new wrapper.

What founders should actually watch

The real lesson isn't which architecture wins. It's that capital allocation is broken and about to correct.

Hundreds of billions flowed into scaling LLMs because the returns were obvious and fast—chips, cloud, closed APIs. The infrastructure calcified. Investors became trapped in the installed base. When the problem shifted from "scale faster" to "solve different," the entire system had inertia.

Now LeCun, with €500 million and Meta's partnership, is betting that world models will see traction faster than skeptics expect. Maybe he's right. Maybe the robotics industry, tired of neural networks that fail on novel environments, will actually deploy these systems. Maybe autonomous vehicles finally move because prediction of physical futures beats reactive pattern-matching.

Or maybe it takes a decade and world models remain research while LLMs compound their current dominance.

For founders: this is the opening. When paradigm-level uncertainty exists, the cost of hedging drops. Build toward physical understanding, not linguistic sophistication. Robotics, manufacturing, autonomous systems—these verticals benefit immediately from world models and can't be solved by bigger LLMs. That's your wedge.

The adaptability play

What separates LeCun's move from ego-driven pivots: he didn't blame market conditions or bad luck. He said: "I was wrong about where to allocate effort, and here's why."

That kind of transparency, public course-correction without shame, changes how people bet on him.

The founders who win in 2026-2027 won't be the ones married to LLM scaling or world model purity. They'll be the ones who notice when reality diverges from the plan and move—fast, openly, without defensiveness.

LeCun just did that at scale.

The question isn't whether he's right about world models. It's whether his willingness to change publicly, with evidence, keeps him first-mover on whatever intelligence actually looks like next.

Climate Tech in 2026: The Founder's Playbook

The climate tech sector is correcting. After the hype peak of 2021—$51 billion in funding—reality hit hard. Funding dropped 75% by 2024. AI vacuumed up the oxygen. Generalist VCs fled. What's left is brutal clarity: only capital-efficient, economically defensible businesses survive. This is actually good news for the right founders.

The pattern is familiar. In Cleantech 1.0, founders built for virtue. They believed the mission transcended unit economics. Investors believed it too. Then gravity reasserted itself.

Now, in 2026, the founders winning aren't the most ideologically pure—they're the fastest learners operating in spaces where climate solutions and profit align naturally. That convergence is real. AI's power hunger is creating $100+ billion in infrastructure demand. Industrial customers are electrifying and saving money. Adaptation is no longer abstract; it's quantifiable risk reduction. The founders who see this clearly, move fast, and adjust when reality diverges from assumptions will outcompete the ones still waiting for policy to validate their dreams.

What Separates Fundable Climate Founders

First: ruthless unit economics focus. Your climate impact is a feature, not your business model. Lead with cost reduction, reliability improvement, or regulatory compliance value. Sustainability is the bonus. Companies solving problems that generate immediate economic returns—regardless of climate benefit—are three years ahead of those betting on green premiums or subsidies. Test this: would your customer still buy your product if the Inflation Reduction Act evaporated tomorrow? If not, you're not building a durable business.

Second: capital efficiency, pursued like your life depends on it. Hardware climate startups raised massive rounds in 2021 at inflated valuations. They're now burning cash with no path to Series B. The founders winning in 2026 are raising half as much, hitting milestones with it, and extending runway to 24+ months between rounds. This isn't conservatism; it's survival instinct. Assume 18-24 months to next capital, not 12-15. Build backward from that reality.

Third: strategic positioning over technical perfection. Breakthrough innovation is necessary but not sufficient. You need corporate acquirers to see themselves in your business. Identify 3-5 likely buyers (heat pump OEMs, industrial conglomerates, utilities, hyperscalers) before closing your seed round. Pilot with them. Build product around their workflows. The M&A market is open and accelerating—doubling in 2025. Your exit isn't IPO; it's acquisition. Optimize for that.

Fourth: cognitive flexibility meets determined execution. The best climate founders hold convictions lightly. They commit fully to current hypotheses, measure relentlessly, and pivot when data demands it—without ego. Industrial decarbonization looked like a broad-market play two years ago; now the wins are vertical-specific (cement, steel, chemicals). Adaptation was niche; now it's 28% of deals. Grid software was boring; now it's critical infrastructure. The founders shipping fast, gathering customer evidence, and adjusting course are outpacing those white-knuckling outdated strategies.

The Operating Tactic That Works

Install a monthly red-team session. Invite your most skeptical advisor, your closest customer, and your CFO. The question: what would kill this business in six months? Force specificity. Pre-commit to metrics that trigger a pivot or sunset. Don't wait for funding to force the reckoning; design it in.

The Move

Climate tech rewards founders who are shameless about changing their mind. Not wishy-washy—decisive. You see new evidence (customer feedback, policy shift, competitive move), you recalibrate, you ship the update. You tell investors exactly why you changed course. That's not weakness; that's intelligence. That's fundability.

The climate problem is still 30 years away from solved. But the winners solving it in 2026 aren't the ones with the best intentions. They're the ones learning fastest and adapting hardest. Ship, test, refactor. That's your operating system.

Meta’s $2B Manus Deal: A Practical Playbook for Ambitious Founders

Founders often ask: “Will more US tech giants buy Asian startups?”

The sharper question is: if only a small fraction of companies generate most of the returns, can you afford to build anything that isn’t capable of becoming a global outlier?

Meta’s US$2+ billion acquisition of Manus—a company founded in Beijing, redomiciled in Singapore, and integrated into Meta’s AI stack in under a year—is not just a China‑US‑Singapore story. It’s a concrete example of how to design a company that can scale across borders, survive geopolitics, and be acquirable at speed.

What Manus Actually Did

Manus launched publicly around March 2025 with an AI agent that could autonomously research, code, and execute multi‑step workflows. Within roughly eight months it reportedly crossed US$100 million in ARR, reaching a US$125 million revenue run rate before Meta signed the deal.

Operationally, it:

  • Processed over 147 trillion tokens and supported tens of millions of “virtual computers” spun up by users, which only makes sense at global internet scale.

  • Ran as an orchestration and agent layer on top of multiple foundation models (including Anthropic and Alibaba’s Qwen), avoiding dependence on a single model provider.

On the corporate side, Manus:

  • Started in Wuhan and Beijing under Beijing Butterfly Effect Technology, with a mostly China‑based team.

  • Shifted its headquarters to Singapore in mid‑2025, moving leadership and critical operations out of Beijing.

  • Restructured so that, by the time Meta announced the acquisition, Chinese ownership and on‑the‑ground China operations would be fully unwound; the company committed to ceasing services in China.

Meta bought a product already scaled, a revenue engine compounding at nine‑figure ARR, and a structure that could clear US regulatory and political review.

Geopolitics as a Design Constraint

Manus scaled in the wake of DeepSeek’s R1 moment, when a Chinese lab demonstrated frontier‑class performance at a fraction of Western compute budgets and shook confidence in US AI dominance. That moment accelerated a narrative where AI is treated as strategic infrastructure: tighter export controls, outbound investment restrictions on Chinese AI, and public scrutiny of anyone funding Chinese‑linked AI companies.

Benchmark’s US$75 million Series B in Manus was investigated under Washington’s new outbound regime and criticized as “funding the adversary.” Two details mattered:

  • Manus did not train its own foundation models; it built agents on top of existing ones, placing it in a less‑restricted category.

  • It was structured via Cayman and Singapore, with a stated pivot away from China.

Meta then finished the de‑risking: buying out Chinese shareholders, committing to end China operations, and framing Manus as a Singapore‑based AI business joining Meta.

For founders, the lesson is blunt: jurisdiction, ownership and market footprint now sit beside product and traction as first‑order design choices. They can’t be an afterthought if you want a strategic buyer.

What This Implies for How You Build

The Manus story turns a vague ambition (“go global”) into specific requirements:

1. Infrastructure built for real scale

Handling 147 trillion tokens and millions of ephemeral environments was possible only because Manus was architected from day one to operate like a web‑scale SaaS, not a regional tool. As a founder, that means:

  • Cloud‑native design with serious observability and reliability.

  • Data and compliance posture that won’t collapse under US or EU due diligence.

2. A team that isn’t anchored to one country

Manus began in China but rapidly built a presence across Singapore, Tokyo and San Francisco, aligning product, sales and hiring with global customers and capital pools. Practically:

  • At least one founder or senior leader who has operated in major tech hubs.

  • Early design partners or users outside your home market.

3. Legal and cap table flexibility

Manus showed that unlocking a large exit might require:

  • Redomiciling to a neutral or “trusted” jurisdiction like Singapore.

  • Reworking the shareholder base to remove politically sensitive investors.

  • Exiting a big home market entirely, if that market blocks strategic buyers.

If your current structure makes those moves impossible or prohibitively expensive, your future options are already constrained.

4. Revenue ambition that assumes a global customer

Crossing US$100M ARR in under a year is only achievable if:

  • The problem you solve is universal.

  • Your pricing and packaging make sense for large customers in New York, Berlin or Tokyo, not just in your home market.

You can start with regional customers, but you should be honest about whether the 100th customer could be a global enterprise rather than just a better‑known local logo.

Three Questions to Ask Yourself Now

If you’re a founder in an emerging market post‑Manus, a simple self‑audit goes a long way:

  1. If a Meta‑scale acquirer appeared in 12 months, what would break first—structure, regulation, or infra?
    Make that list explicit. Those are not “later” issues anymore.

  2. Could your current architecture handle a 100x increase in usage without a total rebuild?
    If not, you’re placing an invisible ceiling on your own upside before power‑law dynamics can ever help you.

  3. Do your first 10 hires and first 10 customers make expansion easier or harder?
    Manus’ user base and team footprint made going beyond its origin market feel like scaling, not reinventing.

The Manus deal doesn’t suggest everyone will be bought for billions. It does show that markets are now rewarding teams that design for scale across borders, anticipate geopolitical friction, and stay acquirable.

If you’re serious about building something that matters, that’s the bar.

2026 is the year we stop using the wrong denominator

Everyone keeps asking: "Can AI do X yet?"

That's the wrong question, in the same way "How many alumni does this university have?" is the wrong question. The question is always: out of what total?

In 2024–2025, AI was graded on the easiest denominator available: best-case prompts, controlled conditions, with a human babysitter. In 2026, the denominator changes to: all the messy, real tasks done by normal people, under time pressure, with reputational and legal consequences.

This shift isn't coming from research labs. It's coming from the fact that AI is moving out of demos and into production systems where failure is expensive.

The "90% accurate" trap (toy example)

Founders love hearing "90% accuracy." Buyers do not.

Imagine an AI agent that helps a sales team by drafting and sending follow-up emails. It takes 10,000 actions/month (send, update CRM, schedule, etc.). Even a 99% success rate, well above the pitch-deck 90%, sounds elite—until you do the denominator math.

  • 99% success on 10,000 actions = 100 failures/month.

  • If even 10 of those failures are "high-severity" (wrong recipient, wrong pricing, wrong attachment, embarrassing hallucination), that's not a product. That's a recurring incident program.

Now flip the requirement: if the business can tolerate, say, 1 serious incident/month, then the real bar isn't 99%. It might be 99.99% on the subset of actions that can cause damage (and a forced escalation path on everything uncertain). This is why "accuracy" is the wrong headline metric; the real metric is incidents per 1,000 actions, segmented by severity, plus time-to-detect and time-to-recover.
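The denominator math above fits in a few lines. This is a sketch of the toy example only; the function names are illustrative, and the assumption that all 10,000 actions could cause damage is mine, made to match the 99.99% bar in the text:

```python
# Denominator math for the toy example above: 10,000 actions/month,
# 99% success, ~10% of failures high-severity.

def incident_metrics(actions_per_month, success_rate, high_severity_share):
    """Return (total failures, high-severity incidents,
    high-severity incidents per 1,000 actions)."""
    failures = round(actions_per_month * (1 - success_rate))
    high_sev = failures * high_severity_share
    per_1000 = round(high_sev / actions_per_month * 1000, 2)
    return failures, high_sev, per_1000

print(incident_metrics(10_000, 0.99, 0.10))  # (100, 10.0, 1.0)

def required_success_rate(damage_capable_actions, tolerated_incidents):
    """Flip the question: the success rate needed on damage-capable
    actions to stay inside a tolerated incident budget."""
    return 1 - tolerated_incidents / damage_capable_actions

# If all 10,000 actions could cause damage and the business tolerates
# 1 serious incident/month, the bar is 99.99%, not 99%.
print(required_success_rate(10_000, 1))  # 0.9999
```

The point of writing it down: "accuracy" is one number, while the buyer's question needs three (volume, severity share, tolerance), and the answer changes by two orders of magnitude depending on the denominator.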

Most founders still pitch on accuracy. Smart buyers ask for the incident dashboard first.

A founder vignette (postmortem-style)

A founder ships an "autonomous support agent" into production for a mid-market SaaS. The demo crushes: it resolves tickets, updates the CRM, and drafts refunds. Two weeks later, the customer pauses rollout—not because the agent is dumb, but because it's unmeasured. No one can answer: "How often does it silently do the wrong thing?"

The agent handled 3,000 tickets, but three edge cases triggered a nasty pattern: it refunded the wrong plan tier twice and sent one confidently wrong policy explanation that got forwarded to legal. The customer doesn't ask for a bigger model. They ask for logging, evals, and hard controls: "Show me the error distribution, add an approval queue for refunds, and give me an incident dashboard."

The founder realizes the real product isn't "an agent." It's a managed system with guardrails and proof. Everything that came before was a science fair project.

The real metric: evals become the business model

The most valuable AI startups in 2026 won't win by shouting "state of the art." They'll win by making buying safe.

That means being able to say, quickly and credibly:

  • "Here's performance on your distribution (not our demo)."

  • "Here's what it does when uncertain: abstain, ask, escalate."

  • "Here's the weekly report: incident rate, severity mix, and top failure modes."

In other words, evaluation becomes the business model: trust, control, and accountability are what unlock budget.

Vendors who can't report these metrics weekly aren't ready for revenue. They're still playing.
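The weekly report described above is a small aggregation over an action log. A minimal sketch, assuming a hypothetical log schema (`outcome`/`severity`/`mode`) that no particular vendor uses:

```python
# Aggregate a log of agent actions into the three numbers buyers ask
# for: incident rate, severity mix, and top failure modes.
from collections import Counter

def weekly_report(log):
    total = len(log)
    incidents = [e for e in log if e["outcome"] == "failure"]
    return {
        "incidents_per_1000_actions": round(len(incidents) / total * 1000, 2),
        "severity_mix": dict(Counter(e["severity"] for e in incidents)),
        "top_failure_modes": Counter(e["mode"] for e in incidents).most_common(3),
    }

# 1,000 actions: 995 clean, 4 low-severity slips, 1 high-severity incident.
log = (
    [{"outcome": "ok", "severity": None, "mode": None}] * 995
    + [{"outcome": "failure", "severity": "low", "mode": "wrong_field"}] * 4
    + [{"outcome": "failure", "severity": "high", "mode": "wrong_recipient"}]
)
print(weekly_report(log))
# {'incidents_per_1000_actions': 5.0, 'severity_mix': {'low': 4, 'high': 1},
#  'top_failure_modes': [('wrong_field', 4), ('wrong_recipient', 1)]}
```

If a vendor cannot produce something this simple from their production logs, the logs don't exist.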

Agents will grow up: boring, instrumented operations

"Agents" will keep getting marketed as autonomous employees. But founders who actually want revenue will build something more boring and more real:

  • Narrow scope (fewer actions, done reliably).

  • Hard permissions and budgets (prevent expensive mistakes).

  • Full observability (every action logged, queryable, auditable).

  • Explicit escalation paths (humans handle the tail risk).
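The four properties above reduce to a thin layer between the model and its tools. This sketch is illustrative only: the action names, budget rule, and confidence threshold are assumptions, not any real product's API:

```python
# A permissioned, fully logged action layer for an agent: narrow scope,
# hard budgets, observability, and escalation for the uncertain tail.
import time

ALLOWED_ACTIONS = {"send_email", "update_crm"}  # narrow scope
BUDGETS = {"refund": 0}                         # refunds need human approval
CONFIDENCE_FLOOR = 0.9                          # below this, escalate

audit_log = []  # every action, queryable and auditable

def execute(action, payload, confidence):
    """Run one agent action under hard permissions; log the outcome."""
    entry = {"ts": time.time(), "action": action, "payload": payload}
    if action not in ALLOWED_ACTIONS and BUDGETS.get(action, 0) <= 0:
        entry["outcome"] = "blocked"    # outside scope or out of budget
    elif confidence < CONFIDENCE_FLOOR:
        entry["outcome"] = "escalated"  # humans handle the tail risk
    else:
        entry["outcome"] = "executed"   # the real tool call would go here
    audit_log.append(entry)
    return entry["outcome"]

print(execute("send_email", {"to": "ops@example.com"}, 0.97))  # executed
print(execute("refund", {"amount": 500}, 0.99))                # blocked
print(execute("update_crm", {"id": 42}, 0.55))                 # escalated
```

Nothing here is clever, which is the point: the containment logic is boring, deterministic code sitting in front of a probabilistic model.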

When the denominator becomes "all actions in production," reliability and containment beat cleverness—every time. The vanity metric is "tickets touched." The real metric is "severity-weighted incident rate per 1,000 actions." Most founders optimize for the first. Smart ones optimize for the second.

One founder test for 2026

If someone claims "AI is transforming our customer's business," ask for one number:

"What percentage of their core workflows run with logged evals, measured incident rates, and defined escalation policies?"

If the answer is fuzzy, it's still a prototype. If it's precise and improving week-over-week, it's a product. If you can't report it, you can't scale it.