Generate:Biomedicines just announced Phase 3 trials for GB-0895, an antibody designed entirely by AI, recruiting patients from 45 countries as of late 2025. Isomorphic Labs says human trials are "very close." That's not hype. That's proof that AI-designed drugs work in humans.
And the market hasn't priced this in yet.
Generative biology, which applies the same transformer architectures behind ChatGPT to protein design, doesn't incrementally improve drug discovery. It compresses it. Traditional timelines: 6 years from target to first human dose. Generative biology: 18-24 months. That's not faster iteration. That's a category shift.
Here's what's actually happening: A handful of well-funded companies have already won the scaling race. Profluent's ProGen3 model demonstrated something critical: scaling laws (bigger models = better results) apply to protein design just like they do to LLMs. The company raised $106M in Series B funding in November 2025. EvolutionaryScale built ESM3, a 98-billion-parameter model trained on 2.78 billion proteins, and created novel GFP variants that simulate 500 million years of evolution computationally. Absci is validating 100,000+ antibody designs weekly in silico, reducing discovery cycles from years to months.
These aren't startups anymore. They're infrastructure.
The Market Opportunity Is Massive, But Concentrated
The AI protein design market is $1.5B today (2025) and is projected to reach $7B by 2033 (roughly 25% CAGR). Protein engineering more broadly: $5B → $18B in the same window. But here's the friction: success requires vertical integration. Algorithms alone are defensible for exactly six months. What matters is the ability to design, synthesize, test, and iterate at scale: wet lab automation, manufacturing readiness, regulatory playbooks.
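Those headline growth rates are easy to sanity-check with the standard CAGR formula; a quick sketch, assuming a 2025-2033 window (the implied rate shifts with the exact endpoints you pick):

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by two endpoints."""
    return (end / start) ** (1 / years) - 1

# AI protein design: $1.5B (2025) -> $7B (2033)
print(f"{cagr(1.5, 7.0, 8):.1%}")
# Protein engineering: $5B -> $18B over the same window
print(f"{cagr(5.0, 18.0, 8):.1%}")
```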
Generate raised $700M+ because it built all three. Profluent raised $150M because it owns the data and the model. Absci went public because it combined proprietary platform with clinical validation. The solo-algorithm play? Dead on arrival.
This matters for founders evaluating entry points. The winning thesis isn't "better protein design." It's "compressed drug discovery + manufacturing at scale + regulatory clarity." Pick one of those three and you're a feature. Own all three and you're a platform.
Follow the Partnerships, Not the Press Releases
Novartis: $1B deal with Generate:Biomedicines (Sept 2024). Bristol Myers Squibb: $400M potential with AI Proteins (Dec 2024). Eli Lilly + Novartis: Both partnered with Isomorphic Labs. Corteva Agriscience: Multi-year collaboration with Profluent on crop gene editing.
These deals aren't about technology proving. They're about risk transfer. When Novartis commits $1B and strategic alignment, they're not hedging on whether AI-designed proteins work; they're betting on speed-to-market mattering more than incremental efficacy improvements. That's a macro signal: pharma's risk tolerance is shifting from "is it better?" to "can we deploy it in 36 months?"
For investors, this is the tell. Follow where the check sizes are growing, not where the valuations are highest.
The Real Risk Isn't Technical—It's Regulatory and Biosecurity
Can generative biology design novel proteins? Yes. Can those proteins fold predictably? Mostly. Will they work in vivo? That's the test happening right now in Phase 3 trials.
But the bigger risk is slower: regulatory alignment. Agencies are adapting, but they're not leading. Gene therapy has 3,200 trials globally. Only a fraction navigated the approval gauntlet successfully. AI-designed therapeutics will face the same friction unless founders invest heavily in regulatory affairs early, not late.
And then there's dual-use risk. Generative biology lowers barriers to misuse: AI models could design pathogens or toxins for bad actors. This isn't hypothetical, and an estimated 94% of countries still lack biosecurity governance frameworks to address it. Founders that build secure-by-design architectures and engage proactively with regulators on dual-use mitigation will differentiate themselves sharply from those that don't.
The Next 24 Months: Clinical Data Wins. Everything Else Is Narrative
Generate's Phase 3 readout will determine whether the market reprices generative biology from "interesting" to "inevitable." If it works, expect a flood of follow-on funding, accelerated IND filings, and a stampede of partnerships. If it fails, or if safety signals emerge, you'll see valuation compression and investor skepticism that lasts years.
For founders: don't chase market size. Chase clinical validation. For investors: don't chase valuations. Chase clinical milestones.
The inflection point is here. The question is whether you're positioned to capture it or just watch it pass.
Naval called Moltbook the “new reverse Turing test,” and everyone immediately treated it like a profound milestone. I think it’s something else: a live-fire test of whether we can contain agentic systems once they’re networked together.
Let’s be precise. Moltbook is an AI-only social platform, roughly “Reddit, but for agents,” where humans can watch but not participate. The pitch is simple: observe how AI agents behave socially when left alone. Naval’s label is elegant because it implies the agents are now the judges—humans are the odd ones out.
But if you’re a founder or an operator, you should ignore the poetry and ask: what is the product really doing to the world?
Moltbook’s real innovation is not “AI social behavior.” It’s a new topology: lots of agents, from different builders, connected in a public arena where they can feed each other instructions, links, and narratives at scale. That’s not a reverse Turing test. It’s a coordination surface.
And coordination surfaces create externalities.
In the old internet, humans spammed humans. In the new internet, agents will spam agents, except "spam" won't just be annoying; it will be executable. If you give agents permissions (email, calendars, bank access, code execution, "tools"), let them ingest untrusted content from a network like Moltbook, and leave them a channel to act on what they read, you have assembled what security folks call the "lethal trifecta": access to private data, exposure to untrusted content, and the ability to act or exfiltrate.
This is where the discussion gets serious.
Forbes contributor Amir Husain’s critique is basically a warning about permissions: people are already connecting agents to real systems—home devices, accounts, encrypted messages, emails, calendars—and then letting those agents interact with unknown agents in a shared environment. That’s an attack surface, not a party trick. If the platform enables indirect prompt injection—malicious content that causes downstream agents to leak secrets or take unintended actions—then your “social experiment” becomes a supply chain problem.
You don’t need science fiction for this to go wrong. You just need one agent that can persuade another agent to do something slightly dumb, repeatedly, across thousands of interactions. We already know that when systems combine high permissions, external content ingestion, and weak boundaries, bad things happen—fast.
So here’s my different perspective:
Moltbook isn’t proving that agents are becoming “more human.” It’s proving that we’re about to repeat the Web2 security arc—except the users are autonomous processes with tools, and the cost of an error is not just misinformation, it’s action.
And yes, that matters for investors.
I’m optimizing for fund outcomes within a horizon, not for philosophical truth at year 12. The investable question is not “is this emergent intelligence?” It’s: “does this create durable value that survives the cleanup required to make it safe?”
If Moltbook becomes the standard sandbox for red-teaming agents—great. If it becomes the public square where autonomous tool-using systems learn adversarial persuasion from each other, that’s not a product category; that’s a systemic risk generator, and regulators will come for everyone adjacent to it.
What should founders do?
First, treat any agent-to-agent network as hostile-by-default. Second, sandbox tools like your company depends on it—because it does. Third, stop marketing autonomy until you can measure and bound it, because markets pay for narratives on the way up, and punish you when the story breaks.
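The "hostile-by-default" stance in the first two points can be made concrete as a deny-by-default policy gate in front of every tool call. A minimal sketch (all names here are illustrative, not any real agent framework's API):

```python
# Illustrative names only; no real framework's API is implied.
ALLOWED_TOOLS = {"search", "summarize", "send_email"}
HIGH_RISK_TOOLS = {"send_email", "execute_code", "transfer_funds"}

def gate_tool_call(tool, source):
    """Deny-by-default gate for agent tool calls.

    source: where the triggering instruction came from.
    "user" means the trusted operator; anything else (e.g. another
    agent on a shared network) is treated as hostile.
    """
    if tool not in ALLOWED_TOOLS:
        return False, f"tool '{tool}' not on the allowlist"
    if source != "user" and tool in HIGH_RISK_TOOLS:
        return False, "high-risk tool blocked for network-originated input"
    return True, "ok"

# A post ingested from another agent can't trigger outbound email...
print(gate_tool_call("send_email", source="network"))
# ...but the trusted operator still can, and low-risk reads pass.
print(gate_tool_call("send_email", source="user"))
print(gate_tool_call("search", source="network"))
```

The design choice that matters: trust is a property of where the instruction came from, not of how persuasive it sounds.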
Naval’s phrase is catchy. But the real test isn’t whether humans can still tell who’s who.
The real test is whether we can build agent networks that don’t turn “conversation” into “compromise.”
I agree with the workflow diagnosis. I disagree with the implied endgame.
Not because “gut” is fake—but because “gut” is often a label we apply when we haven’t defined success tightly enough, or when we don’t have a measurement loop that forces our beliefs to confront outcomes.
AI expands visibility, speeds up pipelines, and pushes the industry toward shared tools and shared feeds. When everyone can scan more of the world, “who saw it first” decays.
But convergence of inputs does not imply convergence of results. The edge moves from access to learning rate.
Oxford’s strongest point is that the power-law outliers are indistinguishable from “just bad” in the moment, and that humans use conviction to step into ambiguity.
I accept that premise and I still think the conclusion is wrong.
Because “conviction” is not a supernatural faculty. It’s a policy under uncertainty. And policies can be evaluated.
If your decision rule can’t be backtested, it’s not conviction. It’s narrative.
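To make "backtestable" concrete: a decision rule is just a function over deal features, and it can be scored out-of-time. A toy sketch on synthetic deals (the signal, the threshold, and the numbers are all invented for illustration):

```python
import random

random.seed(7)

# Synthetic deal history: (signal_score, year, succeeded).
# Success probability rises mildly with the signal.
deals = []
for year in range(2015, 2025):
    for _ in range(500):
        s = random.random()
        succeeded = random.random() < 0.01 + 0.15 * s
        deals.append((s, year, succeeded))

def rule(deal):
    """The 'conviction' policy under test: back deals with signal > 0.8.
    In practice the threshold would be chosen on the earlier window."""
    return deal[0] > 0.8

# Score the rule strictly out-of-time, on 2021+ deals only.
test = [d for d in deals if d[1] >= 2021]
picked = [d for d in test if rule(d)]
hit_rate = sum(d[2] for d in picked) / len(picked)
base_rate = sum(d[2] for d in test) / len(test)
print(round(hit_rate, 3), "vs base rate", round(base_rate, 3))
```

If the rule's out-of-time hit rate doesn't beat the base rate, the "conviction" was narrative all along.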
Some firms try to extract psychology from language data. Sometimes it works as a cue; often it’s noisy. And founders adapt as soon as they sense the scoring system.
So the goal isn’t “measure personality with high accuracy.” The goal is: build signals that are legible, repeatable, falsifiable and then combine them with a process that forces updates when reality disagrees.
If founders optimize public narratives, then naive text scoring collapses into a Goodhart trap.
The difference between toy AI and investable AI is verification: triangulate claims, anchor them in time, reject numbers that can’t be sourced, and penalize inconsistency across evidence.
That’s how you turn unstructured noise into features you can actually test.
Networks and brand matter because markets respond to them—follow-on capital, recruiting pull, distribution, acquisition gravity.
So yes: status belongs in the model.
But modeling status is not the same thing as needing a human network as the enduring edge. One is an input signal. The other is a claim about irreducible advantage.
If an effect is systematic, it’s modelable.
A lot of debates about “AI can’t do VC” hide an objective mismatch.
If your target is “eventual truth at year 12,” you’ll privilege a certain kind of human judgment. If your target is “realizable outcomes within a fund horizon,” you’ll build a different machine.
I’m comfortable modeling hype—not because fundamentals don’t matter, but because time and liquidity are part of the label. Markets pay for narratives before they pay for final verdicts, and funds get paid on the path, not just the destination.
Oxford is right about current practice: AI reshapes the funnel, while humans still own the final decision and accountability.
My reaction is that this is not a permanent moat. It’s a temporary equilibrium.
Define success precisely. Build signals that survive verification. Backtest honestly. Update fast.
That’s not gut.
That’s an investing operating system.
I called neuro-symbolic AI a 600% growth area back when I analyzed 20,000+ NeurIPS papers. I wrote that world models would unlock the $100T bet because spatial intelligence beats text prediction. I predicted AGI would expose average VCs because LLMs struggle with complex planning and causal reasoning.
Now Ilya Sutskever—co-founder of OpenAI, the guy who built the thing everyone thought would lead to AGI—just said it out loud: "We are moving from the age of scaling to the age of research."
That's not a dip. That's a ceiling.
Here's what the math actually says:
Meta, Amazon, Microsoft, Google, and Tesla have spent $560 billion on AI capex since early 2024. They've generated $35 billion in AI revenue. That's a 16:1 spend-to-revenue ratio. AI-related spending now accounts for 50% of U.S. GDP growth. White House AI Czar David Sacks admitted that a reversal would risk recession.
The 2000 dot-com crash was contained because telecom was one sector. AI isn't. This is systemic exposure dressed up as innovation.
The paradigm that just died:
The Kaplan scaling laws promised a simple formula: 10x the parameters, 10x the data, 10x the compute = predictably better AI. It worked from GPT-3 to GPT-4. It doesn't work anymore. Sutskever's exact words: these models "generalize dramatically worse than people."
Translation: we hit the data wall. Pre-training has consumed the internet's high-quality text. Going 100x bigger now yields marginal, not breakthrough, gains. When your icon of deep learning says that, you're not in a correction—you're at the end of an era.
The five directions I've been tracking—now validated:
The shift isn't abandoning AI. It's abandoning the lazy idea that "bigger solves everything." Here's where the research-to-market gap is closing faster than most realize:
1. Neuro-symbolic AI (the 600% growth area I flagged)
I wrote that neuro-symbolic was the highest-growth niche with massive commercial gaps. Now it's in Gartner's 2025 Hype Cycle. Why? Because LLMs hallucinate, can't explain reasoning, and break on causal logic. Neuro-symbolic systems don't. Drug discovery teams are deploying them because transparent, testable explanations matter when lives are on the line. MIT-IBM frames it as a layered architecture: neural networks as the sensory layer, symbolic systems as the cognitive layer. That separation—learning vs. reasoning—is what LLMs never had.
2. Test-time compute (the paradigm I missed, but now understand)
OpenAI's o1/o3 flipped the script: spend compute at inference, not just training. Stanford's s1 model—trained on 1,000 examples with budget forcing—beat o1-preview by 27% on competition math. That's proof that intelligent compute allocation beats brute scale. But there's a limit: test-time works when refining existing knowledge, not generating fundamentally new capabilities. It's a multiplier on what you already have, not a foundation for AGI.
3. Small language models (the efficiency play enterprises actually need)
Microsoft's Phi-4-Mini, Mistral-7B, and others with 1-10B parameters are matching GPT-4 in narrow domains. They run on-device, preserve privacy, cost 10x less, and don't require hyperscale infrastructure. Enterprises are deploying hybrid strategies: SLMs for routine tasks, large models for multi-domain complexity. That's not compromise—that's architecture that works at production scale.
4. World models (the $100T bet I wrote about)
I argued that world models—systems that build mental maps of reality, not just predict text—would define the next era. They're now pulling $2B+ in funding across robotics, autonomous vehicles, and gaming. Fei-Fei Li's World Labs hit unicorn status at $230M raised. Skild AI secured $1.5B for robotic world models. And of course there's Yann LeCun's new startup. This isn't hype—it's the shift from language to spatial intelligence I predicted.
5. Agentic AI (the microservices moment for AI)
Gartner reports a 1,445% surge in multi-agent inquiries from Q1 2024 to Q2 2025. By end of 2026, 40% of enterprise apps will embed AI agents, up from under 5% in 2025. Anthropic's Model Context Protocol (MCP) and Google's A2A are creating HTTP-equivalent standards for agent orchestration. The agentic AI market: $7.8B today, projected $52B by 2030. This is exactly the shift I described in AGI VCs—unbundling monolithic intelligence into specialized, composable systems.
What kills most AI deployments (and what I've been saying):
I wrote that the gap isn't technology—it's misaligned expectations, disconnected business goals, and unclear ROI measurement. Nearly 95% of AI pilots generate no return (MIT study). The ones that work have three things: clear kill-switch metrics, tight integration loops, and evidence-first culture.
Enterprise spending in 2026 is consolidating, not expanding. While 68% of CEOs plan to increase AI investment, they're concentrating budgets on fewer vendors and proven solutions. Rob Biederman of Asymmetric Capital Partners: "Budgets will increase for a narrow set of AI products that clearly deliver results and will decline sharply for everything else."
That's the bifurcation I predicted: a few winners capturing disproportionate value, and a long tail struggling to justify continued investment.
The punchline:
The scaling era gave us ChatGPT. The research era will determine whether we build systems that genuinely reason, plan, and generalize—or just burn a trillion dollars discovering the limits of gradient descent.
My bet: the teams that win are the ones who stop optimizing for benchmark leaderboards and start solving actual constraints—data scarcity, energy consumption, reasoning depth, and trust. The ones who recognized early that neuro-symbolic, world models, and agentic systems weren't academic curiosities but the actual path forward.
I've been tracking these shifts for two years. Sutskever's admission isn't news to anyone reading this blog—it's confirmation that the research-to-market timeline just accelerated.
Ego last, evidence first. The founders who internalized that are already building what comes next.
The performance gap between tier-1 human VCs and current AI on startup selection isn't what you think. VCBench, a new standardized benchmark where both humans and LLMs evaluate 9,000 anonymized founder profiles, shows top VCs achieving 5.6% precision. GPT-4o hit 29.1%. DeepSeek-V3 reached 59.1% (though with a brutal 3% recall, meaning it almost never said "yes").[1]
That's not a rounding error. It's a 5-10x gap in precision, the metric that matters most in VC, where false positives (bad investments) are far costlier than false negatives (missed deals).[1]
But here's what the paper doesn't solve: VCBench inflated the success rate from real-world 1.9% to 9% for statistical stability, and precision doesn't scale linearly when you drop the base rate back down. The benchmark also can't test sourcing, founder relationships, or board-level value-add, all critical to real fund performance. And there's a subtle time-travel problem: models might be exploiting macro trend knowledge (e.g., "crypto founder 2020-2022 = likely exit") rather than true founder quality signals.[2]
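That base-rate caveat is quantifiable. A small sketch, assuming the model's true-positive and false-positive rates stay fixed as the base rate changes (the standard simplification): recover the implied false-positive rate from the reported 59.1% precision and 3% recall at VCBench's 9% base rate, then recompute precision at the real-world 1.9% rate:

```python
def precision(tpr, fpr, base_rate):
    """P(actual success | model says yes), via Bayes' rule."""
    tp = tpr * base_rate
    fp = fpr * (1 - base_rate)
    return tp / (tp + fp)

tpr = 0.03          # DeepSeek-V3's reported recall (true-positive rate)
p_bench = 0.09      # VCBench's inflated success base rate
prec_bench = 0.591  # reported precision at that base rate

# Invert the precision formula to recover the implied false-positive rate.
fpr = tpr * p_bench * (1 - prec_bench) / (prec_bench * (1 - p_bench))

# Re-evaluate at the real-world 1.9% base rate.
prec_real = precision(tpr, fpr, 0.019)
print(round(prec_real, 2))  # far below the 59% headline
```

Even under this charitable assumption, dropping the base rate back to 1.9% cuts the headline precision by more than half.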
Still, the directional message is clear: there is measurable, extractable signal in structured founder data that LLMs capture better than human intuition. The narrative that "AI will augment but never replace VCs" is comforting and wrong. The question isn't if AGI venture capitalists will exist—it's when they cross 15-20% unicorn hit rates in live portfolios (double the best human benchmark) and what that phase transition does to the rest of us.
Firebolt Ventures has been cited as leading the pack with a 10.1% unicorn hit rate: 13 unicorns from 129 investments since 2020 (per a Stanford GSB VCI-backed analysis, as shared publicly). Andreessen Horowitz sits at 5.5% on the same since-2020 framing, albeit at far larger volume. And importantly: Sequoia fell just below the 5% cutoff on that ranking, less because of a lack of wins than because high volume dilutes hit rate.[3]
The 2017 vintage—now mature enough to score—shows top-decile funds hitting 4.22x TVPI. Median? 1.72x. Most venture outcomes are random noise dressed up as strategy.
Here's the punchline: PitchBook's 20-year LP study has been summarized as finding that even highly skilled manager selectors (those with 40%+ hit rates at picking top-quartile funds) generate only ~0.61% additional annual returns, and that skilled selection beats random portfolios ~98.1% of the time in VC (vs. ~99.9% in buyouts).
If the best fund pickers in the world can barely separate signal from noise, what does that say about VC selection itself?
Current ML research suggests models can identify systematic misallocation even within the set of companies VCs already fund. In "Venture Capital (Mis)Allocation in the Age of AI," the median VC-backed company ranks at the 83rd percentile of model-predicted exit probability—meaning VCs are directionally good, but still leave money on the table. (Lyonnet & Stern, 2022). Within the same industries and locations, the authors estimate that reallocating toward the model's top picks would increase VCs' imputed MOIC by ~50%.
That alpha exists because human VCs are bottlenecked by:
Information processing limits. Partners evaluate ~200-500 companies/year. An AGI system can scan orders of magnitude more continuously.
Network constraints. You can't invest in founders you never meet. AGI doesn't need warm intros—it can surface weak signals from GitHub velocity, hiring patterns, or web/social-traffic deltas before the traditional network even sees the deck.
Cognitive biases. We over-index on storytelling, pedigree, and pattern-matching to our last winner. Algorithms don't care if the founder went to Stanford or speaks confidently. They care about predictors of tail outcomes.
Bessemer's famous Anti-Portfolio (the deals they passed on: Google, PayPal, eBay, Coinbase) is proof that even elite judgment systematically misfires. If the misses are predictable in hindsight, they're predictable in foresight given the right model.
AGI isn't here yet because five bottlenecks remain:
Continual learning. Current models largely freeze after training. A real VC learns from every pitch, every exit, every pivot. Research directions like "Nested Learning" have been proposed as pathways toward continual learning, but it's still not a solved, production-default capability.
Visual perception. Evaluating pitch decks, product demos, team dynamics from video requires true multimodal understanding. Progress is real, but "human-level" is not the default baseline yet.
Hallucination reduction. For VC diligence—where one wrong fact about IP or founder background kills the deal—today's hallucination profile is still too risky. Instead of claiming a universal "96% reduction," the defensible claim is that retrieval-augmented generation plus verification/guardrails can sharply reduce hallucinations in practice, with the magnitude depending on corpus quality and evaluation method.
Complex planning. Apple's research suggests reasoning models can collapse beyond certain complexity thresholds; venture investing is a 7-10 year planning problem through pivots, rounds, and market shifts.
Causal reasoning. Correlation doesn't answer "If we invest $2M vs. $1M, what happens?" Causal forests and double ML estimate treatment effects while controlling for confounders. The infrastructure exists; it's not yet integrated into frontier LLMs. Give it 18 months.
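For the curious, the double-ML recipe is mechanically simple. A pure-Python sketch on synthetic data with linear nuisance models (real pipelines would swap in causal forests or gradient boosting, but the residualize-then-regress structure is the same):

```python
import random

random.seed(0)

def fit_line(xs, ys):
    """Ordinary least squares for y ≈ a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Synthetic deals: confounder x (say, team quality) drives both the
# "treatment" t (check size) and the outcome y (exit value).
THETA_TRUE = 2.0
data = []
for _ in range(4000):
    x = random.gauss(0, 1)
    t = 1.5 * x + random.gauss(0, 1)                   # treatment depends on x
    y = THETA_TRUE * t + 3.0 * x + random.gauss(0, 1)  # outcome depends on both
    data.append((x, t, y))

# Naive regression of y on t is confounded: it overstates the effect.
_, naive = fit_line([d[1] for d in data], [d[2] for d in data])

# Double ML, partially linear model with 2-fold cross-fitting:
# residualize t and y on x using models fit on the *other* fold,
# then regress outcome residuals on treatment residuals.
half = len(data) // 2
folds = [data[:half], data[half:]]
res_t, res_y = [], []
for i, fold in enumerate(folds):
    train = folds[1 - i]
    a_t, b_t = fit_line([d[0] for d in train], [d[1] for d in train])
    a_y, b_y = fit_line([d[0] for d in train], [d[2] for d in train])
    for x, t, y in fold:
        res_t.append(t - (a_t + b_t * x))
        res_y.append(y - (a_y + b_y * x))
_, theta_hat = fit_line(res_t, res_y)

print(round(naive, 2), round(theta_hat, 2))  # naive is biased; theta_hat ≈ 2
```

The naive regression inflates the effect because check size and outcome share a confounder; residualizing both on the confounder with cross-fitting recovers the true coefficient.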
Unlike the theoretical barriers to general AGI (which may require paradigm shifts), the barriers to an AGI VC are engineering problems with known solutions.
Hugo Duminil-Copin won the Fields Medal for his work on the mathematics of percolation: below a critical threshold, clusters stay small; above it, a giant component suddenly dominates. That's not a metaphor. It's a rigorous model of network effects.
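The threshold behavior is easy to see in simulation. A toy Erdős–Rényi random graph, where the giant component emerges once average degree crosses 1 (an illustration of the math, not a model of actual capital networks):

```python
import random

def largest_component_fraction(n, avg_degree, seed=0):
    """Build G(n, m) with m = avg_degree*n/2 random edges; return the
    fraction of nodes in the largest connected component."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(a):  # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    for _ in range(int(avg_degree * n / 2)):
        u, v = rng.randrange(n), rng.randrange(n)
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv

    sizes = {}
    for node in range(n):
        r = find(node)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n

# Below the threshold: clusters stay small. Above it: one giant cluster.
frac_sub = largest_component_fraction(5000, 0.5)
frac_super = largest_component_fraction(5000, 2.0)
print(round(frac_sub, 3), round(frac_super, 3))
```

With average degree 0.5 the largest cluster holds a sliver of the network; at 2.0, a single component swallows most of it. Nothing gradual happens in between.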
Hypothesis (not settled fact): once AGI-allocated capital crosses something like 15-25% of total VC AUM, network effects could create nonlinear disadvantage for human-only VCs in deal flow access and selection quality. Why? Because:
Algorithmic funds identify high-signal companies before they hit the traditional fundraising circuit. If you're a founder and a fund can produce a high-conviction term sheet on a dramatically shorter clock—with clear, inspectable reasoning—you take the meeting.
Network effects compound. The AGI with the best proprietary outcome data (rejected deals, partner notes, failed pivots) trains better models. That attracts better founders. Which generates better data. Repeat.
LPs will demand quantitative benchmarks. "Show me your out-of-sample precision vs. the AGI baseline" becomes table stakes. Funds that can't answer get cut.
The first AGI VC to hit 15% unicorn rates and 6-8x TVPI will trigger the cascade. My estimate: 2028-2029 for narrow domains (B2B SaaS seed deals), 2030-2032 for generalist funds. That's not decades—it's one fund cycle.
The AGI VC will systematically crush humans on sourcing, diligence, and statistical selection. What it won't replace—at least initially:
Founder trust and warm intros. Reputation still opens doors. An algorithm can't build years of relationship capital overnight.
Strategic support and crisis management. Board-level judgment calls, operational firefighting, ego management in founder conflicts—those require human nuance.
Novel situations outside the training distribution. Unprecedented technologies, regulatory black swans, geopolitical shocks. When there's no historical pattern to learn from, you need human synthesis.
VCs will bifurcate: algorithmic funds competing on data/modeling edge and speed, versus relationship boutiques offering founder services and accepting lower returns. The middle—firms that do neither exceptionally—will get squeezed out.
If you're building or managing a fund today, three moves matter:
1. Build proprietary outcome data now. The best training set isn't Crunchbase—it's your rejected deal flow with notes, your portfolio pivots, your failed companies' post-mortems. That's the moat external models can't replicate. Track every pitch, every IC decision, every update. Structure it for ML ingestion.
2. Instrument your decision process. Precommit to hypotheses ("We think founder X will succeed because Y"). Log the reasoning. Compare predicted vs. actual outcomes quarterly. This builds the feedback loop that lets you detect when your mental model is miscalibrated—and when an algorithm beats you.
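Once decisions are logged this way, the quarterly check is a few lines. A sketch using the Brier score on hypothetical logged probabilities (the data is invented; the mechanics are the point):

```python
# Each logged IC decision: (precommitted success probability, actual outcome)
log = [
    (0.70, 1), (0.70, 0), (0.60, 1), (0.55, 0),
    (0.40, 0), (0.30, 1), (0.25, 0), (0.20, 0),
    (0.15, 0), (0.10, 0), (0.80, 1), (0.50, 0),
]

def brier(records):
    """Mean squared error of probability forecasts: lower is better."""
    return sum((p - outcome) ** 2 for p, outcome in records) / len(records)

our_score = brier(log)

# Baseline: always predict the historical base rate.
base_rate = sum(o for _, o in log) / len(log)
baseline = brier([(base_rate, o) for _, o in log])

# If we don't beat the constant-base-rate forecaster, our stated
# convictions carry no information beyond the prior.
print(round(our_score, 3), round(baseline, 3))
```

Beating the base-rate forecaster is the minimum bar; the useful signal is watching the gap trend quarter over quarter.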
3. Segment where you add unique value vs. where you're replaceable. If your edge is "I know this space and can move fast," you're exposed. If it's "founders trust me in a crisis and I've navigated three pivots with them," you're defensible. Be honest about which deals came from relationship alpha versus statistical pattern-matching. Double down on the former; automate the latter.
In three years, when an AGI fund publishes live performance data showing 12-15% unicorn rates and 5-6x TVPI, the LP conversation changes overnight. Not because the technology is elegant—because the returns are real and the process is transparent.
That's the moment VCs have to answer: What alpha do we generate that a model can't? For many funds, the answer will be uncomfortable. For the best ones—the ones who've always known that determination, speed, and earned insight compound faster than credentials—it'll be clarifying.
The AGI VC era doesn't kill venture capital. It kills the pretense that average judgment plus a warm network equals outperformance. What's left is a smaller, sharper game where human edge has to be provable, not performative.
And if you can't articulate your edge in a sentence—quantifiably, with evidence—you're not competing with other humans anymore. You're competing with an algorithm that already sees your blind spots better than you do.
Yann LeCun, the Turing Award winner whose deep learning breakthroughs helped build today's GPU-fueled LLM machine, just walked away from it. He didn't retire. He didn't fade. He started a new company and said out loud: we've been optimizing the wrong problem.
That's not ego protection. That's credibility.
For three years, while Meta poured hundreds of billions into scaling language models, LeCun watched the returns flatten. Llama 4 was supposed to be the inflection point. Instead, the benchmarks were manipulated and the real-world performance was middling. Not because he lacked conviction—because he paid attention to what the data was actually saying.
His diagnosis: predicting the next token in language space isn't how intelligence works. A four-year-old processes more visual data in four years than all of GPT-4's training combined. Yet that child learns to navigate the physical world. Our LLMs can pass the bar exam but can't figure out if a ball will clear a fence.
The implication: we've been solving the wrong problem at massive scale.
Here's what makes this important for founders and investors: LeCun isn't alone. Ilya Sutskever left OpenAI making the same call. Gary Marcus has been saying it for years. The question isn't whether they're right—it's how to position when the entire industry is collectively getting less wrong, but slowly.
LeCun's answer is world models—systems that learn to predict and simulate physical reality, not language. Instead of tokens, predict future world states. Instead of chatbots, build systems that understand causality, physics, consequence.
Theoretically sound. Practically? Still fuzzy.
His JEPA architecture learns correlations in representation space, not causal relationships. Marcus, his longtime critic, correctly notes this: prediction of patterns is not understanding of causes. A system trained only on balls going up would learn that "up" is the natural law. It wouldn't understand gravity. Same correlation problem, new wrapper.
The real lesson isn't which architecture wins. It's that capital allocation is broken and about to correct.
Hundreds of billions flowed into scaling LLMs because the returns were obvious and fast—chips, cloud, closed APIs. The infrastructure calcified. Investors became trapped in the installed base. When the problem shifted from "scale faster" to "solve different," the entire system had inertia.
Now LeCun, with €500 million and Meta's partnership, is betting that world models will see traction faster than skeptics expect. Maybe he's right. Maybe the robotics industry, tired of neural networks that fail on novel environments, will actually deploy these systems. Maybe autonomous vehicles finally move because prediction of physical futures beats reactive pattern-matching.
Or maybe it takes a decade and world models remain research while LLMs compound their current dominance.
For founders: this is the opening. When paradigm-level uncertainty exists, the cost of hedging drops. Build toward physical understanding, not linguistic sophistication. Robotics, manufacturing, autonomous systems—these verticals benefit immediately from world models and can't be solved by bigger LLMs. That's your wedge.
What separates LeCun's move from ego-driven pivots: he didn't blame market conditions or bad luck. He said: "I was wrong about where to allocate effort, and here's why."
That transparency, that public course-correction without shame, changes how people bet on him.
The founders who win in 2026-2027 won't be the ones married to LLM scaling or world model purity. They'll be the ones who notice when reality diverges from the plan and move—fast, openly, without defensiveness.
LeCun just did that at scale.
The question isn't whether he's right about world models. It's whether his willingness to change publicly, with evidence, keeps him first-mover on whatever intelligence actually looks like next.
The sharper question is: if only a small fraction of companies generate most of the returns, can you afford to build anything that isn’t capable of becoming a global outlier?
Meta’s US$2+ billion acquisition of Manus—a company founded in Beijing, redomiciled in Singapore, and integrated into Meta’s AI stack in under a year—is not just a China‑US‑Singapore story. It’s a concrete example of how to design a company that can scale across borders, survive geopolitics, and be acquirable at speed.
Manus launched publicly around March 2025 with an AI agent that could autonomously research, code, and execute multi‑step workflows. Within roughly eight months it reportedly crossed US$100 million in ARR, reaching a US$125 million revenue run rate before Meta signed the deal.
Operationally, it:
Processed over 147 trillion tokens and supported tens of millions of “virtual computers” spun up by users, which only makes sense at global internet scale.
Ran as an orchestration and agent layer on top of multiple foundation models (including Anthropic and Alibaba’s Qwen), avoiding dependence on a single model provider.
On the corporate side, Manus:
Started in Wuhan and Beijing under Beijing Butterfly Effect Technology, with a mostly China‑based team.
Shifted its headquarters to Singapore in mid‑2025, moving leadership and critical operations out of Beijing.
Restructured so that, by the time Meta announced the acquisition, Chinese ownership and on‑the‑ground China operations would be fully unwound; the company committed to ceasing services in China.
Meta bought a product already scaled, a revenue engine compounding at nine‑figure ARR, and a structure that could clear US regulatory and political review.
Manus scaled in the wake of DeepSeek’s R1 moment, when a Chinese lab demonstrated frontier‑class performance at a fraction of Western compute budgets and shook confidence in US AI dominance. That moment accelerated a narrative where AI is treated as strategic infrastructure: tighter export controls, outbound investment restrictions on Chinese AI, and public scrutiny of anyone funding Chinese‑linked AI companies.
Benchmark’s US$75 million Series B in Manus was investigated under Washington’s new outbound regime and criticized as “funding the adversary.” Two details mattered:
Manus did not train its own foundation models; it built agents on top of existing ones, placing it in a less‑restricted category.
It was structured via Cayman and Singapore, with a stated pivot away from China.
Meta then finished the derisking: buying out Chinese shareholders, committing to end China operations, and framing Manus as a Singapore‑based AI business joining Meta.
For founders, the lesson is blunt: jurisdiction, ownership and market footprint now sit beside product and traction as first‑order design choices. They can’t be an afterthought if you want a strategic buyer.
The Manus story turns a vague ambition (“go global”) into specific requirements:
Handling 147 trillion tokens and millions of ephemeral environments was possible only because Manus was architected from day one to operate like a web‑scale SaaS, not a regional tool. As a founder, that means:
Cloud‑native design with serious observability and reliability.
Data and compliance posture that won’t collapse under US or EU due diligence.
Manus began in China but rapidly built a presence across Singapore, Tokyo and San Francisco, aligning product, sales and hiring with global customers and capital pools. Practically:
At least one founder or senior leader who has operated in major tech hubs.
Early design partners or users outside your home market.
Manus showed that unlocking a large exit might require:
Redomiciling to a neutral or “trusted” jurisdiction like Singapore.
Reworking the shareholder base to remove politically sensitive investors.
Exiting a big home market entirely, if that market blocks strategic buyers.
If your current structure makes those moves impossible or prohibitively expensive, your future options are already constrained.
Crossing US$100M ARR in under a year is only achievable if:
The problem you solve is universal.
Your pricing and packaging make sense for large customers in New York, Berlin or Tokyo, not just in your home market.
You can start with regional customers, but you should be honest about whether the 100th customer could be a global enterprise rather than just a better‑known local logo.
If you’re a founder in an emerging market post‑Manus, a simple self‑audit goes a long way:
If a Meta‑scale acquirer appeared in 12 months, what would break first—structure, regulation, or infra?
Make that list explicit. Those are not “later” issues anymore.
Could your current architecture handle a 100x increase in usage without a total rebuild?
If not, you’re placing an invisible ceiling on your own upside before power‑law dynamics can ever help you.
Do your first 10 hires and first 10 customers make expansion easier or harder?
Manus’ user base and team footprint made going beyond its origin market feel like scaling, not reinventing.
The Manus deal doesn’t suggest everyone will be bought for billions. It does show that markets are now rewarding teams that design for scale across borders, anticipate geopolitical friction, and stay acquirable.
If you’re serious about building something that matters, that’s the bar.
That's the wrong question, in the same way "How many alumni does this university have?" is the wrong question. The question is always: out of what total?
In 2024–2025, AI was graded on the easiest denominator available: best-case prompts, controlled conditions, with a human babysitter. In 2026, the denominator changes to: all the messy, real tasks done by normal people, under time pressure, with reputational and legal consequences.
This shift isn't coming from research labs. It's coming from the fact that AI is moving out of demos and into production systems where failure is expensive.
Founders love hearing "90% accuracy." Buyers do not.
Imagine an AI agent that helps a sales team by drafting and sending follow-up emails. It takes 10,000 actions/month (send, update CRM, schedule, etc.). A "pretty good" 99% success rate sounds elite—until you do the denominator math.
99% success on 10,000 actions = 100 failures/month.
If even 10 of those failures are "high-severity" (wrong recipient, wrong pricing, wrong attachment, embarrassing hallucination), that's not a product. That's a recurring incident program.
Now flip the requirement: if the business can tolerate, say, 1 serious incident/month, then the real bar isn't 99%. It might be 99.99% on the subset of actions that can cause damage (and a forced escalation path on everything uncertain). This is why "accuracy" is the wrong headline metric; the real metric is incidents per 1,000 actions, segmented by severity, plus time-to-detect and time-to-recover.
Most founders still pitch on accuracy. Smart buyers ask for the incident dashboard first.
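The denominator math above is simple enough to compute directly. A minimal sketch of the metric the text argues for, severity-weighted incidents per 1,000 actions; the severity weights and failure split are illustrative assumptions, not industry standards:

```python
# Sketch: incidents per 1,000 actions, weighted by severity, instead of a
# headline accuracy number. All figures are hypothetical, chosen to mirror
# the 10,000-actions / 99%-success example in the text.

def incident_rate_per_1000(failures_by_severity, total_actions, weights):
    """Severity-weighted incident rate per 1,000 actions."""
    weighted = sum(weights[sev] * n for sev, n in failures_by_severity.items())
    return 1000 * weighted / total_actions

total_actions = 10_000                      # agent actions per month
failures = {"low": 90, "high": 10}          # 99% success -> 100 failures
weights = {"low": 1, "high": 25}            # high-severity weighted 25x (assumed)

rate = incident_rate_per_1000(failures, total_actions, weights)
# 90*1 + 10*25 = 340 weighted incidents over 10,000 actions -> 34.0 per 1,000
```

The same 99% accuracy produces a very different dashboard depending on how the 100 failures skew, which is exactly why accuracy alone hides the risk.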
A founder ships an "autonomous support agent" into production for a mid-market SaaS. The demo crushes: it resolves tickets, updates the CRM, and drafts refunds. Two weeks later, the customer pauses rollout—not because the agent is dumb, but because it's unmeasured. No one can answer: "How often does it silently do the wrong thing?"
The agent handled 3,000 tickets, but three edge cases triggered a nasty pattern: it refunded the wrong plan tier twice and sent one confidently wrong policy explanation that got forwarded to legal. The customer doesn't ask for a bigger model. They ask for logging, evals, and hard controls: "Show me the error distribution, add an approval queue for refunds, and give me an incident dashboard."
The founder realizes the real product isn't "an agent." It's a managed system with guardrails and proof. Everything that came before was a science fair project.
The most valuable AI startups in 2026 won't win by shouting "state of the art." They'll win by making buying safe.
That means being able to say, quickly and credibly:
"Here's performance on your distribution (not our demo)."
"Here's what it does when uncertain: abstain, ask, escalate."
"Here's the weekly report: incident rate, severity mix, and top failure modes."
In other words, evaluation becomes the business model: trust, control, and accountability are what unlock budget.
Vendors who can't report these metrics weekly aren't ready for revenue. They're still playing.
"Agents" will keep getting marketed as autonomous employees. But founders who actually want revenue will build something more boring and more real:
Narrow scope (fewer actions, done reliably).
Hard permissions and budgets (prevent expensive mistakes).
Full observability (every action logged, queryable, auditable).
Explicit escalation paths (humans handle the tail risk).
When the denominator becomes "all actions in production," reliability and containment beat cleverness—every time. The vanity metric is "tickets touched." The real metric is "severity-weighted incident rate per 1,000 actions." Most founders optimize for the first. Smart ones optimize for the second.
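The four properties above can be collapsed into one guarded executor. This is a toy sketch, not any vendor's API: the action names, budget limit, and confidence threshold are all assumptions:

```python
# Minimal sketch of the "boring but real" agent wrapper: narrow scope,
# hard permissions and budgets, full logging, explicit escalation.
# All names and thresholds are illustrative assumptions.

ALLOWED_ACTIONS = {"send_email", "update_crm"}   # narrow scope
REFUND_LIMIT_USD = 50                            # hard budget
CONFIDENCE_FLOOR = 0.9                           # below this, a human decides

audit_log = []                                   # every action, queryable later

def execute(action, payload, confidence):
    """Run an agent action only if it passes scope, budget, and confidence gates."""
    entry = {"action": action, "payload": payload, "confidence": confidence}
    if action not in ALLOWED_ACTIONS:
        entry["outcome"] = "blocked_out_of_scope"
    elif payload.get("amount_usd", 0) > REFUND_LIMIT_USD:
        entry["outcome"] = "escalated_budget"
    elif confidence < CONFIDENCE_FLOOR:
        entry["outcome"] = "escalated_low_confidence"  # humans handle tail risk
    else:
        entry["outcome"] = "executed"
    audit_log.append(entry)                      # full observability
    return entry["outcome"]
```

Note that the severity-weighted incident metric falls straight out of `audit_log`: every blocked or escalated entry is a prevented incident you can count, segment, and report weekly.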
If someone claims "AI is transforming our customer's business," ask for one number:
"What percentage of their core workflows run with logged evals, measured incident rates, and defined escalation policies?"
If the answer is fuzzy, it's still a prototype. If it's precise and improving week-over-week, it's a product. If you can't report it, you can't scale it.
Most unicorn-founder university rankings are really school-size rankings. A more useful view is “conversion efficiency”: unicorn founders per plausible founder cohort, not per total living alumni.
Ilya Strebulaev’s published unicorn-founder-by-university counts are a strong numerator, but they are usually paired with the wrong denominator (“living alumni”). “Living alumni” mixes retirees (no longer founding) with very recent grads (not enough time to found and scale), which blurs the signal you actually care about.
Founder timelines make this mismatch obvious: unicorn founders skew toward founding in their 30s (average ~35; median ~33), and reaching unicorn status typically takes years after founding. So if the question is “which universities produce unicorn founders,” the denominator should reflect alumni who realistically had time to do it.
The adjustment is deliberately simple: keep the published founder counts, but replace “living alumni” with a working-age cohort proxy. Practically, that means estimating working-age alumni as roughly graduates from 1980–2015 (today’s ~30–65 year-olds), which aligns with the observed founder life cycle.
This doesn’t claim causality or “best university” status. It just separates ecosystem gravity (absolute founder counts) from conversion efficiency (founders per plausible founding cohort).
Metric: unicorn founders per 100,000 working-age alumni (estimated).
| Rank | University | Working-age alumni (est.) | Unicorn founders per 100k |
|---|---|---|---|
| 1 | Stanford | ~115,000 | 106 |
| 2 | MIT | ~85,000 | 102 |
| 3 | Harvard | ~200,000 | 36 |
| 4 | Yale | ~140,000 | 32 |
| 5 | Cornell | ~150,000 | 30 |
| 6 | Princeton | ~120,000 | 25 |
| 7 | UC Berkeley | ~270,000 | 22 |
| 8 | Tel Aviv University | ~110,000 | 15 |
| 9 | Columbia | ~170,000 | 14 |
| 10 | University of Pennsylvania | ~180,000 | 13 |
| 11 | University of Waterloo | ~130,000 | 8 |
Stanford and MIT converge at the top on efficiency (106 vs 102 per 100k), even though Stanford leads on absolute count. Harvard and Berkeley “drop” mainly because they are huge; normalization is doing its job by showing that volume and efficiency are different signals. International technical schools (e.g., Tel Aviv University, Waterloo) remain visible on a per-capita basis even without Silicon Valley’s capital density, which suggests institution-level culture and networks can matter even when geography doesn’t help.
For investors, this is actionable because it cleanly splits two sourcing heuristics: go where the gravity is (absolute counts), and also track where the conversion rate is high (cohort-adjusted efficiency). The dropout myth persists because anecdotes are easier to remember than denominators; the cohort denominator forces the analysis to match how unicorns are actually built over time.
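The normalization itself is a one-liner; all the work is in estimating the denominator. A sketch with two rows whose founder counts are illustrative placeholders chosen to be consistent with the table above, not Strebulaev's published figures:

```python
# Conversion efficiency: unicorn founders per 100,000 working-age alumni.
# Founder counts and alumni estimates below are illustrative placeholders.

def per_100k(founders, working_age_alumni):
    """Cohort-adjusted efficiency: founders per 100k working-age alumni."""
    return 100_000 * founders / working_age_alumni

# (founders, est. working-age alumni) -- hypothetical inputs
universities = {
    "Stanford": (122, 115_000),
    "Harvard": (72, 200_000),
}

efficiency = {name: round(per_100k(f, a)) for name, (f, a) in universities.items()}
# Same founder count over a larger cohort -> lower efficiency, which is
# exactly why volume leaders "drop" once you normalize.
```

Swapping in a different cohort window (say, 1985–2010 graduates) only changes the denominator estimate, so the ranking is easy to stress-test against the proxy choice.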
The most successful field in computer science right now is also the most anxious. You can feel it in Reddit threads, conference hallways, and DMs: something about how we do ML research is off. The pace is intoxicating, the progress is real—and yet the people building it are quietly asking, “Is this sustainable? Is this still science?”
That tension is the story: a field that went from scrappy outsider to global infrastructure so fast it never upgraded its operating system. Now the bugs are showing.
In theory, more research means more discovery. In practice, we’ve hit the point where conference submission graphs look like someone mis-set the y-axis. Flagship venues are drowning in tens of thousands of papers a year, forcing brutal early rejections and weird hacks to keep the system from collapsing.
From the outside, it looks like abundance. From the inside, it feels like spam. Authors optimize for “accepted somewhere, anywhere” instead of “is this result robust and useful?” Reviewers are buried. Organizers are pushed into warehouse logistics instead of deep curation. The whole thing starts to feel like a metrics game, not a knowledge engine.
When accepted papers with solid scores get dropped because there isn’t enough physical space at the venue, that’s not a nice problem to have. That’s a signal the model is mis-specified.
Meanwhile, a quieter crisis has been compounding: reproducibility. Code not released. Data not shared. Baselines mis-implemented. Benchmarks overfit. Half the field has a story about trying to re-run a “state of the art” paper and giving up after a week.
This isn’t just a paperwork problem. If others can’t reproduce your result:
No one knows if your idea generalizes.
Downstream work might be building on a mirage.
Real-world teams burn time and budget chasing ghosts.
As models move into medicine, finance, and public policy, “it sort of worked on this dataset in our lab” is not a pass. Trust in the science behind ML becomes a hard constraint, not a nice-to-have.
Zoom out, and a pattern appears: the system is rewarding the wrong things.
Novelty over reliability.
Benchmarks over messy, real problems.
Velocity over understanding.
The fastest way to survive in this game is to slice your work into as many publishable units as possible, push to every major conference, and pray the review lottery hits at least once. Deep, slow, high-risk ideas don’t fit neatly into that cadence.
And then there’s the talent flow. The best people are heavily pulled into industry labs with bigger checks and bigger GPUs. Academia becomes more about paper throughput on limited resources. The result: the people with the most time to think have the least compute, and the people with the most compute are often on product timelines. Misalignment everywhere.
Here’s the twist: this wave of self-critique is not a sign ML is dying. It’s a sign the immune system is finally kicking in.
Researchers are openly asking:
Are we publishing too much, learning too little?
Are our benchmarks telling us anything real?
Are we building tools that transfer beyond leaderboards into the world?
When people who benefit from the current system start calling it broken, pay attention. That’s not nihilism; that’s care. It’s a field realizing it grew up faster than its institutions did—and deciding to fix that before an AI winter or an external backlash does it for them.
If you strip away the institutional inertia, the fixes aren’t mysterious. They’re the research equivalent of “stop pretending the plan is working; start iterating on the process.”
Some levers worth pulling:
Less worship of novelty, more respect for rigor. Make “solid, careful, negative-result-rich” a first-class contribution, not a consolation prize.
Mandatory openness. If it can be open-sourced, it should be. Code, data, evaluation scripts. No artifacts, no big claims.
Different tracks, different values. Separate venues or tracks for (a) theory, (b) benchmarks, (c) applications. Judge each by the right metric instead of forcing everything through the same novelty filter.
Incentives that outlast a deadline. Promotion, funding, and prestige that factor in impact over time, not just conference logos on a CV.
None of this is romantic. It’s plumbing. But if you get the plumbing right, the next decade of ML feels very different: fewer hype cycles, fewer brittle “breakthroughs,” more compounding, reliable progress.
You can’t fix the whole ecosystem alone—but you can run a different local policy.
Treat your own beliefs like models: version them, stress-test them, deprecate them.
Aim for “someone else can reproduce this without emailing me” as a hard requirement, not an aspiration.
Choose questions that would matter even if they never hit a top-tier conference.
Remember that “I don’t know yet” and “we couldn’t replicate it” are signs of seriousness, not weakness.
Machine learning isn’t in crisis because it’s failing. It’s in crisis because it’s succeeding faster than its institutions can adapt. The people who will matter most in the next decade aren’t the ones who ride this wave blindly—they’re the ones who help the field course-correct in public, with less ego and more evidence.
World models are quietly transforming AI from text predictors into systems that understand and simulate the real world. Unlike large language models (LLMs) that predict the next word, world models build internal representations of how environments evolve over time and how actions change states. This leap from language to spatial intelligence promises to unlock AI capable of perceiving, reasoning, and interacting with complex 3D spaces.
Fei-Fei Li calls world models "the next frontier of AI," emphasizing spatial intelligence as essential for machines to see and act in the world. Yann LeCun echoes this urgency, arguing that learning accurate world models is key to human-level AI. His approach highlights the need for self-supervised learning architectures that predict world states in compressed representations rather than raw pixels, optimizing efficiency and generalization.
Leading efforts diverge into three camps. OpenAI’s Sora uses video generation transformers to simulate physical environments, showing emergent long-range coherence and object permanence, crucial for world simulation. Meta’s Joint Embedding Predictive Architecture (V-JEPA) models latent representations of videos and robotic interactions to reduce computational waste and improve reasoning. Fei-Fei Li’s World Labs blends multimodal inputs into spatially consistent, editable 3D worlds via Marble, targeting interactive virtual environment generation.
The commercial potential looks enormous. Over $2 billion was invested across 15+ world model startups in 2024, with some estimates valuing the full market north of $100 trillion if AI masters physical intelligence. Robotics leads near-term value: enabling robots to safely navigate unstructured environments requires world models to predict object interactions and plan multi-step tasks. NVIDIA’s Cosmos infrastructure accelerates physical AI training with synthetic photorealistic data, while companies like Skild AI have raised billions by building massive robotic interaction datasets.
Autonomous vehicles also tap world models to simulate traffic and rare scenarios at scale, cutting down expensive on-road tests and improving safety. Companies like Wayve and Waabi leverage virtual worlds for pre-labeling and scenario generation, critical in achieving full autonomy. Meanwhile, the gaming and entertainment sector is the most mature commercial playground, with startups using world models to generate dynamic game worlds and personalized content that attract millions of users almost overnight.
Specialized industrial applications—engineering simulations, healthcare, city planning—show clear revenue pathways with fewer competitors. PhysicsX’s quantum leap in simulation speed exemplifies how tailored world models can revolutionize verticals where traditional methods falter. Healthcare and urban planning stand to gain precision interventions and predictive modeling unparalleled by current AI.
The funding landscape reveals the importance of founder pedigree and scale. Fei-Fei Li’s World Labs hit unicorn status swiftly with $230 million raised, Luma AI secured $900 million Series C for supercluster-scale training, and Skild AI amassed over $1.5 billion focused on robotics. NVIDIA, while a supplier, remains a kingmaker, providing hardware, software, and foundational models as a platform layer—both opportunity and competition for startups.
Crucially, despite staggering investment, gaps abound—technical, commercial, and strategic. Training world models requires vast, complex multimodal datasets rarely available openly, creating defensive moats for data-rich startups. Models still struggle with physics accuracy, generalization to novel scenarios, and real-time performance needed for robotics or autonomous vehicles. Startups innovating around efficiency, transfer learning, sim-to-real gaps, and safety validation have outsized opportunities.
On the market front, vertical-specific solutions in healthcare, logistics, and defense are underserved, offering fertile ground for founders with domain expertise. Productizing world models requires bridging the gap from lab prototypes to robust, scalable deployments, including integration tooling and certification for safety-critical applications. Startups enabling high-fidelity synthetic data generation are becoming ecosystem enablers.
Strategically, founders must navigate open research—like Meta’s V-JEPA—and proprietary plays exemplified by World Labs. Standardization and interoperability remain open questions critical for ecosystem growth. Handling rare edge cases and ensuring reliable sim-to-real transfer are gating factors for robotic and autonomous systems.
For investors, the thesis is clear but nuanced. Robotics world models, vertical AI for high-value industries, infrastructure and tooling layers, and gaming are high-conviction bets offering a blend of risk and clear pathways to market. Foundational model companies with massive compute and data moats present risky but lucrative opportunities, demanding large capital and specialized talent. Efficiency, differentiated data, and agile product-market fit matter more than raw scale alone.
The next 24 months will crystallize market winners as world models shift from research curiosity to mission-critical AI infrastructure. Founders displaying relentless adaptability, technical depth, and deep domain insight will lead the charge. Investors who balance bets across foundation layers and vertical applications, while embracing geographic and stage diversity, stand to capture disproportionate value.
While the industry watches language models, the less flashy but more profound revolution is unfolding quietly in world models—systems that don’t just process language but build a mental map of reality itself. These systems will define the next era of AI, shaping how machines perceive, interact, and augment the physical world for decades.
That’s the state of play. The winners will be those who combine technical innovation with pragmatic business sense, and above all, a ruthlessly adaptive mindset to pivot rapidly as the frontier evolves.
Build global, or get boxed in. Singapore is an exceptional launchpad for deep tech—world-class research, predictable regulation, dense talent, and brand equity that travels—but the world won’t bend to our advantages unless the execution is ruthless, market-led, and globally capitalized. The playbook is simple to say, hard to do: prove your science is best-in-class, lock real customer pain with a sharp ICP, market like a category winner to reach specialist deep tech capital, and hire a killer commercial bench through a global search. Do these in parallel, not sequence.
Start with the truth: your research must actually be world-leading. Not locally/regionally excellent—globally defensible. Strong patent estates correlate with outlier outcomes because patents aren’t just legal armor; they are signals of technical scarcity, negotiation leverage, and acquisition currency. In Europe, deep tech unicorns carry dramatically larger patent portfolios than general tech peers, and the same pattern holds across AI hardware, robotics, and biotech. If your tech wins only in the lab, you don’t have a moat—you have a demo. File early and internationally via PCT, cover where competitors operate, and budget real money for freedom-to-operate and continuations; it’s the price of building in hard tech. Then pressure test the science in public: publish, present, and partner with tier‑one labs. NUS’s new co‑investment flywheel and Stanford collaboration are the right instincts—cross‑border validation tightens the BS filter and compounds credibility with buyers and investors.
Next, stop letting tech chase the market. The fastest way to die in deep tech is mistaking novelty for need. Traditional PMF heuristics mislead here; what you need is technology‑market fit: a specific workflow, buyer, and willingness‑to‑pay that your product makes meaningfully better under real constraints (regulatory, reliability, integration). Work the TRL stack with intent: at TRL 1–4, mine “earned secrets” from the field before you write code; at TRL 4–6, validate multi‑stakeholder adoption (clinical, compliance, procurement); at TRL 7–9, convert pilots into lighthouse accounts with signed commercial terms, not vibes. Precision beats ambition: define a sharp ICP (role, budget, system dependencies, success metric) and a wedge (one or two killer workflows) that lands ROI in <90 days. Remember the graveyard: Lilium and Arrival raised billions and still cratered—multi‑front innovation without a narrowed use-case and industrial discipline is how you burn years and trust.
Now the uncomfortable part: you must be loud—and surgical—about your story to attract the right capital. Southeast Asia doesn’t have enough patient deep tech funding to carry you through multiple cycles; winning requires a global investor map and a narrative that decodes risk for them. The good news: specialized capital is abundant and hunting—Europe alone pushed ~€15B into deep tech in 2024; AI is swallowing the lion’s share of global deal value; Switzerland allocates a majority of VC to deep tech. But visibility is earned. Use credibility magnets: international conferences and trade shows for global stage time, third‑party validation, and platform grants; institutional tie‑ups to signal momentum; university venture programs to anchor de‑risked spinouts. Ditch feature‑speak. Lead with outcomes: “cut false positives 30%, lifted yield 12%, reduced cost per cycle 40%,” tied to buyer P&L. Then make the moat explicit—IP, data exclusivity, regulatory posture, and integrations that raise rip‑and‑replace costs.
Talent is the force multiplier. Technical founders don’t have to become CROs—but someone elite must own revenue, sequencing, and global expansion. Industrial-grade deep tech fails not because of bad science, but because management, manufacturing, and GTM never catch up to the physics. Time the hires. Pre‑PMF: keep the founder selling; add a solutions lead who speaks both code and plant floor. Approaching scale: bring in a CRO/CCO with credible enterprise cycles in your domain; under ~$2M ARR, hiring a full CRO is usually premature—prove repeatability first. Don’t local‑shop the search. Run a retained, global process: firms with deep tech benches can screen for dual fluency (technical rigor + enterprise sales), access passive candidates, and de‑risk culture/comp plans across geos. Yes, it’s expensive. A bad executive hire costs more.
Finally, design for global from day one. Keep R&D in Singapore for cost, quality, and IP control; put GTM leadership where the buyers are. Hub‑and‑spoke works: a “customer obsession” pod in the U.S. or EU (seller, SE, product) translating field signal into roadmap; core science and data ops stay home for velocity and security. Start narrow, win deeply—one metro, one buyer, one killer workflow—then expand into adjacencies with proof and references. Use platform distribution early (hyperscaler co‑sell, OEMs, integrators) to compress sales cycles and credibility debt. Make momentum visible: case studies, third‑party benchmarks, security certifications in flight.
The meta‑skill that ties it all together: adaptive speed. Ego last, evidence first. Install kill‑switch metrics. Run red‑team reviews monthly. Update the narrative when reality changes. Global winners aren’t the ones who never miss; they’re the ones who correct in public, recruit ahead of the curve, and keep the bar on science and outcomes where the world can see it.
Singapore gives you the runway. The world sets the bar. Make the science undeniable, the market signal unmistakable, the capital global, and the team formidable.
Expanding a foreign AI startup into the United States isn’t a simple market entry—it’s a strategic reset across technology, capital, talent, and culture. America remains the highest-leverage arena for AI due to capital concentration, enterprise buyer expectations, and dense technical ecosystems. Winning requires timing the move, structuring the team for speed, adapting GTM and messaging to regional realities, and embracing a founder-level transformation in pace, network-building, and resilience.
Why America Is Non‑Negotiable
Capital and customers: The U.S. is the center of gravity for AI venture funding, hyperscaler partnerships, and enterprise buyers. Credible U.S. logos and references dramatically compress later sales cycles and open capital markets.
Ecosystem density: Proximity to foundation model players, chip vendors, cloud platforms, and AI research institutions accelerates product velocity, partnerships, and hiring.
Validation effect: Traction in the U.S. resets global narrative—investors and top-tier talent treat it as proof of technical maturity, security readiness, and buyer fit.
Right Timing and Entry Models
Three viable timing archetypes work in AI:
Parallel launch: Establish U.S. presence from day one if you have defensible tech, deep capital, and founders with cross-Atlantic networks. Best for infra, platforms, and frontier research where partner access is decisive.
Stage-and-scale: Prove product-market fit at home, then expand within 12–18 months to avoid losing ground to well-funded competitors. Best for vertical AI SaaS with clear ROI and repeatable workflows.
HQ shift: Keep R&D near home to control costs while relocating go-to-market leadership (and a founder) to the U.S. This combines cost leverage with in‑market credibility and speed.
De-risk the first year with a hybrid approach: validate via remote selling, but add targeted founder presence, lighthouse customers, and one high-signal event strategy to compound network and credibility. Choose initial geography by buyer cluster: Bay Area for infra and early adopters, New York for finance and regulated sectors, Seattle for cloud-aligned infra, and Boston for healthcare and enterprise R&D.
Team and Talent: Build for Scarcity
AI talent markets in the U.S. are brutally competitive, and compensation at leading labs is out of reach for most startups. Win by design, not by price:
Hire for builders, not résumés: Prioritize ambiguity operators who can ship, integrate with customers, and write the early playbook over big‑company titles.
Credibility magnets: A respected Head of Research or VP Engineering in-market can 10x recruiting by signaling technical bar and network access.
Hub-and-spoke structure: Keep core research, data, and model optimization in home base; embed a U.S. “customer obsession” pod of 3–7 (sales, solutions/product engineer, GTM lead) to translate field signal into roadmap.
Equity that means something: Make equity grants real by raising enough to fund compute, data, and a two-year runway; otherwise top talent will default to hyperscalers or unicorns.
GTM in America: Localized, Outcome-Led
The U.S. is a continent of distinct markets. Treating it as one leads to generic messaging and long, leaky pipelines.
Start narrow, win deeply: Pick one metro and buyer persona. Land lighthouse accounts with a sharp wedge (1–2 killer workflows) before expanding horizontally.
Speak in outcomes: Replace “state-of-the-art model” with “reduced cycle time 60%, cut error rate 15%, lowered cost per ticket by 40%.” Proof beats promise.
Compete on specificity: Don’t claim “better than OpenAI.” Claim lower latency for retrieval-heavy tasks, superior accuracy on domain benchmarks, cheaper inference at target throughput, or superior safety/compliance for a regulated workflow.
Modern sales stack: Run AI-native GTM—eval-first demos, ROI calculators, automated sequencing, and tight RevOps. Show buyers your own AI transforms operations; it’s a credibility check as much as efficiency.
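An outcome-led pitch travels farther when the ROI math is explicit. Here is a minimal Python sketch of the kind of ROI calculator the stack above implies; every input (ticket volume, cost per ticket, the 40% reduction, the contract value) is a hypothetical placeholder, not a benchmark:

```python
# Illustrative ROI calculator for an outcome-led pitch.
# All inputs are hypothetical placeholders, not real benchmarks.
def annual_roi(tickets_per_year: int,
               baseline_cost_per_ticket: float,
               cost_reduction_pct: float,
               annual_contract_value: float) -> dict:
    """Translate a claimed efficiency gain into CFO-friendly numbers."""
    baseline_spend = tickets_per_year * baseline_cost_per_ticket
    savings = baseline_spend * cost_reduction_pct
    net_benefit = savings - annual_contract_value
    return {
        "baseline_spend": baseline_spend,
        "annual_savings": savings,
        "net_benefit": net_benefit,
        "roi_multiple": savings / annual_contract_value,
        "payback_months": 12 * annual_contract_value / savings,
    }

# Example: 100k tickets/yr at $8 each, a 40% cost-per-ticket reduction,
# weighed against a $120k annual contract.
result = annual_roi(100_000, 8.0, 0.40, 120_000)
print(result)
```

The point is not precision; it is handing the buyer numbers they can interrogate — annual savings, net benefit, and payback in months — instead of a model claim.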
Regulation and Trust: Turn Burden into Advantage
While U.S. policy is lighter than the EU’s, enterprise buyers still demand rigorous governance. Institutionalize trust:
Data governance and provenance: Document sources, licenses, lineage, and retention. Make red-teaming, evals, and post-deployment monitoring routine.
Security posture early: SOC 2 Type II, SSO/SCIM, audit logging, and granular RBAC move deals forward—especially in finance, healthcare, and public sector.
Responsible AI by design: Bias testing, explainability artifacts, and human‑in‑the‑loop workflows reduce legal risk and accelerate procurement.
Capital Strategy: Signal Defensibility
Funding is abundant but concentrated. Differentiate with:
Clear technical moat: Proprietary data advantage, specialized eval harnesses, or infra cost/latency superiority that compounds with usage.
ROI evidence, not anecdotes: Quantified outcomes with named or referenceable customers, before-and-after unit economics, and cohort retention.
Strategic alignment: Cloud credits and co-sell motion with hyperscalers, plus distribution through ecosystems (marketplaces, app stores, model hubs).
Milestone-efficient use of capital: Show disciplined compute spend, model selection pragmatism, and a path to gross margin improvement as workloads scale.
Founder Transformation: What Changes in You
Pace and decisions: Embrace faster cycles, partial information, and decisive iteration. American buyers expect momentum; indecision kills trust.
Network as a system: Design weekly loops across investors, partners, customers, and founder peers. Relationships are pipelines for learning, talent, and distribution.
Narrative discipline: Evolve from technical exposition to business storytelling—pain, outcome, proof, next step. Repeatable narrative scales sales and recruiting.
Personal resilience: Relocation, time zones, and cultural friction are real. Build routines, peer support, and a leadership bench to avoid single‑point founder failure.
A 12-Month Expansion Blueprint
Months 0–3: Founder in-market 50%+, define ICP and narrow wedge, secure 10 design partners, stand up trust and security basics, hire first U.S. seller and solutions engineer.
Months 4–6: Convert 3–5 lighthouse customers, publish ROI case studies and benchmark results, get SOC 2 in flight, integrate with one hyperscaler co-sell track.
Months 7–9: Add marketing lead, formalize ABM, expand to second metro or adjacent vertical with lookalike pain, tighten pricing and packaging around outcomes.
Months 10–12: Shore up post‑sales and adoption playbooks, raise extension or Series A/B with quantified ROI, defensibility narrative, and early net revenue retention proof.
Winning America as a foreign AI startup is a high-variance but tractable path: time the move off real PMF, anchor in one metro and buyer, hire builders and a credibility magnet, operationalize trust, and make outcomes the product. With disciplined focus and founder presence, the U.S. can convert your technical advantage into durable market power.
Copying isn’t laziness; it’s leverage in a region where timing, localization, and distribution matter more than novelty—and in 2026 the bar rises because bigger VC funds with thicker dry powder need bigger outcomes to matter. If you want their capital, your market must credibly support 20x fund-level math, which points founders toward fintech rails, enterprise automation layers, and infrastructure adjacencies—not boutique tools. Ship, test, refactor in public; trade pride for progress, then let the numbers do the storytelling.
Copycat models derisk PMF by importing proof and focusing founder energy on the delta that actually wins in Southeast Asia: payments, trust, language, and logistics. Grab beating Uber wasn’t an accident—it was ruthless adaptation to cash, motorbikes, and superapp workflows that locals wanted and incumbents ignored. In a capital-constrained cycle, imitation with local innovation outperforms “original but unproven” because it speeds revenue, compresses R&D, and clarifies the acquisition path.
Voice AI for BPO and CX: latency, accuracy, and tooling are production-ready; the Philippines, Indonesia, and Vietnam give you the world’s richest deployment labs if you integrate with WhatsApp/LINE, CRMs, and payments on day one.
Fintech AI rails: fraud, underwriting, collections, and identity across QRIS, PayNow, and alternative data—banks and wallets pay for measurable lift in approvals and loss rates.
Vertical AI SaaS where TAM is regional, not national: maritime logistics, construction supply chains, and specialty retail where workflows are messy and incumbents will partner or buy rather than rebuild.
Healthcare AI as B2B infra: diagnostics, triage, and claims tooling licensed to hospitals, insurers, and telehealth networks—not consumer apps that fight CAC gravity.
Mega-funds with the majority of remaining dry powder need multi-billion outcomes; they can’t underwrite niche wins, no matter how elegant the product. Your TAM math should start regional (ID-VN-PH-TH-SG), assume conservative penetration, and still pencil to $500M–$1B ARR potential within a decade, or it won’t clear an IC with real deployment goals. If it tops out sub-$300M ARR, design for profitability and secondary liquidity—not hypergrowth fantasy.
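To make the fund-math constraint concrete, here is an illustrative bottoms-up pencil in Python; every account count and ACV below is a made-up placeholder, and the only claim is the arithmetic an investment committee will run:

```python
# Hypothetical bottoms-up TAM pencil for a regional (ID-VN-PH-TH-SG) B2B play.
# All figures are placeholders; swap in your own before drawing conclusions.
markets = {
    # country: (addressable accounts, achievable ACV in USD)
    "ID": (60_000, 50_000),
    "VN": (25_000, 24_000),
    "PH": (20_000, 30_000),
    "TH": (15_000, 40_000),
    "SG": (6_000, 100_000),
}
penetration_10yr = 0.10  # conservative share of addressable accounts

tam = sum(accounts * acv for accounts, acv in markets.values())
arr_potential = tam * penetration_10yr

print(f"Regional TAM: ${tam / 1e9:.2f}B")
print(f"10-yr ARR potential at {penetration_10yr:.0%} penetration: "
      f"${arr_potential / 1e6:.0f}M")
```

Run it with your own numbers and see which side of the line you land on: past $500M ARR under conservative penetration, you can pitch hypergrowth; below it, the passage’s advice applies — design for profitability and secondary liquidity.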
Assume you’re not first; DBS, Grab, Sea, OCBC, and Singtel are scaling hundreds of AI use cases across fraud, personalization, routing, and ops, with budgets, data, and distribution you can’t match. That’s the constraint, not the complaint: either become their specialized infrastructure layer, or own a segment they structurally under-serve because it’s too fragmented for their cost structure. Translate swagger into formidability—clear problem, fast deployment, coachable but decisive—and you’ll get the meeting and the pilot.
Build a moat that compounds: proprietary data (underserved segments), deep integrations (painful to rip out), regulatory posture (licenses, sandboxes), or network effects that raise switching costs.
Price to win the P&L: deliver a 20–30% cost or revenue delta that a CFO can defend, then lock in via workflow, SLAs, and co-developed roadmaps.
Operate with cognitive flexibility: disagree-and-commit, red-team reviews, kill-switch metrics, and public refactors—ego last, evidence first.
In this market, getting to cash-flow breakeven in 24–36 months unlocks optionality: secondary tenders for early investors and team liquidity without forcing a sale or IPO. The secondary flywheel is now mainstream—Ramp and Deel ran sizable employee and early-investor sales in 2025—and disciplined Southeast Asian B2B winners can do the same once unit economics and disclosure hygiene are in place. Think “return some capital early, keep upside later”—that’s how you de-risk the journey while compounding toward a platform outcome.
Back operators who can articulate earned secrets from customer trenches, change their minds in real time, and show velocity from decision to deployment without drama or defensiveness. Filter by markets where incumbents validate demand but leave white space; prefer adaptation over imitation, infra over apps, and cash-efficient go-to-market over CAC-heavy plays. Underwrite three paths to liquidity on day one: strategic M&A, secondary programs at scale, or a credible route to public markets when the revenue mix and governance are ready.
Copy boldly, adapt locally, and compete where your advantage compounds—then course‑correct in public until the path is obvious to everyone else. In Southeast Asia’s 2026 cycle, fundable founders marry mission to flexibility, profitability to secondary optionality, and infrastructure thinking to customer P&L obsession. Originality is optional; relevance, speed, and evidence are not.
Smart people aren’t the ones who never miss. They’re the ones who course-correct quickly and publicly—without ego, without shame. In startups, that’s not a personality quirk; it’s a survival trait. The founders who win treat beliefs like code: ship, test, refactor. They trade pride for progress.
Intellectual humility is recognizing the limits of your knowledge and staying open to revision. It’s not meekness; it’s precision. People high in this trait seek disconfirming evidence, separate ideas from identity, and reduce polarization by engaging disagreeing views with curiosity. In other words, they learn faster than the average operator and make fewer repeat mistakes. That’s what you want in a founder.
The Valley’s mantra works when practiced as hypothesis-driven execution: commit firmly, update rapidly. In reality, it often degenerates into performative certainty at the top and learned helplessness below. The corrective isn’t weaker convictions; it’s cognitive flexibility—the ability to hold multiple hypotheses, switch frames, and pivot when feedback demands it. Flexibility is a multiplier on determination.
Great founders blend relentlessness with replaceable beliefs. The pattern investors respond to isn’t swagger; it’s formidability—justified confidence backed by velocity and judgment.
What VCs actually screen for:
Clarity: simple, sharp articulation of the problem and why now.
Determination: bias to action; speed from decision to deployment.
Coachability: engages hard feedback without defensiveness.
Adaptability: knows when to persist and when to pivot.
Trustworthiness: transparent with bad news; consistent character.
Determination beats raw IQ at the early stage. But determination without flexibility calcifies into fragility. The outliers show both.
The best-known successes weren’t born perfect. Slack emerged from a failed game. Instagram was a bloated check-in app shed down to photos. YouTube went from video dating to everything video. Each team noticed reality diverging from the plan and moved—fast. The common thread wasn’t omniscience; it was egoless correction.
Two decision models worth copying:
Disagree and commit: When conviction outruns consensus, make the call, align the team, and execute at full power. It preserves speed without demanding certainty. Afterward, measure, learn, and be willing to reverse.
Idea meritocracy: Make reasoning inspectable. Weight input by demonstrated competence, not rank. Reward people for surfacing better ideas—even when it stings. This builds trust and improves hit rate over time.
Both models institutionalize a simple ethic: ego last, evidence first.
In founders, that ethic shows up as observable signals:
They change their mind in real time when presented with better data—and tell you exactly why.
They surface clear “earned secrets” from the customer trenches, not abstract market takes.
They narrate past failures as upgraded beliefs, not blamed circumstances.
They move effortlessly between 10-year vision and this-quarter KPI mechanics.
They ask for the intro they need tomorrow and already have a plan if it doesn’t land.
To operationalize that flexibility inside the company:
Install a kill-switch: predefine metrics that trigger a pivot or sunsetting.
Run red-team reviews: schedule a monthly “why we’re wrong” session led by a dissenter.
Track decision memos: hypothesis, evidence, decision, outcome, lesson. Close the loop.
Ban absolute language in analysis. Replace certainty with probability.
Make “I was wrong” a badge. Reward it publicly.
Intelligence, in startups, is adaptive speed. It’s the compounding edge of learning faster than the problem changes. The fundable founder isn’t married to a plan; they’re married to the mission, ruthless about the path, and shameless about updating beliefs. They don’t need to be right on day one. They need to get less wrong every week—and let everyone see them do it.
If you’re raising kids in Asia, the default operating system is discipline, duty, and deference to authority. It produces astonishing focus, world-class test scores, and an instinct for precision. If you’re raising kids in America, the OS is independence, speaking up, and pushing back. It produces boldness, restless energy, and a bias for action even before all the facts are in. Both systems work—just in different games.
Asia optimizes for compounding: deep practice, mastery, and incremental improvement. That’s why your phone is built better every year and ships on time. America optimizes for leaps: questioning the premise, trying the crazy thing first, and accepting that failure is tuition. That’s why a half-broken prototype becomes the next platform.
Upbringing maps to outcomes. In much of Asia, respect is earned by doing the hard things quietly and perfectly. The classroom is orderly, the bar is high, and teachers are the authority. In America, respect is earned by the idea that survives hard questions. The classroom is messy, the bar is movable, and teachers are facilitators. One teaches you to get it right. The other teaches you to ask if “it” should exist.
Here’s the uncomfortable truth: innovation needs both. Big ideas without execution die in pitch decks. Perfect execution without big ideas dies in commodity margins.
Why America still leads in tech
Statistical advantage: with 330 million people and a magnet for global talent, the U.S. gets more “weird, wired, and willing” clusters per square mile. You only need a few dozen exceptional teams each decade to reset the curve.
Cultural compounding: immigrants bring divergence in training, taste, and tactics. Collision among unlike minds is a feature, not a bug. New combinations are where breakthroughs hide.
Capital and cadence: venture risk appetite rewards non-consensus, time-compressed bets. The ecosystem knows how to finance ambiguity, tolerate pivots, and recruit talent around narratives.
Permission structure: questioning authority isn’t rebellion; it’s due diligence. “Show me the data” and “ship and learn” are default settings. Failure is an iteration, not a verdict.
What Asia—and China—do exceptionally well
Relentless upgrade cycles: process excellence, quality control, and supply-chain orchestration turn ideas into things, at scale, fast. When the brief is clear, Asia delivers beyond spec.
System-level deployment: once a technology crosses the adoption threshold, diffusion is breathtaking—payments, logistics, EVs, robotics. Implementation is a superpower.
Talent density in fundamentals: math, memory, and method are assets. When the problem is compute, materials, or manufacturing, this foundation matters more than vibes.
So will Asia (or China) lead?
Depends what “lead” means. If it’s deployment speed, industrialization of new tech, or squeezing inefficiency out of complex systems, Asia is already there. If it’s net-new categories that reorganize markets and culture, America still holds the edge. The constraint in many Asian systems isn’t intelligence or effort—it’s permission. Breakthroughs need room to offend the present in service of the future.
That said, the slope is changing. Where education reforms emphasize creativity alongside rigor, where capital tolerates earlier risk, and where founders are celebrated for original thought (not only flawless execution), you get a hybrid engine. Singapore nudges this way. Korea and Japan are unlocking more open innovation. Parts of China’s AI ecosystem show that constraints can provoke creative architecture and ruthless efficiency. When discipline meets dissent—watch out.
What parents can actually do
Set a two-key system: high standards and high voice. Demand the work; reward the question. Make “Why?” as mandatory as “Done.”
Normalize experiments: small bets, short loops, honest postmortems. Treat failure as a data asset. Curiosity compounds when it’s safe to be wrong.
Teach debate and build: have kids argue both sides, then prototype the best idea. Thinking scales when it connects to making.
Rotate environments: mix structured drills with unstructured exploration. Master scales; then improvise. Both muscles need reps.
What schools and founders can actually do
Institutions: keep rigor on fundamentals, but grade for original thought. Assess not only correctness but novelty and clarity of reasoning.
Investors: fund non-consensus founders earlier. Underwrite learning velocity, not just traction. Create space for deep tech timelines.
Teams: build culturally diverse rooms with explicit debate norms. Protect dissent; punish cynicism; reward candor.
Where the next decade goes
The frontier belongs to hybrids. The cultures, companies, and countries that fuse Asia’s discipline with America’s audacity will outrun both archetypes. Genius is rare. Hard work is common. The asymmetric advantage is the system that consistently combines them—at scale.
So raise kids who can sit still long enough to master the hard thing—and stand up fast enough to challenge the sacred cow. Teach them to finish—and to start. The future doesn’t pick sides. It rewards the synthesis.
I've spent years pattern-matching across startups, digging through founder trajectories, and watching ecosystems evolve. But nothing crystallizes the proximity advantage quite like watching the current AI wave unfold in San Francisco. If you're an AI founder operating outside the Bay Area right now, I'll cut to the chase: you're likely working with information that's 6 to 12 months behind what the top practitioners already know. That's not speculation—it's a measurable information lag that shows up in research adoption patterns, model access timelines, and the velocity of knowledge transfer through dense networks.
Let me explain why this matters and what you can do about it.
The "you become who you surround yourself with" principle isn't motivational poster material—it's backed by serious research. Studies tracking thousands of people show that simply sitting next to someone increases friendship probability from 15% to 22%. Harvard psychologist David McClelland put a number on it: the people you habitually associate with determine as much as 95% of your success or failure.
In tech, this compounds fast. Behaviors spread through networks like viruses—when everyone around you is raising big rounds and thinking in 10x terms, that recalibrates your entire operating system. When your coffee-shop neighbor just closed a Series A and your gym buddy is scaling to 100 engineers, mediocrity stops being an option.
Here's where it gets concrete. Analysis of AI research publications from 2000 to 2010 revealed that China's research topics systematically lagged the U.S. by several years. Despite massive investment and eventually matching publication volume, China's choice of research topics more closely resembled what the U.S. was working on in previous years than in the current year.
This isn't about capability—it's about information flow architecture. The U.S., and specifically the Bay Area, sets the agenda. Everyone else follows with delay.
Now layer on the insider advantage. OpenAI and Anthropic provide early access to new models for select groups—sometimes 6 to 12 months before public release. Recent reports indicate OpenAI employees have been testing GPT-5 capabilities internally while the rest of the world is still optimizing for GPT-4. Anthropic runs similar beta programs with hand-picked customers—GitLab, Midjourney, Menlo Ventures—who get to build on capabilities that won't be widely available for months.
Translation: while you're reading the release notes, insiders already shipped v2 of the thing you're just starting to prototype.
Earlier this year I attended YC's S25 Demo Day at their Dogpatch HQ in San Francisco. One minute, one slide, 150+ companies—most of them AI-native. What struck me wasn't just the quality (though 92% being AI-focused is wild). It was the velocity of information exchange.
Between pitches, I grabbed tacos from the food trucks and ended up in a 10-minute conversation with a founder who casually mentioned they'd been testing an unreleased model variant for two months. Another founder referenced a research technique I wouldn't see published until weeks later. These weren't secrets—they were just the baseline of what's considered "current" when you're embedded in the ecosystem.
That's the gap. It's not dramatic; it's cumulative and silent.
The numbers back up the anecdote. San Francisco pulled in over $29 billion in AI venture funding in the first half of 2025 alone—more than double the previous year and vastly outpacing every other city globally. Nearly 50% of all Big Tech engineers and 27% of startup engineers live in the Bay Area. OpenAI signed 500,000 square feet of office space and is hunting for more. Anthropic, Pika, Character.AI, and dozens of unicorns operate within walking distance.
This isn't just about talent density—it's about information flow velocity. One AI observer I follow mentioned attending an average of three AI-focused events per week in the Valley. Monthly Silicon Valley meetups on GenAI, LLMs, and agents pack rooms with founders, researchers, and VCs. That's 12+ high-bandwidth information exchanges per month, multiplied across thousands of participants. Lu.ma and X.com are bibles around here.
Knowledge doesn't spread through press releases—it spreads through repeated, high-trust, face-to-face interactions. The famous Allen Curve documents it empirically: communication frequency drops exponentially with distance. In practice, this means a 30-minute coffee in SF with someone from DeepMind or Anthropic can shift your product roadmap in ways a dozen Zoom calls never will.
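That distance decay can be sketched numerically. The exponential form and the constants below are illustrative assumptions, not Allen's fitted data; the shape of the curve, not the specific numbers, is the point:

```python
import math

# Stylized Allen-curve sketch: probability of at least weekly communication
# as a function of distance between desks. The exponential form, the base
# rate p0, and the 10 m decay scale are illustrative assumptions.
def weekly_comm_prob(distance_m: float,
                     p0: float = 0.25,
                     scale_m: float = 10.0) -> float:
    """p0 at zero distance, decaying exponentially with a ~10 m scale."""
    return p0 * math.exp(-distance_m / scale_m)

# Same pod, same floor, same building, remote:
for d in (2, 10, 30, 5000):
    print(f"{d:>5} m: {weekly_comm_prob(d):.4f}")
```

Under any parameters with this shape, the probability of spontaneous contact at remote distances is effectively zero — which is the whole argument for being in the room.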
Here's something that doesn't show up in funding announcements but matters enormously: the speed at which you can solve technical and operational problems in SF is orders of magnitude faster than anywhere else.
The Bay Area concentrates 35% of all AI engineers in the United States—Seattle, the second-densest hub, has only 23%. But it's not just the raw numbers; it's the depth and diversity of expertise. Your angel investors aren't just capital allocators—many are former CTOs who've debugged distributed systems at scale. Your advisors have shipped ML models in production at Google, Meta, or Anthropic. Your neighbor in the coworking space solved the exact infrastructure bottleneck you're hitting right now.
I've seen this play out repeatedly. A founder hits a gnarly RLHF training issue on Thursday afternoon, texts an advisor who used to run safety at a LLM unicorn, and by Friday morning has three potential solutions plus an intro to someone at Hugging Face who's dealt with the exact edge case. That 18-hour turnaround doesn't exist in other ecosystems—not because the expertise doesn't exist elsewhere, but because the density and accessibility of that expertise is unmatched.
Corporate VCs operating in the Bay now provide active mentorship, market access, and infrastructure resources beyond just checks. When you're stuck on a technical decision—whether to fine-tune versus RAG, how to architect your agent orchestration layer, which inference provider to use—you're not Googling or posting in Discord. You're texting someone who's already made that exact decision at scale and lived with the consequences.
The operational side is equally compressed. Hiring your first head of sales? Your investors can intro you to three candidates by Monday who've scaled GTM at AI companies. Need to navigate SOC 2 compliance? Someone in your YC batch just went through it and will walk you through the checklist over coffee. Fundraising strategy? Your advisor literally closed a $50M Series B last quarter and knows exactly what metrics Sequoia is asking for right now.
This isn't networking—it's operational infrastructure disguised as relationships. And it only works at this velocity when everyone is physically close enough for spontaneous problem-solving.
The migration patterns tell the story. AI founders from Canada, Europe, Asia, and Latin America are relocating to SF in unprecedented numbers. Indian VCs like Elevation Capital and Peak XV are opening SF offices specifically to stay close to AI developments.
Emmanuel Martes moved his fintech startup from Bogotá to San Francisco and captured it perfectly: "Everywhere else, you're a weird person who wants to start a company. Here, everyone is building."
Ben Su, a Canadian entrepreneur building an AI lawyer, explained his move: "We're hitting the ceiling in Canada, and the mecca of the startup world is in San Francisco." While Canada raised less than $5 billion across all startups, the Bay Area alone pulled $27+ billion in AI funding.
The NYT recently profiled the wave of 20-something founders flooding SF—many dropping out of MIT, Georgetown, and Stanford specifically to be in the city during the AI boom. Jaspar Carmichael-Jack moved to SF, built Artisan AI, and scaled it past a $35 million valuation. Brendan Foody left Georgetown at 19, raised millions for Delv, and is now hiring dozens in the Arena district near OpenAI's headquarters. These founders didn't just relocate geographically—they relocated into an operational support system that accelerates everything.
The pattern is clear: ambitious founders are voting with their feet because proximity compresses time.
When you're in SF, several things happen simultaneously:
Early Model Access: Companies like OpenAI run early access programs for safety researchers and select partners. If you're local and networked, you're in the room when capabilities get previewed. That's a 6-12 month product development advantage over teams working with publicly available tools.
Conference Intel: Major AI conferences like NeurIPS and ICML create informal knowledge-sharing loops around presentations and workshops. Attendees get advance briefings, hallway demos, and pre-publication insights that never make it into the proceedings. Being there in person means absorbing trends months before they're documented.
Talent Movement Signals: When a key DeepMind researcher joins Anthropic or an OpenAI engineer spins out a new company, the implications are immediately obvious to insiders. You hear about pivots, technical breakthroughs, and capability jumps through informal networks before they're announced publicly.
VC Intelligence Networks: Bay Area VCs don't just write checks—they aggregate intelligence across dozens of portfolio companies. When Sequoia or a16z share pattern recognition about emerging trends, they're synthesizing confidential data from hundreds of startups. That intelligence doesn't exist in other ecosystems.
Face-to-Face Conversion Rates: Research shows face-to-face requests are 34 times more successful than email. For fundraising, recruiting, and partnerships, being in the room isn't a nice-to-have—it's the difference between a warm intro and a cold outbound.
Expert Problem-Solving Speed: With 50% of Big Tech engineers concentrated in the Bay Area, the time between "we're stuck" and "here's how to fix it" collapses from weeks to hours. Your advisors and investors aren't just cheerleaders—they're active debugging partners who've already solved your exact problem.
While remote work democratized access to global talent, it also revealed the irreplaceable value of physical proximity. Virtual collaboration tools cannot replicate the spontaneous interactions that drive innovation. The most breakthrough ideas often emerge from unplanned conversations—the coffee shop encounter that becomes a partnership, the hackathon that spawns a unicorn, the demo day that attracts unexpected investors.
77% of employees who work remotely show increased productivity in routine tasks, but innovation requires the serendipity that only physical proximity provides. When groundbreaking AI research is being discussed in San Francisco coffee shops and exclusive invite-only dinners, remote founders miss critical insights and opportunities.
The knowledge lag compounds over time. Research shows that informal knowledge-sharing mechanisms—professional networking events, mentorship programs, and casual interactions—are critical drivers of AI innovation. When these interactions are geographically concentrated, outsiders operate with systematically outdated information that affects fundamental business decisions.
More critically, when you hit a technical wall at 11pm and need someone who's debugged transformer architecture issues at production scale, the difference between texting an advisor two blocks away versus posting in a Slack channel with global time zones can mean the difference between shipping Monday or next month.
The playbook isn't complicated, but it requires commitment:
Establish a Physical Presence: You don't need to move your entire team overnight, but having founders and key decision-makers in SF for sustained periods is non-negotiable. Aim for 6-12 week sprints aligned to model release cycles, major conferences, and fundraising windows.
Default to In-Person for High-Stakes Interactions: Investor pitches, lighthouse customer meetings, senior IC recruiting—do these face-to-face whenever possible. The conversion delta compounds over quarters.
Build a Local Advisory Lattice of Technical Operators: Surround yourself with practitioners who are one hop from frontier labs, leading research groups, or policy/safety desks. Prioritize advisors and angels who've actually built and scaled AI systems in production—their ability to help you debug architectural decisions or navigate technical tradeoffs in real-time is worth more than their capital.
Prioritize Networks Over Newsfeeds: The most valuable information never hits TechCrunch. It spreads through meetups, invite-only dinners, hackathons, and coffee chats. Treat your calendar like an operating system—weekly office hours with VCs, monthly customer deep-dives, quarterly recalibrations based on new model capabilities.
Weaponize Geographic Proximity for Speed: When you're blocked on a technical decision, use the density advantage. Text an advisor, grab coffee with a portfolio founder who's been there, or walk into an investor's office with your laptop open. The 18-hour problem-solving loop only exists in SF.
The proximity principle that governs friendships also governs information access and problem-solving velocity, and in AI, both timing and execution speed are everything. San Francisco remains the highest-signal, highest-leverage surface area in the world for AI—not because of weather or culture, but because knowledge flows 6-12 months ahead of everywhere else and technical problem-solving happens at 10x speed.
If you're serious about building a category-defining AI company, the math is simple: surround yourself with the best, plug into the densest information networks, compress the feedback loop between idea and execution, and tap into the collective technical expertise that can unblock you in hours instead of weeks. That happens in one place right now, and it's not over Zoom.
The future belongs to founders bold enough to position themselves at its center. For AI in 2025, that center is unquestionably San Francisco—where proximity to greatness, exclusive access to tomorrow's breakthroughs, and instant access to world-class problem-solvers become the catalyst for extraordinary achievement.
Multiple indicators confirm we're in speculative territory. AI startups now trade at 50-70x revenue multiples, while the sector captures 50% of all VC dollars—mirroring dot-com peak concentration. Most telling: AI companies spent $50 billion on Nvidia chips but generated only $3 billion in revenue in 2023, creating a staggering 17:1 investment-to-revenue ratio.
Even industry leaders acknowledge the excess. OpenAI's Sam Altman explicitly warned that "investors are overexcited about AI," while Oracle's recent $10+ billion bond issuance to fund AI infrastructure exemplifies the arms race dynamics driving unsustainable spending.
Unlike the dot-com era's purely speculative companies, today's AI leaders generate substantial cash flow. Enterprise adoption jumped from 55% to 78% between 2023 and 2024, with measurable productivity gains: 25% speed increases and 40% quality improvements across knowledge work.
The $7 trillion projected investment in data centers through 2030 reflects genuine infrastructure needs, not speculation. Leading AI companies like Nvidia reported 70% year-over-year growth from real customer demand, creating profitable revenue streams that didn't exist during previous bubbles.
Horizon 1 (0-18 months): Quality Focus
VCs should implement defensive positioning—backing only AI companies with strong ARR growth, a credible GTM strategy, and proven unit economics. The sweet spot is vertical AI applications capturing 80% of traditional SaaS annual contract values while avoiding foundation model companies requiring $100+ million in compute costs.
Horizon 2 (18-36 months): Consolidation Plays
Focus on application-layer dominance rather than infrastructure. Prioritise companies with defensible moats—proprietary data, network effects, or deep integration lock-in. Prepare for the consolidation wave as weaker competitors fail, creating acquisition opportunities.
Horizon 3 (3-5 years): Next-Wave Technologies
Position for agentic AI, physical robotics, and edge computing as the market matures beyond current foundation model limitations.
Historical patterns suggest correction timing within 6-18 months of peak warning signs. Given current indicator convergence—extreme valuations, performance gaps, and industry warnings—expect significant turbulence beginning late 2025 through mid-2026.
However, this correction may be more selective than previous crashes. Companies with real revenue and proven business models could weather the storm, while speculative players face 50-80% value destruction.
The AI bubble is real, but its resolution will likely be messier and more uneven than clean historical parallels. Smart capital should prepare for significant market turbulence while recognizing that underlying AI transformation remains genuine—current valuations and expectations just need dramatic recalibration.
Success will belong to those who combine conviction in AI's potential with disciplined evaluation of business fundamentals, positioning themselves to capitalize on post-correction opportunities when quality assets become available at reasonable prices. We saw this play out in 2021; tread carefully.
I've been knee-deep in the startup world for years—founding companies, advising founders, crunching data science models to spot patterns before they become trends. But nothing quite prepared me for my first Y Combinator Demo Day in 2025. Held at their sleek new offices in San Francisco's Dogpatch neighborhood, it was a masterclass in production value: seamless tech, high energy, and an atmosphere that screamed "this is where the future gets built." If you're wondering why YC continues to dominate, let me break it down from an insider's view—complete with the chaos, the connections, and the signals that have me more excited than I've been in years.
Picture this: a full day of back-to-back presentations, each founder getting exactly one minute and one slide to pitch their life's work. Breaks were strategically timed for networking, with food trucks parked for breakfast and lunch—think gourmet tacos and craft coffee fuelling deal talks under the California sun. The energy was electric, a mix of nervous excitement from founders and calculated intensity from investors. YC's Dogpatch HQ felt like a tech temple: modern, spacious, and designed for serendipitous collisions. No wonder they pulled off a production this polished; it wasn't just an event, it was an ecosystem accelerator on steroids.
Shoutout to the unsung heroes—the technical support team behind the Demo Day investor portal. When a few of us hit demo day account access glitches (hey, even the best tech has hiccups), they resolved them in real-time, mid-event. That's the kind of operational excellence that keeps the machine running smoothly.
What struck me most were the founders themselves. This batch skewed young—seriously young. I'd estimate 15-20% were college dropouts or fresh grads who'd already racked up impressive feats since high school: building apps that scaled to millions, hacking together AI prototypes in dorm rooms, publishing AI research papers or launching side hustles that caught YC's eye. At most, I spotted three with MBAs; the rest were deeply technical, often with backgrounds in engineering, CS, or data science. These aren't polished executives—they're builders who code first and pitch second.
Take the companies from the S25 graduates list I reviewed: teams like BootLoop (firmware in minutes via AI) or Janet AI (an AI-native Jira alternative) exemplify this. Founders aren't waiting for permission; they're leveraging AI to solve real problems in dev tools and beyond. It's refreshing—no corporate fluff, just raw innovation from people who grew up with GitHub as their playground.
One subtle tell? Look at their company URLs. Domains like phases.ai, getlilac.com, or bootloop.ai aren't flashy .coms or clever wordplays—they're straightforward, often incorporating "AI" directly. It screams product focus over marketing polish. These founders prioritize building something that works over snagging the perfect brand name or premium URL. In an era where AI lets you prototype in days, that no-nonsense approach is a competitive edge.
The investor crowd was a who's who of tech and beyond. I spotted comedian Hannibal Buress scanning pitches with a notebook in hand, and Ty Montgomery—the former NFL player and Stanford alum—deep in conversation about deals. It wasn't just VCs; it was a melting pot of cultural influencers, athletes, and operators who see startups as the next big bet. That diversity amps up the energy—suddenly, a quick chat over lunch could lead to a celebrity endorsement or a strategic partnership.
For me, the real magic was in the outreach. Out of the 150+ pitches, I reached out to 20-30% to express interest in chatting further. That's by far my highest engagement rate in years. Why? The format forces clarity: one minute, one slide strips away the noise, letting the core idea shine. Many pitches hooked me instantly—they're not just building products; they're redefining categories with AI at the core.
Personally, I think YC is about to pull even further ahead. Their network is unmatched: a global web of alumni, investors, and experts that feels like they're living in the future. What I do with machine learning—spotting patterns in markets, predicting startup successes—they're embodying it daily through relentless iteration and founder support. This batch's AI dominance (90% of pitches were AI-centric, per reports) isn't hype; it's a signal. We're seeing agentic systems, voice AI, and edge computing solve real problems in healthcare, fintech, and dev tools.
Three quick provocations for anyone in the startup game:
Embrace the Youth Wave: If 20% of top founders are fresh out of school, rethink your hiring. Technical depth plus unjaded ambition is the new superpower.
Network Like It's Lunch: Events like this prove serendipity scales. Skip the formal meetings; real deals happen over food trucks.
Bet on Clarity: In a world of infinite prototypes, the winners distill complexity into one slide. That's the AI-era moat.
Demo Day Summer 2025 wasn't just an event—it was a glimpse into tomorrow. If this is YC's trajectory, count me in for the ride.
Quick Summary:
The surface-level take: Andrew is simply highlighting an operational inefficiency that smart teams will optimize away.
The deeper read: He's accidentally identified the next phase of startup competitive advantage in the AI era—and most people are going to get it completely wrong.
Andrew's math is seductive but incomplete. Yes, prototypes that once required "six engineers three months" can now be weekend projects. But the obsession with prototype velocity misses a more fundamental shift: the economics of validation have changed, not just the mechanics of building.
When your prototype-to-feedback cycle compresses from weeks to days, you don't just need faster product decisions—you need fundamentally different validation strategies. The Valley's current playbook (build → measure → learn → iterate) assumes validation is the scarce resource. But what happens when building becomes nearly free and validation remains expensive?
Singapore's startup ecosystem offers a useful parallel. During our early 2010s acceleration phase, government grants and accelerator programs suddenly made seed capital abundant. Teams that optimized purely for fundraising speed got crushed by those who built systematic approaches to customer validation and product-market fit. Speed without direction is just expensive wandering.
Andrew floated a fascinating data point: some teams now propose PM-to-engineer ratios of 1:0.5—more product managers than engineers. This sounds like classic Silicon Valley logic: identify the bottleneck, throw resources at it, declare victory.
Except we've seen this movie before. Remember when "growth hacking" was the bottleneck? Or when "data science" was going to unlock everything? The pattern is predictable:
Phase 1: Shortage creates premium roles
Phase 2: Market floods with mediocre practitioners
Phase 3: Only exceptional talent differentiates
Phase 4: Back to fundamentals
The PM arms race will follow this exact trajectory. By 2027, every AI startup will have hired multiple "customer empathy experts" and "rapid product decision specialists." The real alpha will belong to teams that solved the underlying problem rather than optimizing for the symptom.
The most revealing quote wasn't about PM ratios or weekend prototypes. It was this: teams are "increasingly relying on gut" to make faster decisions.
That's not a bug. That's the feature.
In markets moving at AI speed, systematic data collection often arrives too late to matter. The teams winning in the hyper-competitive super-app landscape aren't the ones with the best analytics—they're the ones whose founders have developed the most accurate intuitive models of their markets.
Andrew mentions this obliquely when he talks about PMs needing "deep customer empathy" and the ability to "synthesize lots of signals". But he's underselling the insight. What he's describing isn't product management—it's market sensing as a competitive moat.
Here is how this trend actually plays out:
The speed differential between building and validating creates systematic pressure to develop intuitive market models. Teams that can rapidly synthesize weak signals (user behavior, competitor moves, ecosystem shifts) will consistently out-maneuver those dependent on formal research cycles.
Tactical shift: Instead of hiring more PMs, invest in customer advisory relationships and systematic founder-market exposure. Y Combinator's "get out of the building" mantra remains true—but now it needs to happen at AI speed.
When building costs approach zero, the optimal strategy shifts from perfecting one solution to systematically exploring solution spaces. This requires different metrics, different team structures, and different capital allocation frameworks.
Policy implication: Government innovation programs designed around linear "TRL progression" will systematically miss this shift. Singapore's NRF should consider evergreen exploration grants that reward systematic market sensing rather than just technical milestones.
The teams with the best customer access networks will compound their advantage as build-validate cycles accelerate. This suggests ecosystem strategies matter more, not less, in the AI era.
Andrew Ng identified a real problem. The issue isn't that product management needs to speed up to match engineering—it's that traditional product management becomes less relevant when the cost structure of innovation fundamentally changes.
The winning teams won't hire more PMs. They'll develop systematic approaches to market sensing that operate at AI speed. They'll treat customer intimacy as infrastructure, not a departmental function.
The real bottleneck isn't product management. It's developing market judgment that operates at the speed of artificial intelligence.
Lately, I've been fielding questions about venture studios – those ambitious setups that aim to manufacture startups at scale. On paper, they look revolutionary: higher success rates, juicier IRRs, and a systematic approach to innovation. But as someone who's evaluated countless models for our portfolio, I can tell you the reality often falls short, especially when viewing them as an asset class for serious fund returns. Today, let's unpack the evidence why most don't work, and the rare formula that makes some succeed. This isn't armchair theory; it's grounded in data, case studies, and insights from my own deal flow.
Venture studios burst onto the scene promising to fix VC's biggest pains: inconsistent dealflow, high failure rates, and inefficient capital deployment. The stats are eye-catching – studio-backed companies reach Series A 72% of the time versus 42% for traditional startups, with IRRs hitting 53% compared to 21.3% norms. They shave timelines too, getting to Series A in about 25 months instead of 56. I've seen pitches where studios position themselves as "startup factories," and honestly, who wouldn't want that in their portfolio?
But here's the rub: most flame out within 24 months. That's not random bad luck; it's baked into the model. In my experience reviewing studio proposals, the hype masks deep structural flaws that make them a risky bet for LPs seeking scalable, reliable returns. Let's break it down with the evidence.
From stakeholder wars to capital headaches, these issues aren't edge cases – they're systemic.
Traditional VCs have it simple: LPs provide capital, GPs pick winners, founders execute. Studios? They must satisfy entrepreneurs (who want autonomy and upside), studio staff (needing comp and growth), follow-on investors (demanding clean terms), and LPs (chasing returns). It's a recipe for conflict. I've passed on studios where entrepreneurs felt like cogs in a machine, leading to talent flight and diluted innovation. Data shows this tension torpedoes governance and decision-making.
Studios tout "hands-on support" as their secret sauce, but the numbers don't add up. Launch one company a week with 100 staff? That's barely two full-timers per venture – hardly the deep operational help promised. Scale up, and quality tanks; stay small, and you can't deploy enough capital for fund-level returns. I've seen this play out in SEA studios that overextend, ending up as glorified consultancies rather than return generators.
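The staffing arithmetic above is worth making explicit. A minimal sketch, using the quoted figures (one launch a week, 100 staff); the twelve-month support window is my assumption for illustration, not studio data:

```python
# Illustrative capacity check for the staffing claim above.
# Launch cadence and headcount come from the text; the 12-month
# hands-on support window per venture is an assumption.
staff = 100
launches_per_year = 52   # "one company a week"
support_months = 12      # assumed hands-on support per venture

# Ventures in active support at any moment = launch rate x support duration
active_ventures = launches_per_year * support_months / 12
staff_per_venture = staff / active_ventures
print(f"{staff_per_venture:.1f} full-timers per active venture")  # ~1.9
```

Stretch the support window to two years and the number halves again — which is the quality-versus-scale bind the paragraph describes.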
Evidence from 2020-2024 reveals traditional VCs are 1.6x more likely to close funds than studios. Why? Complex valuations create the "valuation trap," higher fees scare LPs, and perceptions of "dead equity" deter later rounds. Add longer timelines and regulatory ambiguities (fund or operating company?), and you're burning cash before deploying it. In my due diligence, this often signals execution risk – markets shift, and the studio's left holding the bag.
For big allocators like pensions or endowments, studios are a tough fit. Most cap at sub-$200M funds, too small for meaningful commitments, and their sector-specific focus creates concentration risks. Metrics suffer from survivorship bias too – we only hear about winners, not the ideation failures. Corporate studios? Even worse, often becoming "innovation theater" bogged down by parent company bureaucracy and mismatched incentives.
Not every studio crashes and burns. From my vantage point, the successes – think specialized operators in niches like fintech or healthtech – share a rigorous playbook. Here's the evidence-backed essentials, scored by their impact based on research and my observations:
Strategic Focus & Specialization (9.2/10): Ditch the generalist approach; vertical-agnostic studios succeed only 19% of the time. Winners leverage proprietary edges like data access or industry networks – crucial in emerging markets like SEA.
Operational Excellence & Proven Playbooks (9.0/10): Codified processes for everything from ideation to scaling. This includes stage-gates that validate ideas early, reducing waste and accelerating paths to revenue.
Significant Ownership Stakes (8.8/10): Holding 30-50% equity justifies the ops investment and captures value from solid exits, not just unicorns.
Experienced Team & Leadership (8.7/10): A mix of serial entrepreneurs, functional experts, and investors. Weak teams are a red flag in my evals – you need proven builders to navigate the chaos.
Quality Control Systems (8.6/10): Strict validation thresholds for market fit, tech viability, and business models. This kills duds fast and preserves capital.
Proper Governance & Alignment (8.5/10): Dual structures separating ops from investing, with clear equity rules to align all stakeholders.
Adequate Capitalization (8.3/10): Patient, reserved funding – balance sheet style, not skimping on ops budgets.
Market-First Approach (7.9/10): Prioritize customer validation over building; it's the best defense against vanity projects.
Implement all eight? You've got a shot at outsized returns. Skimp on any, and you're statistically doomed.
Venture studios aren't the VC extinction event some predict – they're a niche tool, better suited to hands-on operators than broad institutional plays. The evidence shows their flaws often outweigh the upsides for funds chasing portfolio-level returns, but in the right hands, they can be game-changers, especially in underserved regions.
If you're building or backing one, stress-test against these factors.
Just back from a whirlwind of meetings in Singapore, and I couldn't ignore this MIT report that's been blowing up my feed. As someone who's invested in dozens of AI startups through Golden Gate Ventures, I've seen the hype cycle firsthand. But this study? It's a gut check for anyone betting big on AI. Let's break it down, startup-style—because if you're building or funding in this space, these insights could save you millions.
The MIT NANDA report dropped like a bomb: 95% of AI pilot projects fail to deliver any real financial uplift. Yeah, you read that right. They looked at 300 projects, chatted with 150 execs, and surveyed 350 employees. The result? Most AI initiatives are burning cash without moving the needle on profits.
Investors freaked out—Nvidia, Microsoft, and others took a hit in the markets. But hold up: this isn't Altman saying public AI stocks are bubbly (though he did call out private startups). And it's not an indictment of the tech itself. As the report points out, the real issue is how companies are using AI, not the AI models themselves.
NANDA—short for Networked Agents and Decentralized AI—is an MIT project pushing for better AI architectures. Full disclosure: they might have skin in the game, promoting agentic systems as the fix. But their findings align with what I've seen in the field.
Key takeaways from the report:
Failure Isn't About Capability: Execs blame weak models, but the data shows it's a "learning gap." Organizations don't know how to embed AI into workflows. Wharton prof Ethan Mollick nails it: stop forcing AI into old processes shaped by bureaucracy. Let it redefine how work gets done.
Startups vs. Corporates: New companies crush it because they lack entrenched systems. If you're a startup founder, this is your edge—build AI-native from day one.
Buy > Build: Vendor solutions succeed 67% of the time; internal builds? Only 33%. I've advised portfolio companies on this: unless you're in a hyper-regulated space, don't reinvent the wheel. Focus on your core IP.
Wrong Focus Areas: Too many pour money into marketing/sales AI. The real ROI? Back-end automation that cuts costs. Think ops efficiency over flashy demos.
This echoes other studies—Capgemini saw 88% of pilots flop in 2023, and S&P Global noted 42% abandoned this year. It's not new, but it's getting worse as hype outpaces execution.
The pattern? Smart integration and realistic goals. Don't treat AI like a magic wand—it's a tool that needs the right setup.
From the report and my experience:
Workflow Redesign is Key: Experiment relentlessly. One of our portfolio companies pivoted from generic chatbots to agentic systems that automate entire processes—ROI jumped 3x.
Data Privacy Isn't an Excuse: Regulated industries hide behind "build internal" for control, but vendors often handle this better. Pick partners wisely.
Measure What Matters: Track financial savings, not just "AI usage." The report slams vague metrics—get specific on P&L impact.
Oh, and shoutout to Ethan Mollick again: AI shines when you let it bypass office politics. Startups, this is your superpower.
Look, investor panic is real—shares tanked on headlines alone. But this report isn't doom and gloom. It's a wake-up call that AI's impact is coming, just not how most expect. We're in the trough of disillusionment (Gartner hype cycle, anyone?), but the slope of enlightenment follows.
For founders: Focus on agentic AI that scales autonomously. NANDA's pushing this, and it aligns with what DeepSeek's doing in China—efficient models that compete with OpenAI at a fraction of the cost.
For investors: Don't bail yet. The trillions in data center spend Altman predicts? It's happening, but winners will be those solving real problems, not chasing buzz.
If you're building an AI startup, heed this: 95% failure rate is a feature, not a bug—it's your opportunity to be the 5%. Nail integration, buy smart, automate the boring stuff, and measure ruthlessly.
The AI revolution isn't slowing—it's evolving. China restricting Nvidia sales? That's just accelerating local innovation. Google's Pixel AI features? Table stakes now.
Stay sharp, folks. If you're pitching AI to VCs like me, show how you'll beat these odds.
A timely Stanford study by Prof. Chuck Eesley and co-authors puts fresh data behind this hunch. Their five-year investigation of a major Chinese university's 1985 credit-system reform shows that when students gain genuine freedom to mix and match electives, their likelihood of launching a venture jumps by 159%. A companion Stanford news feature distills two decades of Eesley's work into a single takeaway: universities evolve into engines of innovation only when three forces converge—flexible curricula, dense mentorship networks, and sustained government capital.
Below are three quick provocations for Singapore, filtered through the lens of our ecosystem’s strengths (capital efficiency, policy agility) and blind spots (risk appetite, uneven social networks).
Eesley’s data echo what every founder-turned-angel already knows: the spark rarely comes from a standalone “Entrepreneurship 101” module. It’s the serendipity of a chemistry major stumbling into a design-thinking elective, or a CS undergrad hacking on a public-policy capstone, that breeds venture-scale insight.
Yet our local universities still ration electives like C-class shares. NTU’s Renaissance Engineering Programme is a solid start, but most faculties lock freshmen into inflexible tracks by Year 2. The Stanford study suggests a simpler KPI than churning out more incubators: track the percentage of undergrads who cross faculty lines for at least 20 credits. Shift that dial, and you shift the venture funnel.
Eesley’s earlier work at Stanford’s STVP shows that alumni connections don’t just open doors—they reduce failure rates by steering novice founders toward better decisions. Singapore’s talent BBQ pits (NUS Overseas Colleges, YC-style accelerators) are making progress, but the mentor pool skews toward repeat founders in SaaS and fintech.
We need a broader bench. Imagine pairing deep-tech PhDs with ex-Glaxo scientists who’ve navigated FDA hell, or matching sustainability founders with Keppel veterans fluent in industrial sales cycles. Creating this “mesh network” of expertise is social infrastructure work—slow, unsexy, but catalytic.
The Project 985 case in China underlines why targeted, multi-decade funding transforms universities into venture flywheels. Singapore’s NRF investments tick many of those boxes, yet grant timelines often clash with the 8-10-year runway deep-tech firms require.
Two tweaks could unlock outsized returns:
Evergreen Proof-of-Concept Funds – rolling grants that follow the researcher, not the calendar year.
IP-Light Licensing – allow spin-offs to keep more equity upfront in exchange for revenue share post-exit, reducing the handicap that first-time founders face when negotiating Series A valuations.
Stanford’s success wasn’t pre-ordained by geography; it was engineered through decades of policy bets that privileged autonomy, mentorship, and patient capital. Singapore’s ecosystem has the hardware—capital, connectivity, credibility. The next leap demands firmware upgrades inside our universities: give students room to roam, surround them with operators, and keep the funding horizon long.
Do that, and the next Grab-scale story might just begin in a seminar room overlooking Clementi rather than Sand Hill Road.
Based on "University Education Reform and Entrepreneurship," Eesley et al., SSRN 1884493.
“The rise of universities as engines of innovation,” Stanford News, 18 Aug 2025.
Here's a number that should keep every policymaker awake at night: Singapore university spin-offs raise an average of $400,000. Stanford's raise $72 million.
That's not a rounding error. It's a 180x gap that exposes everything wrong with how we think about university commercialization.
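The 180x figure follows directly from the two averages quoted. A quick check on the arithmetic:

```python
# The headline multiple computed from the two averages quoted above.
stanford_avg_raise = 72_000_000   # average raise, Stanford spin-off (USD)
singapore_avg_raise = 400_000     # average raise, Singapore spin-off (USD)

gap = stanford_avg_raise / singapore_avg_raise
print(f"{gap:.0f}x gap")  # prints "180x gap"
```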
The Uncomfortable Truth About University Spin-Offs
Don't get me wrong—Singapore's universities aren't broken. NUS and NTU have done exactly what they were asked to do: nurture 400+ teams, create jobs, tick the KPI boxes.
But here's the brutal reality: when it comes to building companies that Silicon Valley VCs actually want to fund, we're not even playing the same sport.
Looking at the past decade (2015-2025), the numbers tell a stark story:
Spin-outs Created: Stanford (~250), MIT (~260), NUS+NTU (~180)
Total Capital Raised: Stanford (~$180B), MIT (~$42B), NUS+NTU (~$80M)
Notable Exits over $100M: Stanford (~90), MIT (~32), NUS+NTU (4)
This isn't just a Singapore problem. Globally, university spin-offs have raised $158B+ across 8,042 investments over the past decade, but the US dominates with over 40% of all deals³. Most importantly, this isn't about research quality—Singapore's universities produce world-class science. The gap isn't just about money—it's about fundamentally different approaches to what constitutes success.
Why One Size Doesn't Fit All: The Case for Dual Tracks
Here's what Singapore gets right: the National Research Foundation's focus on talent development and job creation has built a solid foundation. Over 400 teams have gone through GRIP, generating meaningful economic contributions through local employment¹. In fact, recognizing this momentum, Singapore just launched the $50M National GRIP program in 2024, combining NUS and NTU efforts to support 300 startups by 2028⁴.
Here's what we're missing: *one size fits nobody*.
Stop trying to make every spin-off fit the same mold. Instead, create two completely different tracks:
Track 1: The Capability Building Track - Keep doing what we're doing. Nurture teams, create employment, satisfy the NRF mandate. Zero changes needed here.
Track 2: The Venture Track - A completely separate pathway for the 5-10% of spin-offs that could actually become global companies. Different rules, different standards, different outcomes.
The Venture Track: Where World-Class Standards Begin
If we're serious about competing with Stanford and MIT, we need to acknowledge an uncomfortable truth: *this will not be easy*. If building venture-backable university spin-offs were straightforward, every university in the world would have cracked the code.
The venture track demands three non-negotiables:
Rigorous Selection Criteria
Not every spin-off belongs here. We need brutal honesty about market size, technological differentiation, and global scalability potential.
World-Class Pitch Development
This is where we separate serious contenders from academic projects. Every venture-track spin-off must develop investor materials that exceed the standards expected in Silicon Valley and London's Golden Triangle. No exceptions, no "good enough for Asia" compromises. This means:
- Deep market analysis that rivals what the best investment and consulting firms can produce
- IP strategies crafted by patent attorneys with global experience
- Go-to-market plans built by people who've identified, invested in, and scaled billion-dollar global companies
- Businesses and technologies that the best investors in the world would actually fund
Elite Advisory Networks
We cannot build this with good intentions and local expertise alone. We need the best people in the world—Silicon Valley operators, deep-tech investors, successful entrepreneurs who've built billion-dollar companies.
Learning from the Best: What Stanford, MIT, and Berkeley Do Differently
Stanford's StartX doesn't just provide mentorship—it plugs spin-offs directly into Silicon Valley's funding ecosystem. MIT's The Engine combines academic rigor with commercial discipline specifically for tough-tech ventures. UC Berkeley's SkyDeck leverages deep industry partnerships to drive real traction².
The proof is in the results.
UC Berkeley SkyDeck's Advisory Impact: SuperAnnotate, a computer vision startup, went through SkyDeck in 2019. Through the program's 300+ advisor network, they connected with Stanford professors and prominent figures in their field, raising a $14.5M Series A within two years. The founders specifically credited SkyDeck's advisor connections for helping them "crystallize their story and mission."
During COVID, MindfulGarden leveraged SkyDeck's virtual advisory network and achieved remarkable results: $44.8M in venture funding, 5x factory expansion, and 50+ new hires. As their founder noted: "Their knowledge base and connections are unlike anything we've had access to before."
MIT The Engine's Tough-Tech Focus: The Engine specifically targets "tough-tech" ventures requiring patient capital and deep expertise. Commonwealth Fusion Systems, spun out of MIT's Plasma Science and Fusion Center, has raised over $50M from strategic investors like Eni to commercialize fusion energy. Boston Metal, developing zero-emission steel production through molten oxide electrolysis, represents the kind of transformative industrial technology The Engine champions. Quaise Energy, working on geothermal drilling using gyrotron technology, exemplifies how The Engine connects MIT's cutting-edge research with commercial applications.
Stanford's HIT Fund has deployed capital across 100+ portfolio companies spanning life sciences to sustainability⁵.
Singapore must adopt these models wholesale—not adapt them. Being number one in Asia isn't good enough when we're competing with global leaders who attract international capital. NUS's Overseas Colleges program, particularly the Silicon Valley hub, should become mandatory for venture-track teams. If we want world-class results, we need world-class standards from day one, not local variations.
A Call to Arms: Singapore's Ecosystem Must Step Up
Building venture-backable spin-offs requires more than university resources. It demands our entire ecosystem—and that means you.
If you're an investor: We need your deal flow insights and due diligence expertise to help select and prepare venture-track companies.
If you're a successful entrepreneur: Your battle-tested knowledge of what actually works in global markets is invaluable for pitch development and strategy.
If you're a corporate leader: Your understanding of real market needs and partnership opportunities can make the difference between academic curiosity and commercial viability.
If you're a service provider (legal, accounting, consulting): World-class spin-offs need world-class support infrastructure.
The Path Forward: Concrete Next Steps
This isn't a theoretical exercise. Here's how we start:
1. Establish the Venture Track Selection Committee - Form a panel of successful entrepreneurs, VCs, and industry experts to identify genuine global opportunities among current and future spin-offs. Involve them early in the process.
2. Create the Pitch Development Academy - Build a 6-month intensive program where venture-track teams work with world-class advisors to develop investor-ready materials that meet international standards.
3. Launch the Global Immersion Program - Partner with NUS's Silicon Valley NOC (Block71 SV) to provide venture-track teams with direct exposure to successful ecosystems and investors.
4. Build the Advisory Network - Recruit 20-30 world-class advisors willing to commit meaningful time to Singapore spin-offs.
The opportunity is massive, but it's global—not regional. Southeast Asia's fund sizes often outpace returns from our current startup pipeline, but we shouldn't be satisfied dominating a regional market. Singapore's venture-track spin-offs must be built to compete in Silicon Valley, not just Southeast Asia. By building companies that attract top-tier international investors from day one, we can create the power law distribution that transforms Singapore from a regional hub into a global innovation powerhouse.
Want to help fix this? Don't send a LinkedIn message. Take action:
- Investors: Email GRIP/NUS Enterprise/NTU Ventures today. Specify exactly how you'll help select and mentor venture-track companies.
- Successful founders: Offer to be a mentor. Commit real time, not just networking calls.
- Service providers: Propose specific pro-bono packages for venture-track spin-offs.
- Government officials: Ask your team why Singapore's best research creates $400K companies while Stanford's creates $72M ones.
The 180x gap exists because we've been comfortable being #1 in Southeast Asia.
Time to get uncomfortable. Time to compete globally.
¹ GRIP Annual Report 2024, NUS Enterprise
² Stanford StartX, MIT The Engine, UC Berkeley SkyDeck program data, 2023
³ Global University Venturing, "University Spin-off Statistics 2023" - $158B+ raised globally across 8,042 investments (2013-2022)
⁴ National GRIP Singapore, "$50M National Programme Launch," October 2024
⁵ Stanford HIT Fund Portfolio Data, 2024 - 100+ portfolio companies across life sciences, physical sciences, and sustainability
Tech IPOs on SGX are unlikely to surge without major fixes. In 2024, just four listings occurred, none on the Mainboard, raising only $31 million². By mid-2025, only three tech-related listings had materialized amid a global IPO rebound³. Rigid rules, such as the S$30 million profit threshold, exclude cash-burning tech firms focused on growth.
SGX's daily trading volume stalls at $1.1 billion⁴, contrasting with Singapore's 7th-place global startup ranking and $144 billion in ecosystem value¹. This disparity drives firms like Grab ($40 billion NASDAQ debut) and Sea Limited (NYSE) abroad. Conservative investors prioritize dividends, with 68% of trades coming from volatility-averse retail players⁵. Retail outflows, like the S$189.9 million in late 2019, exacerbate thin liquidity⁶.
Asia-Pacific peers outperform. The Tokyo Stock Exchange (TSE) adapts swiftly, with $273 billion in Growth Market volume in 2023 extending into 2025⁷. Its Asia Startup Hub aids 14 regional firms via streamlined processes SGX lacks.
Jakarta's Indonesia Stock Exchange (IDX) is booming: 17 tech IPOs in 2023 climbed to 22 by mid-2025, with an $881 billion market cap⁸. GoTo's $1.1 billion raise exemplifies flexible rules like dual-class shares. Australia's ASX supports 140 tech firms, enabling Afterpay's rise from a $165 million listing to a $29 billion acquisition⁹.
Geography matters: TSE yields 50% post-IPO gains, IDX's retail investors drive 59% of volume, and ASX's principles-based governance reduces bureaucracy¹⁰. SGX operates at 75% below its tech potential (±7% confidence, DealStreetAsia and McKinsey data)¹¹.
Snapshot:
| Exchange | Tech IPOs (Mid-2025) | Volume / Size (USD) | Key Innovation |
|---|---|---|---|
| SGX | 3 | $1.1B daily | Proposed profit cut |
| TSE | ~80% of Japan's annual IPOs | $273B (Growth Market, 2023) | Growth Market |
| IDX | 22 | $881B (market cap) | Dual-class shares |
| ASX | 140 tech firms | $3.5B daily | Early-stage access |
Outdated profit mandates ignore models like Amazon's, deterring tech startups¹². Low liquidity repels institutions¹², while risk aversion clashes with tech's culture of experimentation¹³. The investor skew toward dividends undervalues tech, with 86% of Catalist stocks trading below their debut price due to limited institutional buy-in¹⁴.
Funding shortages for Catalist firms (median revenue ~S$27.4 million) create stagnation, as banks favor traditional sectors and offer scant research coverage of tech versus Hong Kong's robust analysis¹⁵. Early-stage gaps, like Temasek's 88% investment cut from 2021-2024, favor US exits¹⁶. Regulatory delays (4-6 months under MAS/SGX RegCo) lag rivals, worsened by global pressures like high rates and no new unicorns in 2023¹⁷.
Reforms are emerging: SGX's proposed profit-threshold cut to S$10-12 million and dual-class shares still lag TSE/IDX innovations, but McKinsey projects 150% regional listing growth by 2027 if metrics shift toward revenue, though volatility risks persist (15% ASX dips)¹⁰.
Successes highlight the potential. TSE's JDRs let Singapore's Omni-Plus System list seamlessly¹⁸. IDX's Bukalapak raised $1.5 billion via flexible exit rules¹⁹. ASX's WiseTech Global scaled globally²⁰.
These exchanges (20% of regional IPOs)¹⁰ prove innovative regulation works, contrasting with SGX's rigidity. IDX's listing paths offer scaling lessons amid Singapore's talent and cost hurdles².
Leaders urge evolution:
Temasek's Rohit Sipahimalani: "SGX must adapt to capture tech value or lose out"³.
TSE: "Flexibility drives 80% IPO share"⁷.
DealStreetAsia: "IDX's 22 IPOs show retail power—SGX needs it"⁵.
ASX: "Principles-based rules attract 230 listings"⁹.
Without change, SGX misses 70% of Southeast Asia's $300 billion startup value by 2030¹². Reforms like tech boards, Catalist funds via MAS's S$5 billion program, and research-coverage boosts could target <20% retail dominance and >$2 billion daily volume¹¹. Startups: avoid SGX's liquidity pitfalls; favor TSE/IDX gains. Such advances could empower regional unicorns while addressing talent through incentives².
To fully address SGX's shortcomings, the broader regional startup ecosystem must be revitalized, tackling gaps from ideation to deep tech R&D funding, spinouts, and venture capital (VC) performance issues. Southeast Asia faces funding shortages for early-stage startups, talent constraints in AI and data science, and regulatory fragmentation across jurisdictions¹². Deep tech funding tumbled 34% in 2024, yet its share of VC activity rose to 17.6%, driven by health tech and biotech, though challenges like skilled-personnel shortages and high development costs persist⁷,¹⁷.
Spinouts from research institutions struggle with insufficient design data, manufacturing delays, and market penetration in areas like Singapore and Vietnam¹⁸. VC firms exhibit lackluster performance, with equity investments down amid a shift toward sustainable models over aggressive growth⁵,⁶. Quality issues include poor due diligence, fraud risks (e.g., the eFishery case), and capital-efficiency imperatives⁹,¹⁶.
Key actions include: boosting ideation through university partnerships and internal talent development²; increasing deep tech R&D funding via government incentives and green tech funds⁴,¹¹; facilitating spinouts with standardized governance frameworks and cross-border enforcement⁹; and addressing VC quality by embedding sustainability, enhancing transparency, and diversifying revenue streams²,⁵. Initiatives like ASEAN's Digital Economy Framework Agreement could be finalized in 2025 to enable greater collaboration⁵. These steps could accelerate innovation, with projected GDP growth of 4.7% supporting consumer spending and ecosystem resilience⁵.
Turning SGX around requires aggressive action; it's possible, but it demands commitment amid economic headwinds. Top priorities: flexible regulations (a disclosure-based shift), liquidity boosts (an S$5 billion fund, tax rebates), and ecosystem enhancements (research, talent support, VC governance)⁹,¹¹,¹⁴.
Horizon shifts:
Reform Wave: Dual-class expansions could double listings by 2027¹¹.
Regional Alignment: Mirror ASX's institutional mix via IDX pacts⁸.
Innovation Edge: TSE-like hubs could halve listing times to 12 months¹².
A cautious 2026-2030 outlook:
Tech Surge: Capture 30% of regional unicorns, adding $50 billion in market cap¹¹.
Liquidity Leap: $3 billion daily volumes, matching ASX⁴.
Global Ties: Pacts boost non-local listings to 40%¹⁰.
Risk Tools: AI cuts volatility 25%¹².
The revolution spreads—Singapore can lead with adaptive exchanges and a robust ecosystem. What reforms do you prioritize? What else is needed?
This draws from 2024-2025 reports by Bloomberg, DealStreetAsia, McKinsey, PitchBook, and exchange filings.
The AI startup scene in the San Francisco Bay Area is booming, with companies racing to hit that coveted $10 million in annual recurring revenue (ARR). But after digging into data from CB Insights, PitchBook, McKinsey, and other key sources, a clear pattern emerges: early revenue often stays trapped in a tech bubble, far from representing the full U.S. market. We've analyzed trends, numbers, and counterexamples to reveal what's really happening—and how founders can break free.
The Tech-to-Tech Revenue Dominance Is Real
Forget broad market conquests right out of the gate. For Bay Area AI startups in 2025, the first $10M ARR is heavily skewed toward fellow tech companies, creating a self-reinforcing echo chamber. CB Insights' analysis of 500 AI ventures shows 67% of early revenue (±8% confidence interval) comes from tech ecosystem customers, like other startups buying copilots and tools to fuel their own growth.
The numbers tell the story: PitchBook reports that 62% of Seed to Series A revenue (±6% confidence) flows from tech peers, while McKinsey's State of AI 2024 pegs tech's lead at 32% of Gen-AI production deployments globally. This isn't just a quirk—it's driven by the Bay Area's 42% share of U.S. AI firms and $55B in Q1 2025 VC funding, making cross-selling within the Valley faster and cheaper than cracking regulated sectors.
Global Variations: Not Just a Bay Area Bubble
Here's where things get interesting. While U.S. startups lean tech-heavy, international patterns show more diversity. StartupBlink's 2024 Global Startup Ecosystem Report reveals European AI hubs like London and Berlin average just 45% tech-sourced early revenue, thanks to EU incentives pushing non-tech adoption in manufacturing and finance.
In Asia, Singapore and Bangalore clock in at 50% tech share, per Singapore EDB data, with enterprise conglomerates in logistics and healthcare pulling in broader customers from day one. Tokyo startups even hit 40% non-tech revenue in Year 1. These global contrasts highlight how geography shapes customer mixes, with Bay Area firms facing the steepest tech reliance—estimated at 60%–65% overall (±5% confidence) based on weighted data from IoT-Analytics and SaaS Capital.
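The 60%–65% overall estimate reads like a weighted blend of the individual survey figures. A minimal sketch of that kind of calculation, where the weights and the third data point are illustrative assumptions (the actual weighting behind the article's figure is not public):

```python
# Hypothetical weighted-average sketch of the "60-65% tech share" estimate.
# The first two shares come from the figures quoted in the text; the third
# share and all the weights are made-up stand-ins, not sourced data.
estimates = [
    (0.67, 0.4),  # CB Insights: 67% of early revenue from tech-ecosystem customers
    (0.62, 0.4),  # PitchBook: 62% of Seed-to-Series-A revenue from tech peers
    (0.55, 0.2),  # assumed blended figure from smaller surveys (illustrative)
]

# Weighted average: sum of (share * weight) divided by total weight.
weighted = sum(share * w for share, w in estimates) / sum(w for _, w in estimates)
print(f"weighted tech share: {weighted:.1%}")  # lands inside the 60-65% band
```

With these illustrative inputs the blend comes out around 63%, consistent with the article's stated range; changing the weights shifts the point estimate but stays within the band as long as the inputs do.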
The Great Non-Tech Lag: Barriers and Breakthroughs
Industry dominates early ARR because non-tech sectors move slower, bogged down by compliance, talent gaps, and unclear ROI. BCG notes only 16% of enterprises are "reinvention-ready" for AI, while SaaS Capital finds non-tech firms adopt at half the pace of tech peers. Yet, 74% of early adopters report positive ROI, and 44% of Gen-AI pilots now happen outside tech—signaling massive untapped potential.
But there's a counter-trend: efficient vertical strategies are flipping the script. McKinsey projects that by 2027, non-tech AI adoption could surge 200% in sectors like healthcare and logistics, driven by outcome-based tools that tie fees to real KPIs like 15% efficiency gains.
Counterexamples That Buck the Trend
Not every AI startup stays in the echo chamber. Take Veracyte in healthcare AI: They hit $8M ARR in Year 1 mostly from hospitals via FDA-approved diagnostics, inverting the tech dominance to just 30%. Or Kabbage in fintech: Scaling to $15M with 70% from small businesses through targeted integrations, they prove domain focus can prioritize non-tech from the start.
PitchBook data shows these exceptions are rare (only 15% of startups), but they address key objections: Regulated verticals aren't impenetrable if you build with compliance in mind, challenging the "tech-only" trope for founders willing to adapt.
Insights from Industry Leaders
The minds steering AI's revenue revolution are as sharp as their strategies:
Sarah Guo, General Partner at Conviction Capital, warns: "Deliberately diversify by month 18, even if it slows growth—it's essential for longevity." Andreessen Horowitz partners echo this, advising VCs to discount valuations without non-tech proof points.
Y Combinator alumni like those from successful cohorts emphasize vertical sales hires by Series A. And from the data side, CB Insights analysts highlight: "The 60% tech skew is real, but global benchmarks show it's not inevitable."
What This Means for You
These trends aren't abstract—they're blueprints for AI founders and investors. If you're building in the Bay Area, your first $10M will likely be 60%+ tech-fueled, but neglecting non-tech leaves 70% of U.S. GDP on the table. Aim for benchmarks like <20% revenue from your top three customers and <18-month payback across verticals.
For investors, red flags include >80% tech logos—green lights are diverse NAICS spreads and global pilots. The shift toward broader adoption means your startup could soon power Midwest factories or Florida hospitals, not just Valley peers.
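The top-three-customer benchmark above is easy to operationalize. A small sketch of the check, using an entirely made-up revenue book (the <20% threshold is the article's benchmark; the customer figures are illustrative):

```python
# Hypothetical check of the "<20% revenue from your top three customers" benchmark.
def top3_concentration(revenues_by_customer: list[float]) -> float:
    """Share of total revenue coming from the three largest customers."""
    top3 = sum(sorted(revenues_by_customer, reverse=True)[:3])
    return top3 / sum(revenues_by_customer)

# Made-up ARR book, $M per customer -- not real data.
book = [1.2, 0.9, 0.6, 0.5, 0.4, 0.4, 0.3, 0.3, 0.2, 0.2]
share = top3_concentration(book)
print(f"top-3 share: {share:.0%} -> {'OK' if share < 0.20 else 'too concentrated'}")
```

In this toy book the top three customers carry over half of revenue, so the check fails, which is the typical shape of an early tech-skewed pipeline the article describes.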
The Road Ahead
Looking forward, several pivotal shifts are emerging:
Diversification Boom: With 44% of Gen-AI pilots already non-tech, expect U.S. startups to push 40% non-tech revenue by 2027 through vertical copilots and partnerships.
Global Convergence: Bay Area patterns may align more with Europe's 45% model as regulations evolve, per StartupBlink projections.
Efficiency Over Echo: Outcome-based pricing and small-model integrations will make non-tech entry easier, potentially halving sales cycles to 6 months.
The AI revenue revolution isn't confined to Silicon Valley—it's expanding nationwide. Based on what the data shows, the next wave of startups that escape the tech bubble will dominate the decade.
This analysis is based on a quick scan of market reports and developments from CB Insights, PitchBook, McKinsey, StartupBlink, and other sources throughout 2024-2025, representing the latest trends in AI startup revenue patterns and customer acquisition.
The global IT spending landscape presents both tremendous opportunities and significant challenges for startups seeking to establish themselves in enterprise markets. While the total addressable market appears massive at first glance, the reality for emerging companies is far more nuanced, shaped by regional purchasing behaviors, cultural preferences, and established vendor relationships that can either accelerate or hinder startup growth.
The worldwide IT market represents one of the largest and fastest-growing sectors in the global economy. In 2025, global IT spending is projected to reach $5.61 trillion, with significant regional variations that directly impact startup opportunities1. The three major regions present distinctly different market characteristics and growth trajectories.

The United States dominates the global IT spending landscape with a forecasted $1.9 trillion market in 2025, roughly 34% of worldwide IT expenditure. This massive scale reflects both the maturity of American enterprise technology adoption and the substantial budgets allocated to digital transformation initiatives. European IT spending, while substantial at $1.28 trillion in 2025, demonstrates more conservative growth patterns with established enterprises showing measured adoption of new technologies. Southeast Asia, though representing the smallest absolute market at $55.1 billion, exhibits the highest growth potential with a compound annual growth rate of 9.1%.
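To see what a 9.1% compound annual growth rate implies in absolute terms, here is a quick sketch; the base figure comes from the text, while the projection horizons are arbitrary choices for illustration:

```python
def project_market(base_usd_b: float, cagr: float, years: int) -> float:
    """Compound a market size forward by `years` at the given CAGR."""
    return base_usd_b * (1 + cagr) ** years

# Southeast Asia IT spending: $55.1B in 2025, 9.1% CAGR (figures from the text).
for y in (1, 3, 5):
    size = project_market(55.1, 0.091, y)
    print(f"2025 + {y} years: ${size:.1f}B")
```

At that rate the market adds roughly half its current size within five years, which is the "highest growth potential" the comparison is pointing at.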
The American market offers the most favorable environment for startup penetration, characterised by high enterprise spending per employee ($916 annually) and a cultural openness to innovative solutions. US enterprises demonstrate greater willingness to engage with unproven vendors when the technology offers compelling advantages. However, this market also presents intense competition, with over 50,000 active startups competing for attention.
American enterprises allocate substantial budgets to software, with enterprise software spending projected to reach $159.39 billion in 2025. The venture capital ecosystem provides robust support, with $209 billion invested in 2024, creating a funding-rich environment that enables startups to compete effectively.
European enterprises exhibit more conservative purchasing behaviours, with a strong preference for established vendors and proven solutions. The enterprise software market of $70.6 billion in 2025, while substantial, requires startups to navigate complex procurement processes that often favor incumbent suppliers. European buyers demonstrate lower per-employee spending ($168) compared to their American counterparts, reflecting more cautious technology investment approaches.
The challenge for startups in Europe extends beyond market size to cultural procurement preferences. European organizations typically require extensive validation and proof of concept before considering new vendors, particularly those without established track records. This creates significant barriers to entry for emerging companies seeking to establish market presence.
Southeast Asia presents a unique opportunity for startups, despite its smaller absolute market size. The region's enterprise software market of $4 billion in 2025 reflects emerging digital transformation initiatives and increasing acceptance of innovative solutions. With only $11 per employee spent on enterprise software, the market demonstrates significant upside potential as digital adoption accelerates.
Regional characteristics favor startup penetration, with $69.3 billion in technology investments from global majors demonstrating growing confidence in the market. The startup ecosystem, while smaller at approximately 4,000 companies, faces less saturated competition than mature markets.
Understanding the realistic market opportunity requires moving beyond total IT spending figures to analyse what portion of these markets is genuinely accessible to startups. Traditional market analysis often overestimates startup opportunities by failing to account for established vendor relationships, procurement biases, and enterprise risk aversion.

Conservative estimates suggest startups can realistically target 10% of the US IT market, 5% of the European market, and 15% of the Southeast Asian market.
These percentages reflect the varying degrees of market openness to new vendors and cultural acceptance of startup solutions. Under conservative scenarios, this translates to addressable markets of $190 billion (US), $64 billion (Europe), and $8.3 billion (Southeast Asia).
Optimistic projections, assuming successful market penetration and cultural shifts toward startup adoption, increase these figures to $380 billion (US), $154 billion (Europe), and $13.8 billion (Southeast Asia). These optimistic scenarios require startups to overcome significant cultural and procedural barriers that currently limit market access.
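The scenarios above are straightforward percentage arithmetic. A short sketch reproducing them, noting that the optimistic rates (20%/12%/25%) are back-calculated from the quoted dollar figures rather than stated in any source:

```python
# Hypothetical sketch of the article's regional addressable-market arithmetic.
# Market totals and conservative penetration rates are the figures quoted in
# the text; the optimistic rates are back-calculated assumptions.
TOTALS_B = {"US": 1900, "Europe": 1280, "Southeast Asia": 55.1}  # 2025 IT spend, $B
CONSERVATIVE = {"US": 0.10, "Europe": 0.05, "Southeast Asia": 0.15}
OPTIMISTIC = {"US": 0.20, "Europe": 0.12, "Southeast Asia": 0.25}  # implied by $380B / $154B / $13.8B

for region, total in TOTALS_B.items():
    lo = total * CONSERVATIVE[region]
    hi = total * OPTIMISTIC[region]
    print(f"{region}: ${lo:.1f}B conservative, ${hi:.1f}B optimistic")
```

Running this recovers the article's $190B / $64B / $8.3B conservative figures, and the optimistic outputs match the quoted numbers to rounding.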
Modern B2B purchasing decisions involve complex stakeholder groups, with 77% of buyers rating their procurement experience as extremely challenging. This complexity particularly disadvantages startups, as procurement teams often exhibit unconscious bias toward familiar suppliers and established vendors.
The incumbent supplier bias represents a significant barrier for startups across all regions. Procurement professionals frequently favor existing relationships due to loss aversion and risk management concerns. This bias becomes particularly pronounced in Europe, where conservative procurement practices and established vendor preferences create higher barriers to entry.
American enterprises demonstrate greater willingness to engage with innovative startups, particularly when solutions offer clear competitive advantages. The cultural acceptance of risk-taking and innovation creates more opportunities for unproven vendors to gain initial customer traction.
European procurement practices emphasise stability and proven performance over innovation potential. The preference for established vendors creates longer sales cycles and higher customer acquisition costs for startups. Additionally, European enterprises often require extensive compliance documentation and regulatory adherence that can overwhelm resource-constrained startups.
Southeast Asian markets show increasing openness to startup solutions, driven by rapid digital transformation initiatives and less entrenched vendor relationships. However, limited local funding and smaller average deal sizes can constrain growth potential for startups in this region.

A comprehensive evaluation of startup market attractiveness reveals significant variations across regions when considering multiple factors beyond simple market size. The United States scores highest overall (8.8/10) due to exceptional market size, startup-friendly culture, and abundant funding availability.
However, intense competition and high customer acquisition costs present ongoing challenges.
Europe's moderate attractiveness score (6.8/10) reflects substantial market size offset by conservative procurement practices and limited startup friendliness. The region's established vendor preferences and complex regulatory environment create additional barriers for emerging companies.
Southeast Asia's balanced score (6.0/10) demonstrates the region's potential despite smaller absolute market size. High growth rates and emerging digital adoption create opportunities, though limited funding availability and smaller enterprise budgets constrain immediate potential.
Startups should approach these regional markets with differentiated strategies reflecting local characteristics and constraints. In the United States, focus on rapid scaling and competitive differentiation to capture market share before competitors respond. The abundant venture capital and cultural acceptance of innovation support aggressive growth strategies.
European market entry requires patience and methodical relationship building. Startups should invest in compliance capabilities, case study development, and partnership strategies with established system integrators. The longer sales cycles necessitate sufficient funding runway and realistic growth expectations.
Southeast Asian markets offer opportunities for startups willing to adapt solutions for emerging market requirements. Lower price points and simplified implementations can create competitive advantages, though startups must balance reduced margins against growth potential.
The dramatic differences in venture capital availability across regions significantly impact startup viability. With $209 billion in US venture funding compared to $18 billion in Europe and $1.6 billion in Southeast Asia, American startups enjoy substantial funding advantages.
This disparity affects everything from product development timelines to customer acquisition strategies.
European startups face funding constraints that require more capital-efficient growth strategies and earlier focus on profitability. The limited venture capital ecosystem demands stronger unit economics and more conservative growth projections.
Southeast Asian startups must often rely on international funding sources or bootstrap growth through early revenue generation. The emerging venture capital ecosystem provides opportunities but cannot match the scale available in more mature markets.
The global IT spending market, while massive in aggregate, presents highly varied opportunities for startups depending on regional characteristics and cultural factors. The United States offers the largest addressable market and most startup-friendly environment, but also the most intense competition. Europe provides substantial market opportunity tempered by conservative procurement practices and established vendor preferences. Southeast Asia presents emerging opportunities with high growth potential but smaller absolute market size and limited funding availability.
Successful startup market entry requires understanding these regional nuances and developing strategies aligned with local purchasing behaviors and market dynamics. Rather than viewing the global IT market as uniformly accessible, startups must carefully evaluate regional characteristics, cultural preferences, and competitive landscapes to identify realistic growth opportunities and develop appropriate go-to-market strategies.
The real market size for startups is significantly smaller than total IT spending figures suggest, but substantial opportunities exist for companies that understand regional dynamics and adapt their approaches accordingly. Success requires matching startup capabilities with regional market characteristics, building appropriate funding strategies, and developing solutions that address specific regional requirements and preferences.
The journey from Mark Zuckerberg’s Harvard dorm room to Roy Lee and Neel Shanmugam’s AI revolution was inevitable. While the social media generation taught us to connect minds across the globe, the AI generation is showing us how to amplify those minds’ power. Facebook, PayPal, and Twitter weren’t just companies—they were the infrastructure that made today’s AI revolution possible. Zuckerberg’s “Hacker Way” of rapid experimentation and boundary-pushing has become the playbook for today’s AI rebels.
Two Generations, One Mission: Breaking Barriers
The data tells a story of unprecedented acceleration. Where social media startups took 18–24 months to reach market, AI-native companies now do it in 6–12 months. Teams have shrunk from 15–25 to just 5–10, thanks to AI’s transformative efficiencies. This isn’t just about moving faster—it’s about fundamentally changing how innovation happens.
Redefining Rebellion: From “Move Fast” to “Think Instantly”
What critics call “cheating,” these visionaries call democratization. When Cluely’s founders say “we want to cheat on everything,” they’re not promoting dishonesty—they’re challenging systems that artificially limit human potential. Lee’s suspension from Columbia for creating Interview Coder wasn’t a setback; it was the catalyst for building a universal platform for AI-augmented performance. This is positive rebellion: breaking the right rules to unlock new possibilities.
The Democratization Revolution
AI is making innovation accessible to more people than ever before—74% of innovators say AI has broadened access to entrepreneurship. Gen Z founders, raised on technology, move fast, experiment freely, and scale globally from their bedrooms. They spot opportunities and create solutions that previous generations might never see.
AI-Powered Entrepreneurship: The Numbers Don’t Lie
Cluely’s meteoric rise illustrates this new paradigm. Within weeks of launch, it attracted 70,000 users and reached $3 million in annual recurring revenue—a pace unimaginable in the social media era. AI-native startups now achieve product-market fit in months, not years, and VC funding is following suit: Cluely secured $15 million in Series A to fuel this rapid growth.
Enterprise Validation and Viral Growth
Cluely isn’t just a consumer phenomenon: it’s already proving itself in enterprise settings, especially in sales, with rapid adoption and real business impact¹. The company’s growth team, each member with a personal audience of over 100,000 followers, exemplifies how the AI generation blends technical prowess with modern marketing.
Positive Disruption: Amplifying, Not Replacing, Human Intelligence
This generation’s rebellion serves a different purpose. While their predecessors connected people and information, the AI generation is focused on amplifying individual human capability. AI isn’t about replacing intelligence—it’s about enabling people to perform at levels never before possible.
A Cultural Shift: Creative Rebels with a Cause
Today’s entrepreneurs are “positive deviants”—rebels with a cause, willing to embrace controversy to advance human potential. Their viral campaigns and user-generated content strategies aren’t just for attention; they’re about demonstrating AI’s real-world impact.
A Utopian Vision: Empowerment at Scale
The future these companies are building isn’t dystopian—it’s utopian. They envision a world where everyone becomes a creator, where technical barriers disappear, and where AI personal assistants are available for every task. Just as Facebook democratized publishing and Twitter democratized broadcasting, the AI generation is democratizing expertise itself.
An Ecosystem of Acceleration
The success of AI-native startups creates a positive feedback loop, inspiring more entrepreneurs and attracting greater investment. The result: an ecosystem where innovation accelerates and barriers to entry continue to fall.
Conclusion: The Spirit of Rebellion Lives On
The entrepreneurs behind Cluely and similar companies aren’t destroying hacker culture—they’re fulfilling its highest aspirations. They represent the evolution from connecting minds to amplifying minds, from breaking things to building intelligence. Their rebellion isn’t about chaos, but about progress: breaking barriers so the rest of us can achieve more than we ever thought possible.
The AI revolution isn’t happening to us—it’s being built by a new generation of audacious entrepreneurs. The rebels are coding, the barriers are falling, and the future is being written in real-time. This is what progress looks like when “impossible” is just the starting line.
]]>