Oxford says “gut.” I say “objective + proof.”

Oxford’s The Impact of Artificial Intelligence on Venture Capital argues that AI accelerates sourcing and diligence, but that investment decisions stay human, because the durable moats are social: conviction, gut feeling, and networks.

I agree with the workflow diagnosis. I disagree with the implied endgame.

Not because “gut” is fake—but because “gut” is often a label we apply when we haven’t defined success tightly enough, or when we don’t have a measurement loop that forces our beliefs to confront outcomes.

Dealflow is getting commoditized. The edge is moving.

AI expands visibility, speeds up pipelines, and pushes the industry toward shared tools and shared feeds. When everyone can scan more of the world, “who saw it first” decays.

But convergence of inputs does not imply convergence of results. The edge moves from access to learning rate.

The outlier problem isn’t mystical. It’s an evaluation problem.

Oxford’s strongest point is that the power-law outliers are indistinguishable from “just bad” in the moment, and that humans use conviction to step into ambiguity.

I accept that premise and I still think the conclusion is wrong.

Because “conviction” is not a supernatural faculty. It’s a policy under uncertainty. And policies can be evaluated.

If your decision rule can’t be backtested, it’s not conviction. It’s narrative.
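To make “backtestable” concrete, here’s a minimal sketch: a decision rule is just a function of decision-time information, and a backtest replays it over historical deals it never influenced. Every field name below (team_score, outcome_multiple) is invented for illustration, not a claim about which signals matter.

```python
# Minimal backtest sketch: the rule may only see decision-time signals;
# outcomes are joined in afterward. All field names are hypothetical.
from dataclasses import dataclass

@dataclass
class Deal:
    year: int
    signals: dict                  # only information knowable at decision time
    outcome_multiple: float = 0.0  # realized multiple, revealed years later

def rule(deal: Deal) -> bool:
    """Example policy: invest when a (hypothetical) team score clears a bar."""
    return deal.signals.get("team_score", 0.0) >= 0.7

def _avg(xs: list) -> float:
    return sum(xs) / len(xs) if xs else 0.0

def backtest(deals: list[Deal]) -> tuple[float, float]:
    """Average realized multiple of the rule's picks vs. the whole universe."""
    picked = [d.outcome_multiple for d in deals if rule(d)]
    universe = [d.outcome_multiple for d in deals]
    return _avg(picked), _avg(universe)

if __name__ == "__main__":
    history = [
        Deal(2018, {"team_score": 0.9}, outcome_multiple=12.0),
        Deal(2018, {"team_score": 0.4}, outcome_multiple=0.0),
        Deal(2019, {"team_score": 0.8}, outcome_multiple=0.5),
        Deal(2019, {"team_score": 0.2}, outcome_multiple=30.0),  # the missed outlier
    ]
    print(backtest(history))  # (6.25, 10.625): this toy rule trails the base rate
```

The missed-outlier row is the whole point: a narrative survives that miss untouched, while a backtested rule has to eat it.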

Don’t try to read souls. Build signals you can audit.

Some firms try to extract psychology from language data. Sometimes it works as a cue; often it’s noisy. And founders adapt as soon as they sense the scoring system.

So the goal isn’t “measure personality with high accuracy.” The goal is to build signals that are legible, repeatable, and falsifiable, then combine them with a process that forces updates when reality disagrees.
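One way to force those updates, sketched minimally: log every conviction as a probability before the outcome is known, then score the ledger with a standard calibration measure such as the Brier score. The ledger shape and the deal IDs here are assumptions.

```python
# Forced-update loop sketch: forecasts are logged up front and scored later.
forecasts: list[tuple[str, float]] = []  # (deal_id, p_success) at decision time

def log_forecast(deal_id: str, p_success: float) -> None:
    forecasts.append((deal_id, p_success))

def brier(outcomes: dict[str, int]) -> float:
    """Mean squared error of forecasts vs. realized 0/1 outcomes.
    Lower is better; always guessing 0.5 scores 0.25."""
    scored = [(p - outcomes[d]) ** 2 for d, p in forecasts if d in outcomes]
    return sum(scored) / len(scored)

log_forecast("acme-seed", 0.8)
log_forecast("globex-a", 0.3)
print(brier({"acme-seed": 1, "globex-a": 1}))  # 0.265: the 0.3 call gets punished
```

The mechanism matters more than the metric: a conviction that was never written down as a number can never be contradicted.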

Verification beats vibes.

If founders optimize public narratives, then naive text scoring collapses into a Goodhart trap.

The difference between toy AI and investable AI is verification: triangulate claims, anchor them in time, reject numbers that can’t be sourced, and penalize inconsistency across evidence.

That’s how you turn unstructured noise into features you can actually test.
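As a sketch of what that might look like (the Claim/Evidence shapes, the 10% tolerance, and the single-source cap are all assumptions, not a production pipeline):

```python
# Claim-verification sketch: unsourced numbers score zero, single-source
# numbers are capped, and disagreement across evidence is penalized.
from dataclasses import dataclass

@dataclass
class Evidence:
    source: str       # where the number came from (filing, export, interview)
    value: float
    observed_at: str  # time anchor: when this evidence was recorded

@dataclass
class Claim:
    statement: str
    value: float
    evidence: list[Evidence]

def verify(claim: Claim, tolerance: float = 0.10) -> float:
    """Score in [0, 1] that a downstream model can consume as a feature."""
    if not claim.evidence:
        return 0.0  # reject numbers that can't be sourced
    agree = [e for e in claim.evidence
             if abs(e.value - claim.value) <= tolerance * abs(claim.value)]
    score = len(agree) / len(claim.evidence)      # consistency across evidence
    if len({e.source for e in claim.evidence}) < 2:
        score = min(score, 0.5)                   # no triangulation, cap it
    return score

arr = Claim("ARR", 1_200_000, [
    Evidence("billing_export", 1_180_000, "2024-06-30"),
    Evidence("founder_deck", 1_500_000, "2024-07-15"),
])
print(verify(arr))  # 0.5: one source agrees within 10%, the deck is inflated
```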

Status is a market feature—not a human moat.

Networks and brand matter because markets respond to them—follow-on capital, recruiting pull, distribution, acquisition gravity.

So yes: status belongs in the model.

But modeling status is not the same thing as needing a human network to be the enduring edge. One is an input signal. The other is a claim about irreducible advantage.

If an effect is systematic, it’s modelable.
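In code, that claim is mundane: status effects enter as ordinary features. The feature names and weights below are invented for illustration; in a real system both would come out of the backtest, not anyone’s intuition.

```python
# Sketch: status as input features, not as an untouchable human moat.
def status_features(company: dict) -> dict[str, float]:
    return {
        "lead_investor_followon_rate": company.get("lead_followon_rate", 0.0),
        "founder_prior_exit": float(company.get("prior_exit", False)),
        "inbound_recruiting_pull": company.get("recruiter_inbound_90d", 0) / 100,
    }

def score(company: dict, weights: dict[str, float]) -> float:
    feats = status_features(company)
    return sum(weights.get(k, 0.0) * v for k, v in feats.items())

company = {"lead_followon_rate": 0.6, "prior_exit": True, "recruiter_inbound_90d": 40}
weights = {"lead_investor_followon_rate": 0.5,
           "founder_prior_exit": 0.3,
           "inbound_recruiting_pull": 0.2}
print(score(company, weights))  # 0.68
```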

Objective function: I’m optimizing for fund outcomes.

A lot of debates about “AI can’t do VC” hide an objective mismatch.

If your target is “eventual truth at year 12,” you’ll privilege a certain kind of human judgment. If your target is “realizable outcomes within a fund horizon,” you’ll build a different machine.

I’m comfortable modeling hype—not because fundamentals don’t matter, but because time and liquidity are part of the label. Markets pay for narratives before they pay for final verdicts, and funds get paid on the path, not just the destination.
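Here is “time and liquidity are part of the label” as a sketch; the field names, the 10-year fund life, and the 5x bar are all assumptions standing in for a fund’s real parameters:

```python
# Fund-horizon label sketch: a deal only counts as a win if a liquidity or
# markup event clears the bar inside the fund's life.
def label(deal: dict, fund_start: int, horizon_years: int = 10,
          min_multiple: float = 5.0) -> int:
    """1 if any (year, multiple) event clears min_multiple before the deadline."""
    deadline = fund_start + horizon_years
    events = deal.get("liquidity_events", [])  # (year, multiple) pairs
    return int(any(y <= deadline and m >= min_multiple for y, m in events))

deal = {"liquidity_events": [(2027, 3.0), (2031, 8.0)]}
print(label(deal, fund_start=2022))  # 1: the 8x event lands in year 9
```

Change the horizon or the bar and the same history produces different labels, which is exactly the objective-mismatch point.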

The punchline

Oxford is right about current practice: AI reshapes the funnel, while humans still own the final decision and accountability.

My reaction is that this is not a permanent moat. It’s a temporary equilibrium.

Define success precisely. Build signals that survive verification. Backtest honestly. Update fast.

That’s not gut.

That’s an investing operating system.