Alvin Pane has written one of the more uncomfortable essays in recent AI discourse: a short-form argument that Cursor, Claude Code, and tools like them are designed to make engineers feel productive whether or not they are.

The core mechanism is dopamine. Pane draws on Wolfram Schultz's reward-prediction research from the late 1990s, which established that the brain's reward circuits fire not on rewards themselves but on the cues that predict them. That distinction matters here. Streaming tokens, auto-generated scaffolding, commit velocity dashboards: Pane's claim is that these features aren't incidental UX choices. They deliver precisely the neural signal the brain interprets as forward progress, regardless of what's actually shipping.

To frame the trap, Pane borrows Will Manidis's concept of the 'tool-shaped object' — originally applied to bloated AI infrastructure — and applies it specifically to developer tooling. A tool-shaped object reproduces the texture of real work: the friction, the rhythm, the sense of momentum. What it doesn't reproduce is the output. By this definition, the best AI IDEs aren't tools so much as simulations of tools. The question Pane is asking isn't whether they generate code — they do — but whether engineers have any reliable way to distinguish token velocity from deployment velocity without deliberately stepping back to look.

The essay's structural claim is that there's a crossover at roughly 80% of a build. Below that threshold, AI coding tools are genuinely useful: they compress boilerplate, accelerate prototyping, and reduce cognitive overhead in ways that map directly onto shipped output. Above it, the remaining work (integration edge cases, production hardening, performance tuning) doesn't yield to AI acceleration in the same way. The tools keep generating. The feeling of progress persists. The output doesn't follow.

Pane isn't arguing that these tools are bad, or that engineers are naive for using them. He's identifying something more specific: a failure mode that activates precisely because the tools work so well in the early stages. By the time the crossover happens, the neurochemical pattern is established. The environment gives no signal that anything has changed. The instinct to check — to look up from the terminal and ask whether the last three hours of generation actually moved the product forward — is exactly the instinct that gets eroded.

That's the argument, anyway. Whether it holds up across engineering teams using these tools at scale is a harder question, and one the essay doesn't attempt to answer.