A developer post circulating on Hacker News this week grounds a critique of vibecoding in a 40-year-old computer science framework — and the argument is harder to dismiss than most takes on AI-assisted development. Drawing on Peter Naur's 1985 essay "Programming as Theory Building," the post defines vibecoding strictly as shipping LLM-generated code without reading or understanding it, then argues this practice produces legacy software from the first commit. Naur's model holds that code is a byproduct of a programmer's mental model of a problem — the theory. Strip out that theory-building step and you're left with code no one can explain: why this architecture, why this library, which edge cases were considered and which weren't. Without that mental model, the author argues, there is no basis for coherent long-term maintenance.
Coding agents compound the problem structurally. LLMs add code far more readily than they remove it, and they condition users to treat generation as cheap. As a codebase's token count grows without corresponding growth in human understanding, each successive LLM call struggles more to fit the whole codebase into context and reason over it. The author hedges: much larger context windows, or agents that reliably remove more code than they add, could change the picture, but neither is today's reality. The practical prediction: <a href="/news/2026-03-15-developers-ai-coding-tools-skill-atrophy-team-friction">vibecoding-oriented companies hit growth and longevity walls</a> as their codebases outpace both LLM context capacity and whatever shallow theory can be reconstructed from a README.
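The mismatch the post describes can be sketched as simple arithmetic. The toy model below assumes a fixed context window and a constant net token growth per agent-generated commit; all numbers and the function name are illustrative assumptions, not figures from the post.

```python
# Toy model of the post's claim: codebase tokens grow with every
# agent-generated commit, while the model's context window is fixed.
# All parameters here are hypothetical, for illustration only.

def commits_until_context_overflow(
    initial_tokens: int,
    tokens_added_per_commit: int,
    tokens_removed_per_commit: int,
    context_window: int,
) -> int:
    """Count commits before the codebase no longer fits in context.

    Assumes constant net growth per commit, reflecting the post's
    observation that LLMs add code far more readily than they remove it.
    """
    net_growth = tokens_added_per_commit - tokens_removed_per_commit
    if net_growth <= 0:
        raise ValueError("a net-negative generator never overflows")
    remaining = context_window - initial_tokens
    # Ceiling division: the first commit that pushes past the window.
    return max(0, -(-remaining // net_growth))

# E.g. a 50k-token repo, +900/-100 tokens per vibe-coded commit,
# against a 200k-token context window:
print(commits_until_context_overflow(50_000, 900, 100, 200_000))  # → 188
```

A couple hundred commits is weeks of agent-driven work, which is the shape of the "growth wall" argument: the overflow point arrives on a schedule set by generation volume, not by anyone's understanding of the code.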
One unverified comment in the HN thread added a charged data point. User ting0 alleged — without linking a source — that Anthropic has publicly acknowledged Claude Code is entirely vibe-coded and vibe-maintained, calling it among the least stable developer tools they've used, with each release reportedly introducing new bugs or re-introducing previously patched ones. Anthropic has not responded to the characterization. If the claim holds up, it would place one of the field's most prominent AI coding agents squarely inside the instability pattern the post describes.
That argument — that vibecoding companies are a short position, the more "vibey" the shorter — doesn't rest on AI skepticism. It rests on a specific mismatch: codebase complexity compounds faster than LLM reasoning capacity scales. The Naur framing gives practitioners a concrete vocabulary for <a href="/news/2026-03-14-agile-manifesto-ai-addendum-prioritizing-shared-understanding-over-shipping">why AI-assisted codebases become intractable to modify past a certain scale</a>. That's a more durable claim than most hot takes on vibecoding, and it will outlast this week's thread.