Rob Englander, a software engineer with more than four decades of industry experience, has published a pointed critique arguing that AI and LLM code generation tools do not simplify software engineering; they accelerate what he calls "spec drift," in which code is produced faster than the surrounding engineering discipline can absorb it. Writing at robenglander.com, Englander contends that code generation was never the hard part of software development; the genuinely difficult work has always been architecture, specification, validation, and understanding how complex systems behave under real conditions. Organizations that treat AI as a replacement for engineering discipline rather than as a productivity aid, he argues, are compounding complexity rather than reducing it.

Englander draws a direct historical parallel to Visual Basic in the 1990s, which genuinely democratized application creation but did not eliminate the need for engineering rigor. Organizations that treated it as an expertise eliminator eventually rediscovered that "producing software artifacts is not the same thing as engineering reliable systems." He identifies a recurring four-phase pattern across four decades: a new tool appears, productivity spikes and demos impress, the industry declares engineering discipline obsolete, and complexity quietly compounds until the prediction collapses. His concern is that the current LLM wave operates at a more fundamental layer of the stack than prior tools, making the spec drift dynamic faster and more severe than previous cycles.

Englander uses an aircraft maintenance analogy to ground the argument: aviation saw decades of tooling improvement — computerized diagnostics, digital manuals, AI-assisted telemetry — and never concluded that trained mechanics were therefore redundant. He argues that software engineering lacks aviation's regulatory forcing function, which is why the "expertise is now optional" fallacy keeps recurring. In a companion whitepaper titled "Engineering Alignment in Probabilistic Generation," he builds a theoretical model arguing that LLMs relocate rather than eliminate the correctness problem, with correctness silently degrading at "interpretive boundaries" between specification, generated artifacts, and runtime behavior — boundaries that current prompt engineering and evaluation practices do not govern.
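Englander's "interpretive boundary" idea can be made concrete with a small, hypothetical sketch (not drawn from his whitepaper; the function names and scenario are invented for illustration): a specification property is checked directly against a generated artifact, surfacing drift that a plausible-looking demo would not.

```python
def generated_split(total_cents: int, n: int) -> list[int]:
    """Stand-in for LLM-generated code: looks plausible, but silently
    drops the remainder when the total does not divide evenly."""
    return [total_cents // n] * n

def spec_split_conserves_total(total_cents: int, n: int) -> bool:
    """Executable spec property: splitting a payment must conserve the
    total to the cent. This check sits at the spec->artifact boundary."""
    return sum(generated_split(total_cents, n)) == total_cents

# The drift only surfaces when the boundary is actually checked:
print(spec_split_conserves_total(100, 4))  # True: even split, no drift
print(spec_split_conserves_total(100, 3))  # False: 99 != 100, spec drift
```

In Englander's terms, the correctness problem has not disappeared here; it has moved from writing the split function to writing and enforcing the conservation property, which nothing in a typical prompt-and-demo workflow requires anyone to do.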

<a href="/news/2026-03-14-nyt-ai-coding-assistants-end-of-programming-jobs">Engineering layoffs citing AI productivity</a> have accelerated across the industry, and Englander is direct in his assessment: these reductions represent not a genuine productivity breakthrough but a convenient justification for bad business decisions. It is worth noting that these are Englander's own claims, drawn from one practitioner's perspective rather than independent research. But his core concern deserves scrutiny regardless of whether readers share his conclusions: that systems now processing payments, managing infrastructure, and operating services customers rely on daily are being built with less engineering rigor than before. The stakes, he argues, are considerably higher than in prior cycles.