A developer asked an AI assistant to solve a tricky algorithm problem and got a confident, detailed answer. It was wrong. The AI had proposed a greedy approach; only after the developer supplied a counterexample did it abandon that path and arrive at a correct dynamic programming solution. That exchange, shared in a Hacker News thread this week, sits at the center of a debate the software engineering community is having right now: as tools like Claude Code, OpenAI Codex, and Gemini CLI generate more and more working code, does deep knowledge of algorithms, data structures, and computational theory still matter?
The historical counterargument is familiar. Developers stopped hand-implementing sort algorithms decades ago and the profession didn't collapse — it freed up attention for higher-level problems. Several commenters made exactly this point, framing today's shift as another step in a long line of abstractions, not a departure from something sacred.
The greedy-vs-DP anecdote complicates that comparison, though. The danger isn't that AI writes code instead of humans. It's that a developer without the relevant background can't catch the mistake. A confident wrong answer is harder to catch than no answer at all, and <a href="/news/2026-03-15-comprehension-debt-the-hidden-cost-of-ai-generated-code">the experience of evaluating AI output</a> turns out to require much of the same knowledge that writing the code from scratch would have demanded.
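The article doesn't say which problem the developer posed, but the failure mode is easy to reproduce. A minimal sketch, using the textbook coin-change problem as a stand-in (this is an illustrative example, not the actual problem from the thread): with denominations {1, 3, 4}, a greedy strategy of always taking the largest coin that fits returns a plausible-looking answer that's simply not optimal, while a dynamic programming pass finds the true minimum.

```python
def greedy_change(coins, amount):
    """Repeatedly take the largest coin that fits. Looks reasonable; isn't optimal."""
    count = 0
    for c in sorted(coins, reverse=True):
        take, amount = divmod(amount, c)
        count += take
    return count if amount == 0 else None

def dp_change(coins, amount):
    """Bottom-up DP: dp[a] = fewest coins summing to exactly a."""
    INF = float("inf")
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else None

coins, amount = [1, 3, 4], 6
print(greedy_change(coins, amount))  # 3 coins: 4 + 1 + 1
print(dp_change(coins, amount))      # 2 coins: 3 + 3
```

The point of the toy example is the shape of the error: the greedy answer is wrong only by one coin, passes a quick sanity check, and is invisible unless you know to test it against a counterexample — exactly the evaluation skill the anecdote is about.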
Simon Willison's recent piece on <a href="/news/2026-03-16-simon-willison-defines-agentic-engineering-software-development-coding-agents">"agentic engineering"</a> gives useful shape to where this lands. Willison defines agentic engineering as building software with coding agents that write and execute code iteratively, with humans steering at the level of judgment: what to build, which tradeoffs to accept, how to specify problems correctly, and whether the output can actually be trusted. He draws a sharp line between this and "vibe coding" — Andrej Karpathy's term, coined in February 2025 — which describes unreviewed LLM output shipped as finished product rather than treated as a first draft requiring scrutiny.
What runs through the HN thread isn't agreement on which CS topics are now skippable. The sharper concern is about what happens to the habit of critical engagement. If developers accept AI output without asking whether it's actually correct — not just whether it compiles and passes the obvious tests — the atrophy isn't confined to any particular skill set. It's in the posture of an engineer who needs to be right, not just fast. The greedy algorithm anecdote is a small example of what that gap costs.