Martin Fowler says coding agents have a laziness problem: they don't have enough of it. Larry Wall, the creator of Perl, called laziness one of the three virtues of a programmer. Good developers hate repetitive work, so they build abstractions that make future work easier. Fowler loves this; finding the right abstraction is his favorite part of programming. But LLMs don't care. "Work costs nothing to an LLM," Fowler writes. They'll dump code onto a "layercake of garbage" without a second thought. The result is bigger software, not better software. Fowler calls this "technical, cognitive, and intent debt": systems that grow larger without growing more maintainable.
He almost fell into the trap himself. While modifying a music playlist generator, he got frustrated and considered handing the task to a coding agent. Instead, he stepped back and applied YAGNI (You Ain't Gonna Need It). The solution dropped to a couple dozen lines. Would an agent have done the same? Or would it have built something needlessly complex that he'd approve with a lazy "LGTM"?
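Fowler doesn't publish the playlist code, so the sketch below is purely illustrative (the function name, track format, and shuffle logic are all invented). It shows the shape of a YAGNI outcome: no strategy classes, no plugin registry, no configuration object, just a couple dozen lines that do today's job.

```python
import random

# A speculative "agent-style" design might introduce pluggable selection
# strategies and config layers. The YAGNI version just does the one job:
# pick a shuffled subset of tracks that fits a target playlist length.

def make_playlist(tracks, target_minutes, rng=random):
    """Return a list of track titles whose total length fits within
    target_minutes. `tracks` is a list of (title, minutes) pairs."""
    pool = list(tracks)
    rng.shuffle(pool)  # randomize order so playlists vary between runs
    playlist, total = [], 0.0
    for title, minutes in pool:
        if total + minutes > target_minutes:
            continue  # skip tracks that would overshoot the target
        playlist.append(title)
        total += minutes
    return playlist

tracks = [("a", 3.0), ("b", 4.5), ("c", 5.0), ("d", 2.5)]
print(make_playlist(tracks, 10))
```

The point is not this particular function but its size: when the requirement is "a playlist that roughly fits a time budget," the simple version is easier to read, test, and later delete than a speculative framework would be.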
There are ways to push back. Jessica Kerr applies test-driven development to agent prompting: set up verification before changing instructions, add a reviewer agent to check PRs before the coding agent makes changes, and treat agent behavior like code that needs tests.

The deeper problem, raised by Fowler's colleague Mark Little, is that AI systems are built for decisiveness. Given input, produce output. Given ambiguity, resolve it. But sometimes the right answer is doing nothing. Little references the sci-fi film Dark Star, in which a crew member teaches a sentient bomb to doubt its own sensors. Fowler puts it plainly: if we want AI systems that operate safely without constant human oversight, we need to teach them when not to act. In autonomous systems, restraint might be the most important capability we build.
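What "teaching a system when not to act" might look like can be sketched in a few lines. Everything here is hypothetical, not from Fowler's post: the `Proposal` type, the confidence field, and the threshold are invented to make one design point concrete, that "no action" should be a first-class outcome rather than an error.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Proposal:
    """A change an agent wants to make, with a (hypothetical)
    self-assessed confidence score between 0.0 and 1.0."""
    description: str
    confidence: float

def decide(proposal: Optional[Proposal], threshold: float = 0.8) -> str:
    # Restraint as a first-class outcome: when the agent has nothing
    # solid to propose, or isn't confident enough, the answer is
    # "do nothing" rather than a decisive (and possibly wrong) change.
    if proposal is None:
        return "no action: nothing to propose"
    if proposal.confidence < threshold:
        return f"no action: confidence {proposal.confidence:.2f} below {threshold}"
    return f"act: {proposal.description}"

print(decide(None))
print(decide(Proposal("refactor playlist generator", 0.55)))
print(decide(Proposal("fix off-by-one in loop", 0.95)))
```

The design choice worth noticing is the return value: inaction is a normal, reportable result with a reason attached, which makes it something a reviewer (human or agent) can audit rather than a silent failure.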