There's a tempting narrative that clean code doesn't matter anymore. Just let the AI agent write it, test the output, and move on. A recent essay on Yanist.com pushes back hard on that idea, and the argument is straightforward: coding agents have context limits just like humans have cognitive limits. When your codebase is a mess, the agent has to read more files, burn more tokens, and wade through more noise to get anything done. The bill goes up. The quality goes down.
Robert Martin's distinction between a code's value and its structure is useful here. Value is what the software does today; structure is a long-term investment that compounds. For AI agents, that investment pays off immediately: well-organized code means the agent touches fewer files per task. A function that does one thing, with a clear name and typed inputs, spares the agent from cross-referencing five other files to figure out what's happening. What helps a new hire get productive also keeps Claude or GPT-4 on track.
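To make that concrete, here's a minimal sketch of the kind of function the paragraph describes. The `Order` type and the pricing logic are invented for illustration; the point is that the signature alone tells an agent everything it needs.

```python
from dataclasses import dataclass

# Hypothetical example: a single-purpose function with typed inputs.
# An agent (or a new hire) can understand this from the signature
# alone, without opening other files to learn what an "order" carries.

@dataclass
class Order:
    subtotal_cents: int
    tax_rate: float  # e.g. 0.08 for 8%

def total_cents(order: Order) -> int:
    """Return the order total, tax included, in cents."""
    return round(order.subtotal_cents * (1 + order.tax_rate))

print(total_cents(Order(subtotal_cents=1000, tax_rate=0.08)))  # 1080
```

Nothing clever is happening here, and that's the point: one responsibility, one clear name, typed in and out.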
The catch is that agents don't automatically produce clean code just because your repo is clean. They mimic surface-level style without grasping architectural intent. The Feldera project tackles this head-on by maintaining a CLAUDE.md file that explicitly references books like Steve McConnell's "Code Complete" and Dustin Boswell and Trevor Foucher's "The Art of Readable Code." You have to tell the agent what "clean" means in your context. And you still have to review what it produces. That step isn't optional yet.
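The essay doesn't reproduce Feldera's actual CLAUDE.md, but the pattern is easy to sketch. A hypothetical version might pin down what "clean" means for one repo; the section names and rules below are illustrative, not Feldera's:

```markdown
# CLAUDE.md (hypothetical sketch)

## Style references
- Follow the naming guidance in "The Art of Readable Code".
- Follow the routine-design advice in "Code Complete".

## House rules
- One responsibility per function; split anything doing two jobs.
- All public functions take and return typed values.
- No new abstraction without a second concrete use case.
```

The specific rules matter less than the fact that they're written down where the agent reads them on every task.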
Something weirder is happening too. Humans and agents don't even want the same code structure. Humans like abstractions that hide complexity. Agents often perform better with flat hierarchies and explicit type definitions. Tools like TypeScript and Pydantic help because they reduce ambiguity. What a human developer might call verbose boilerplate can be the guardrail that keeps an agent from hallucinating bad data contracts. The codebases that work best in the agent era probably won't look exactly like what we'd write for humans alone.
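The "guardrail" idea can be sketched with Pydantic, which the paragraph above names. The `UserEvent` model is invented for illustration; what matters is that a malformed value fails loudly at the boundary instead of flowing downstream.

```python
from pydantic import BaseModel, ValidationError

# Hypothetical sketch: an explicit data contract as a guardrail.
# The verbosity a human might skip is what stops an agent from
# passing malformed data across a module boundary.

class UserEvent(BaseModel):
    user_id: int
    action: str

# Well-formed input passes; a numeric string is coerced to int.
event = UserEvent(user_id="123", action="login")
print(event.user_id)  # 123

# A bad contract fails loudly at the boundary instead of
# silently corrupting state three modules later.
try:
    UserEvent(user_id="not-a-number", action="login")
except ValidationError as e:
    print("rejected:", len(e.errors()), "error(s)")
```

A human reviewer might see this model as boilerplate around two fields; for an agent it is the difference between an ambiguous dict and a contract it cannot violate unnoticed.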