Will Larson has a theory about why his team finished a Kubernetes migration in three months.

The answer, he thinks, isn't that the work was simpler. It's that his engineers barely did any of the implementing.

Larson, an engineering executive at fintech company Imprint, published a post this week on his blog Irrational Exuberance comparing two infrastructure projects more than a decade apart. In 2014, as a manager at Uber, a three-person team spent the better part of six months building a self-service provisioning platform, most of that time on implementation. In 2026, a similarly sized Imprint team completed a full Kubernetes/ArgoCD continuous deployment migration in three months. The code was mostly written by agents. The engineers spent their time on design, reviewing agent PRs, and course-correcting when things went sideways.

That comparison sets up the real argument of the post. Larson breaks the bottlenecks facing engineering teams into three: time, attention, and judgment. Coding agents have largely solved the first — they generate code faster than any human. They're making progress on the second. What they haven't figured out is judgment: knowing when an architectural decision is going to cause pain two years from now, catching a fundamentally wrong approach before a team is three weeks in, making calls that hold up long after the key person on a project has moved on.

The implication is that senior engineers are no longer primarily implementers. They're directors — evaluating whether agents are building the right thing, not just building things.

To close that gap at scale, Larson proposes "datapacks": bundles of expert knowledge and coding conventions that teams could feed directly into coding agents as context. A team without deep expertise in database migrations could load a datapack written by someone who has done dozens of them, effectively borrowing that judgment. He envisions publishers — O'Reilly is his example — distributing these through package managers integrated into coding agent toolchains, structured the way npm or pip handle code dependencies today.
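Larson's post proposes the idea, not a format. Purely as illustration, a datapack in this model might look like a versioned package whose payload is prose conventions rather than code, with a resolver that assembles the payloads into agent context. A minimal sketch in Python — every name, field, and file path here is invented:

```python
# Hypothetical sketch of a "datapack" dependency following npm/pip
# conventions. All identifiers are invented for illustration; Larson's
# post describes the concept, not a concrete schema.

# A manifest, loosely analogous to package.json: metadata plus pointers
# to prose/convention files that would be injected into an agent's context.
manifest = {
    "name": "oreilly/db-migrations",   # publisher-scoped identifier
    "version": "2.1.0",                # semver, so teams can pin a revision
    "contents": [
        "checklists/zero-downtime.md",
        "conventions/rollback-first.md",
    ],
}

def build_context(manifests: list[dict]) -> str:
    """Concatenate datapack payloads into one context block for a coding agent.

    Stand-in for what a real resolver would do: fetch each listed file,
    dedupe across packs, and fit the result to the agent's context budget.
    """
    sections = []
    for m in manifests:
        header = f"## {m['name']}@{m['version']}"
        files = "\n".join(f"- {path}" for path in m["contents"])
        sections.append(f"{header}\n{files}")
    return "\n\n".join(sections)

print(build_context([manifest]))
```

The npm/pip analogy is doing real work in Larson's framing: versioning, pinning, and publisher scoping are exactly the mechanisms that would let a team audit which expertise an agent was drawing on for a given change.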

It's an ambitious idea, and Larson doesn't claim to have resolved the obvious hard parts: how datapacks get validated, how they stay current, whether packaged expertise can actually substitute for institutional knowledge. He treats those as downstream problems.

Creativity, he says, is the next frontier after judgment — but that's a thread the post doesn't pull yet.

The post is worth reading because Larson isn't extrapolating from benchmarks. He's describing a real project, with real team sizes and timelines, trying to articulate what engineering work actually feels like once agents absorb the implementation layer. That grounding makes the datapacks idea more compelling than it would be as pure speculation. Whether it points to a real product category — somewhere at the intersection of developer tooling, AI infrastructure, and technical publishing — is a question someone is probably already trying to answer.