Juan Cruz Martinez has spent 20 years writing software. He turned 40 this year, which means — in his own reckoning — he has roughly 30 more years of working life ahead of him. He'd like to have a plan. He no longer has one.

Writing in his newsletter The Long Commit, Martinez lays out why this AI cycle feels different from every previous one he's lived through: the shift from Java to JavaScript, the move to cloud, the rise of DevOps. Each of those waves changed how engineers worked. None of them changed how many engineers a company needed. This one might.

The inflection point was Claude Code, Anthropic's agentic coding tool. When its output stopped looking like a rough draft and started looking like production-quality code he would have written himself, the question he'd been avoiding became unavoidable: if code generation is no longer the constraint, what is the experienced engineer actually being paid for?

His answer is judgment — the ability to set direction, catch errors in reasoning before they become errors in production, and hold context that a code-generating agent doesn't have access to. That argument isn't new. What Martinez adds is a sharper diagnosis of where the real risk lives: not in what AI can actually do today, but in the gap between that and what CEOs think it can do. Executives are making headcount decisions based on blog-post-level understanding of agent capabilities. Teams are being cut not because AI has replaced those roles in practice, but because leadership has bought the narrative that it will. The technology is almost beside the point.

His response is a three-part bet: invest in judgment and domain depth that doesn't commoditize easily, keep writing code as a way of developing thinking rather than shipping output, and build a professional reputation that travels — one that isn't hostage to a single employer's assessment of how many engineers they need in 2027. None of it is guaranteed to work. Martinez doesn't pretend it is. What he's describing is a hedge: a way of staying positioned in a market where the decisions most likely to end your career are being made by people whose understanding of AI comes from keynotes, not from using the tools.