Alejandro Wainzinger has a name for something many engineers have felt but not quite articulated. He calls it 'agentic abuse' — and his blog post on the subject is striking a nerve.
The setup will be familiar to anyone working in a moderately pressured tech role. Coding assistants, autonomous task runners, multi-step LLM pipelines: the tools themselves are neutral. What Wainzinger describes is what happens when they stop being optional. Engineers are quietly expected — through performance reviews, sprint metrics, or the soft coercion of team norms — to use agents to absorb workloads that a healthy organisation would address by hiring more people or pushing back on timelines. The tools become a release valve for management dysfunction.
The accountability problem is where the argument gets sharp. When an agent produces bad output, the liability flows to the worker who ran it. The organisation that set the deadlines, structured the incentives, and normalised the reliance on automation faces no equivalent reckoning. Wainzinger calls the result 'productivity theatre': the metrics look healthy while the underlying conditions quietly rot.
What distinguishes the piece from standard AI-doom criticism is its focus on craft. Wainzinger isn't primarily worried about job loss in the blunt displacement sense. He's worried about something harder to measure — the erosion of judgment, skill, and professional autonomy that gives engineering work its value. Delegating decisions prematurely to agents doesn't just produce worse software. It deskills the people doing the work: the productivity gains concentrate at the organisational level while the engineers absorb the risk and lose their feel for the craft.
The post is well-timed. Agentic AI adoption is accelerating across the industry, and the labour norms around it are still being negotiated. Wainzinger's framing of the productivity tool as an instrument of extraction offers an early vocabulary for what may become a defining tension of the next phase of AI integration in tech.