Stay SaaSy published a six-point advisory this week arguing that late 2025 was a real turning point for AI tooling in software teams, and that most engineering managers have not caught up. Written by the blog's anonymous SaaSy EM and published March 13 at [staysaasy.com](https://staysaasy.com/management/2026/03/13/management-in-the-age-of-ai.html), the post is anchored by an argument with little precedent in management literature: consumption-based AI budgets are now a live operational problem, and nobody has worked out how to handle them.

Every engineer now has a token spend: a per-person, per-month LLM compute cost. When a strong performer burns through their allocation mid-sprint, what does a manager do? Extend the budget? Pull from a team pool? Let them grind to a halt? The author offers no clean answer, which is the honest position; it's a resource-allocation question that didn't exist two years ago, and it lacks the institutional muscle memory that headcount decisions or hardware quotas have built up over decades.
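The post names the options without choosing one, but the mechanics are easy to make concrete. Here is a minimal sketch of one of the policies it floats (personal allocations with overflow drawn from a shared team pool); all names, numbers, and the policy itself are hypothetical, not anything the author prescribes:

```python
from dataclasses import dataclass, field

@dataclass
class TeamTokenBudget:
    """Hypothetical policy: each engineer gets a personal monthly
    token allocation; once it is exhausted, further spend draws down
    a shared team pool until that, too, runs dry."""
    personal_allocation: int                  # tokens per engineer per month
    team_pool: int                            # shared overflow reserve
    spent: dict = field(default_factory=dict)  # tokens used, per engineer

    def spend(self, engineer: str, tokens: int) -> bool:
        """Record a spend. Returns False when both the personal
        allocation and the team pool are tapped out."""
        used = self.spent.get(engineer, 0)
        personal_left = max(self.personal_allocation - used, 0)
        overflow = max(tokens - personal_left, 0)
        if overflow > self.team_pool:
            # The manager's dilemma from the post: halt, or top up?
            return False
        self.team_pool -= overflow
        self.spent[engineer] = used + tokens
        return True

budget = TeamTokenBudget(personal_allocation=1_000_000, team_pool=500_000)
budget.spend("alice", 1_200_000)  # overflows into the pool
```

Even this toy version surfaces the open questions: who approves pool draws, whether unused allocation rolls over, and whether a high-spend engineer is a cost problem or your best investment.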

The other five shifts cover familiar ground but with sharper edges. Managers need to become hands-on builders: without direct fluency in AI tools, you can't set realistic output expectations or coach your reports. The analogy the author reaches for is a 2015 engineering manager who didn't understand the internet. In many cases, the right call is to build the thing yourself rather than spend the afternoon running cover for your team's calendar.

Output expectations have to rise commensurately. "I ran out of time" is a harder excuse to sustain when engineers can compress weeks of work into days. Fuzzy direction — which used to get resolved through slow iteration and natural feedback loops — now accelerates to failure. Teams can ship the wrong product entirely before the misalignment surfaces.

Deliberate collaboration needs to be engineered in as every team member runs parallel agent-driven workstreams. Without it, the product starts to feel incoherent — "built by five different companies," as the post puts it.

The hiring argument is pointed: the gap between a strong engineer and a weak one using identical AI tools is not 2x but 100x. One compounds value; the other compounds slop. That has direct implications for hiring bars — and for how managers evaluate the people already on their teams.