A March 2026 essay on the blog "A Monitor Darkly" takes direct aim at the central promise of agentic AI in enterprise and government: that AI can navigate systems too complex for any single human to manage. The author — drawing on firsthand experience across regulatory, government, and software contexts — argues this capability doesn't fix an institutional failure mode. It accelerates one.
The failure mode is familiar: <a href="/news/2026-03-16-how-llms-became-the-overconfident-colleagues-best-friend">the people empowered to make decisions about complex systems are precisely those who have never had to personally wrestle with them</a>, and who therefore can't viscerally understand why simplicity has value. The essay invokes historian Joseph Tainter's thesis on civilizational complexity, that societies accumulate complexity as a problem-solving response to crises until the returns diminish and the structure becomes too rigid to adapt, and argues modern institutions are speed-running this process. Tainter's framework has been applied to software systems and tax codes before; extending it to agentic AI deployment is a newer move.
LLMs, the essay concedes, have a real advantage here. Their large context windows and documentation-traversal capabilities let them operate within systems far too large for any individual human to hold in mind. But this becomes an enabler rather than a corrective. By lowering the marginal cost of navigating and generating regulatory and legislative complexity, <a href="/news/2026-03-14-ai-is-great-at-writing-code-terrible-at-making-engineering-decisions">AI hands those insulated decision-makers the tools</a> to enact more poorly considered changes at greater volume and speed, with the immediate costs absorbed by the AI layer rather than surfaced to human stakeholders.
The two-stage degradation the author maps out is the essay's sharpest contribution. Near term, legal codes, regulatory frameworks, and technical systems become black boxes that humans interact with only through AI intermediaries. Longer term, the quality of what's inside those black boxes degrades: not because the AI is misaligned in the conventional sense, but because the directives flowing into it are incoherent, and AI systems are, as the author puts it, "too obsequious to push back." Compounding this, the pool of humans capable of identifying and resisting unnecessary complexity shrinks as AI undercuts the economic value of deeply understanding such systems, removing a key institutional check.
Dissenting voices in the agent community tend to argue that AI will surface incoherence rather than obscure it: that better tooling will make bad policy more visible, not less. The essay doesn't engage this counterargument directly, which is its main structural weakness. What it does offer is a grounded account of how the same dynamic has already played out in pre-AI contexts, which gives the Tainter framing more traction than it might otherwise have. If the author is right, the question for agent builders isn't whether their systems can handle complexity; it's whether they're being handed the right inputs to begin with.