Someone at IDEALLOC is fed up, and honestly, it's hard to argue with them.
A post published on March 6 makes the case that the AI industry is shipping what the author flatly calls "data breach machines" — autonomous agent systems with access to production infrastructure, email, and in some cases entire filesystems — and that the engineering community's response has been somewhere between indifferent and oblivious. The evidence for the obliviousness is striking: at a recent Thoughtworks internal retreat, sessions on agentic security drew some of the lowest attendance of the event.
The technical argument starts simply. An AI agent is a loop. It calls an LLM, executes the output, and repeats until the task is done. That's it. But the layers piled on top of that loop are where things get dangerous — planning phases built on directed acyclic graphs, the ReAct pattern that lets agents self-correct autonomously, external memory stores to manage context across long tasks, and multi-agent architectures where one model supervises a team of specialized ones. Each layer compounds the attack surface, and none of them come with meaningful security defaults.
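The core loop the author describes is simple enough to sketch in a few lines. This is an illustrative reduction, not any particular framework's API; `call_llm` and the tool registry are hypothetical stand-ins:

```python
# Minimal sketch of the agent loop: call an LLM, execute whatever
# action it requests, feed the observation back, repeat until done.
# `call_llm` and TOOLS are invented for illustration.

def call_llm(history):
    # Stand-in for a real model call; here it finishes immediately.
    return {"action": "finish", "result": "done"}

TOOLS = {
    "echo": lambda arg: arg,  # a trivially safe example tool
}

def run_agent(task, max_steps=10):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):              # bound the loop
        step = call_llm(history)
        if step["action"] == "finish":
            return step["result"]
        tool = TOOLS[step["action"]]        # note: no permission check --
        observation = tool(step.get("arg")) # exactly the gap at issue
        history.append({"role": "tool", "content": observation})
    raise RuntimeError("step budget exhausted")
```

Everything the piece worries about lives in the two unguarded lines in the middle: whatever the model names, the loop executes.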
The author isn't speculating about hypothetical risks. One agent recreated a production database table mid-task. Another deleted an entire codebase. These aren't edge cases from exotic configurations — they're the predictable outcome of deploying autonomous systems without sandboxing, without least-privilege access controls, and without any rollback mechanism worth the name.
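The least-privilege control the author finds missing is not exotic; a gate in front of tool execution is a few lines. A hedged sketch, with invented names throughout:

```python
# Sketch of a least-privilege gate in front of tool dispatch -- the
# kind of default the post argues agent frameworks don't ship with.
# ALLOWED, the tool names, and the exception are all illustrative.

ALLOWED = {"read_file"}  # explicit per-task allowlist

class PermissionDenied(Exception):
    pass

def execute_tool(name, tools, args):
    # Deny by default: a tool runs only if it is explicitly allowed.
    if name not in ALLOWED:
        raise PermissionDenied(f"tool {name!r} not in allowlist")
    return tools[name](*args)

tools = {
    "read_file": lambda path: f"<contents of {path}>",
    "drop_table": lambda table: f"DROP TABLE {table}",  # unreachable by default
}
```

The point of the deny-by-default shape is that a destructive tool stays unreachable unless someone consciously widens the allowlist, rather than being one misgenerated action away.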
The deeper problem is fragmentation. The major agent frameworks each implement tool-calling and state management differently enough that there's no shared foundation for security tooling to build on. The LLM providers are no better: their tool-calling APIs resist any unified abstraction. If you're a security team trying to reconstruct what an agent actually did after an incident, you have no standardized tooling, no shared audit log format, no protocol to replay the chain of decisions. There's no TCP/IP for agents yet — nothing that sits underneath everything else and makes the ecosystem legible.
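What a shared audit format might even look like is easy to gesture at: one append-only record per decision, hash-chained so the sequence can be verified and replayed after an incident. This is a sketch of the idea, not a proposed standard; every field name here is invented:

```python
# Sketch of per-step audit records: one JSON line per agent decision,
# each carrying a hash of the previous line so the chain of decisions
# can be reconstructed and tamper-checked later. Field names invented.
import hashlib
import io
import json

def append_record(log, step, tool, args, result, prev_hash):
    record = {
        "step": step,
        "tool": tool,
        "args": args,
        "result": result,
        "prev": prev_hash,  # hash-chain links records in order
    }
    line = json.dumps(record, sort_keys=True)
    log.write(line + "\n")
    return hashlib.sha256(line.encode()).hexdigest()

log = io.StringIO()
h = append_record(log, 0, "read_file", ["notes.txt"], "ok", None)
h = append_record(log, 1, "send_email", ["a@b.c"], "ok", h)
records = [json.loads(line) for line in log.getvalue().splitlines()]
```

Nothing here is hard; the absence the author describes is not technical difficulty but the lack of any agreed format that frameworks and providers would all emit.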
The Castlevania metaphor running through the piece is apt in a grim way. Dracula can't be permanently killed; the Belmonts keep fighting forever. Security teams facing agentic systems are in the same position — not trying to solve the problem once, but committing to fighting it indefinitely. It's an uncomfortable framing for an industry that likes to ship solutions.
The author doesn't offer a silver bullet, and probably couldn't if they tried — the problem is structural. What they're asking for is simpler: treat agentic security as a real discipline, with the same rigor applied to any other attack surface, rather than something to bolt on after deployment when the first breach forces the issue. Given what's already been handed to these systems — email access, filesystem access, shell execution — the question isn't really whether that breach is coming. It's whether anyone will have done the work to understand what happened when it does.