Petko D. Petkov, writing on ChatBotKit's editorial blog, makes a case that most AI engineering teams would rather not hear: the mathematics that constrain distributed computing apply equally to multi-agent LLM systems, and no amount of infrastructure spending changes that. The piece invokes Amdahl's Law — which puts a hard ceiling on parallel speedup whenever any sequential work remains — alongside the Universal Scalability Law, FLP impossibility, and the CAP theorem to argue that agent swarms face the same provable limits that distributed systems researchers identified decades ago.
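Amdahl's ceiling is easy to make concrete. A minimal sketch of the standard formula — illustrative numbers, not drawn from the article — shows why adding agents stops paying off once any sequential coordination step remains:

```python
def amdahl_speedup(serial_fraction: float, n: int) -> float:
    """Speedup on n parallel workers when serial_fraction of the
    total work cannot be parallelized (Amdahl's Law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

# With even 5% sequential work, speedup can never exceed 1/0.05 = 20,
# no matter how many agents run in parallel.
for n in (2, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.05, n), 2))
# 2 1.9
# 8 5.93
# 64 15.42
# 1024 19.64
```

Going from 64 workers to 1024 buys barely four more units of speedup here, which is the "hard ceiling" the piece refers to.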
The more uncomfortable claim is that those limits bite harder with AI agents than with conventional systems. Two CPUs sharing a memory bus pass data at billions of operations per second. Two LLM agents pass natural language — ambiguous, lossy, and expensive, where a single misinterpretation can corrupt an entire reasoning chain rather than flip a recoverable bit. Under the Universal Scalability Law, that overhead isn't just a constant drag: the law's coherency term grows quadratically with node count, because each node must reconcile its state with every other.
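The Universal Scalability Law makes that quadratic penalty explicit. A small sketch using Gunther's standard form — the parameter values are illustrative assumptions, not figures from the article — shows throughput actually going retrograde once coherency costs dominate:

```python
def usl_throughput(n: int, alpha: float, beta: float) -> float:
    """Relative throughput of n nodes under the Universal Scalability Law.
    alpha: contention (serialization) penalty, grows linearly in n.
    beta:  coherency (crosstalk) penalty, grows quadratically in n."""
    return n / (1.0 + alpha * (n - 1) + beta * n * (n - 1))

# With any nonzero coherency cost, adding nodes eventually
# *reduces* total throughput (retrograde scaling).
for n in (1, 4, 16, 64):
    print(n, round(usl_throughput(n, alpha=0.05, beta=0.01), 2))
# 1 1.0
# 4 3.15
# 16 3.86
# 64 1.44
```

At these assumed parameters, 64 nodes do less useful work than 4 — the quadratic n·(n−1) reconciliation term has swallowed the gains.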
The prescription will be familiar to anyone who has read Conway's Law or spent time in microservices: orchestrator-based tree architectures with tightly scoped subtasks and minimal shared state, not all-to-all swarms where every agent must stay aware of every other. Petkov draws the parallel to how human organizations evolved toward small, bounded teams — positioning the lesson as architectural rather than technological, and the implication for the agent platform market as clear: smarter decomposition beats raw agent count.
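The topology argument reduces to simple counting: an all-to-all swarm of n agents maintains n(n−1)/2 pairwise channels, while an orchestrator-rooted tree needs only n−1 edges. A trivial sketch (my arithmetic, not the article's):

```python
def swarm_channels(n: int) -> int:
    """Pairwise communication channels in an all-to-all swarm of n agents."""
    return n * (n - 1) // 2

def tree_channels(n: int) -> int:
    """Edges in an orchestrator-rooted tree over n agents."""
    return n - 1

# At 50 agents: 1225 channels to keep coherent all-to-all,
# versus 49 under an orchestrator.
print(swarm_channels(50), tree_channels(50))
# 1225 49
```

Linear growth in coordination surface versus quadratic is the whole case for "smarter decomposition beats raw agent count."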
One caveat worth noting: Petkov is building cbk.ai while writing in what he describes as a personal capacity, and ChatBotKit's related editorial output — including a teased piece titled "Agent Infrastructure Is Not the Hard Problem" — suggests a deliberate positioning against prevailing industry hype. The analysis stands on its own merits, but readers should weigh the source.