Somewhere in 2024, a threshold was crossed: automated bot traffic surpassed human traffic on the internet for the first time, according to Cloudflare data. In the opening months of 2025, RAG-based agent traffic — the kind generated by AI systems retrieving and acting on live information — grew 49%. The primary consumer of APIs is no longer a developer staring at documentation. It's software, running headless, at scale, with no one watching.

API infrastructure hasn't caught up. That's the argument Apideck makes in a new guide on what it calls 'agent experience,' or AX — a design discipline for APIs that need to work when the other end of the connection can't Google an error message or intuit an undocumented enum.

The problems Apideck identifies aren't exotic. They're mostly gaps that don't matter when a human is in the loop but become hard failures for autonomous systems. Vague OpenAPI descriptions — technically valid but semantically thin — cause agents to mis-route requests because they're matching on intent, not memory. Error responses without machine-readable fields like doc_url leave agents with nothing to act on. Stripe already includes doc_url in its error payloads; most APIs don't. Recovery metadata like is_retriable and retry_after_seconds is similarly absent from most specs, which means an agent hitting a transient failure has no signal about whether retrying is safe or pointless.
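To make the gap concrete, here is a sketch of an error payload carrying the machine-readable fields described above, and the trivial recovery logic they enable. The overall shape is hypothetical, not any specific provider's schema; only the field names (doc_url, is_retriable, retry_after_seconds) come from the guide's examples.

```python
# Illustrative error response for a throttled request. The envelope is
# hypothetical; the recovery fields are the ones named in the article.
error_payload = {
    "error": {
        "code": "rate_limit_exceeded",
        "message": "Too many requests in the last 60 seconds.",
        "doc_url": "https://api.example.com/docs/errors#rate_limit_exceeded",
        "is_retriable": True,
        "retry_after_seconds": 30,
    }
}

def plan_recovery(payload: dict) -> str:
    """Decide what an agent should do, using only the payload itself."""
    err = payload["error"]
    if err.get("is_retriable"):
        # Safe to retry; the payload even says how long to wait.
        return f"retry after {err.get('retry_after_seconds', 1)}s"
    # Not retriable: the only useful move is to surface the docs link.
    return f"abort; see {err.get('doc_url', 'no docs available')}"

print(plan_recovery(error_payload))  # retry after 30s
```

Without those three fields, the same agent is left pattern-matching on a human-oriented message string, which is exactly the failure mode the guide describes.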

Authentication is its own category of breakage. Browser-based OAuth redirect flows assume there's a human ready to click through a login screen. There isn't. API keys and OAuth client credentials grants are the practical alternatives — both well understood, but not always the easiest path for integrators.
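The client credentials grant (RFC 6749, section 4.4) works precisely because nothing in it requires a browser: the agent exchanges its credentials for a token in one machine-to-machine POST. A minimal sketch of building that request; the token URL, client credentials, and scope are all placeholders, and a real integration would send this body with an HTTP client:

```python
from urllib.parse import urlencode

# Hypothetical token endpoint and credentials, for illustration only.
TOKEN_URL = "https://auth.example.com/oauth/token"
CLIENT_ID = "agent-client-id"
CLIENT_SECRET = "agent-client-secret"

# Client credentials grant: no redirect, no login screen, no human.
# The entire exchange is a single form-encoded POST to the token URL.
body = urlencode({
    "grant_type": "client_credentials",
    "client_id": CLIENT_ID,
    "client_secret": CLIENT_SECRET,
    "scope": "read:contacts",  # hypothetical scope
})

print(body)
```

Contrast this with the authorization code flow, where the redirect step assumes a browser session that a headless agent simply does not have.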

The rate-limiting problem is subtler. A human developer notices when they're being throttled and adjusts. Agents don't notice until they've already spiraled into a cascade of failed retries. Proactive rate-limit headers and bulk endpoints — both straightforward to add — would give agents the signals to manage themselves.
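What self-management could look like, assuming the server advertises its budget in response headers: the agent paces itself before it is ever throttled, instead of reacting after the fact. The header names here follow the draft IETF RateLimit convention and are an assumption; real providers vary (X-RateLimit-Remaining, Retry-After, and so on).

```python
def seconds_to_wait(headers: dict) -> float:
    """How long an agent should pause before its next request,
    given proactive rate-limit headers (names assumed, not standard
    across providers)."""
    remaining = int(headers.get("RateLimit-Remaining", 1))
    reset = float(headers.get("RateLimit-Reset", 0))  # seconds until the window resets
    if remaining > 0:
        # Spread the remaining budget evenly across the window
        # instead of bursting and then stalling at the limit.
        return reset / remaining
    return reset  # budget exhausted: wait out the rest of the window

# 5 requests left, window resets in 10 seconds -> pace at one request per 2s.
print(seconds_to_wait({"RateLimit-Remaining": "5", "RateLimit-Reset": "10"}))  # 2.0
```

The point is that this logic is only possible when the headers exist; without them, the agent's first signal is a 429 it has no context for.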

The sixth recommendation is getting the most attention: adoption of the llms.txt standard, championed by fast.ai's Jeremy Howard, which provides a structured, Markdown-based documentation layer designed for LLM parsing rather than human reading. Alongside Anthropic's Model Context Protocol and richly annotated OpenAPI specs, llms.txt is becoming part of the infrastructure that determines whether an API is reachable by agents at all. It is, in effect, a discoverability layer for the automated web, quietly sorting providers into those agents can reliably call and those they effectively cannot reach.
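Per the llms.txt proposal, the file lives at the site root and follows a constrained Markdown shape: an H1 title, a blockquote summary, then H2 sections containing annotated links. A hypothetical example for an API provider (all names and paths are illustrative):

```markdown
# Example CRM API

> REST API for managing contacts, companies, and deals. All endpoints
> return JSON; authentication uses OAuth client credentials.

## Docs

- [API reference](https://api.example.com/docs/reference.md): every endpoint, with request and response schemas
- [Error codes](https://api.example.com/docs/errors.md): machine-readable error fields and retry guidance

## Optional

- [Changelog](https://api.example.com/docs/changelog.md)
```

The constrained structure is the point: an LLM can ingest this in one pass and know where the authoritative documentation lives, without crawling a JavaScript-heavy developer portal.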

Apideck's open-source Portman CLI, used for OpenAPI contract testing, is a useful diagnostic here: if a spec is too thin for automated contract testing, it is almost always too thin for agents, for the same underlying reasons.

The business case Apideck makes is that AX and developer experience (DX) improvements largely overlap — richer specs and cleaner error responses help human developers too. That's probably true. But the competitive pressure isn't really about quality. It's about access. As agents absorb more of the integration layer, the APIs they can reliably call will capture traffic; the ones they can't will be effectively invisible, however polished the developer portal.