A viral Forbes article claiming Anthropic's $200-per-month Claude Code Max plan consumes $5,000 in compute costs per user has been challenged by AI economics blogger Martin Alderson, who argues the figure fundamentally conflates retail API pricing with actual inference costs. Writing on martinalderson.com, Alderson contends that while a heavy power user could theoretically rack up $5,000 worth of tokens at Anthropic's published API rates — $5 per million input tokens and $25 per million output tokens — those rates represent what Anthropic charges external customers, not what it costs Anthropic to serve those tokens internally.

To estimate real inference costs, Alderson benchmarks against competitively priced open-weight models of comparable architecture on OpenRouter, where multiple providers compete on margin. Qwen 3.5 397B-A17B, served via Alibaba Cloud, is priced at $0.39 per million input tokens and $2.34 per million output tokens — roughly a tenth of Anthropic's API rates. Kimi K2.5, a 1-trillion-parameter mixture-of-experts model, comes in similarly at $0.45 input and $2.25 output per million tokens. Applying that 10x ratio, Alderson estimates that even the heaviest Claude Code Max users cost Anthropic roughly $500 per month in real compute, representing a $300 loss against the $200 subscription — not the $4,800 shortfall implied by the viral narrative. For the average user, Anthropic's own /cost command data suggests roughly $18 per month in actual serving costs against $20 to $200 in subscription revenue. The actual $5,000 figure, Alderson argues, applies to Cursor, which must pay Anthropic's retail API rates to serve Claude models to its own users.
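Alderson's arithmetic can be sketched directly from the figures quoted above. The dollar amounts are the ones cited in the article; the rounded 10x retail-to-cost multiple is Alderson's estimate, not a measured value.

```python
# Published rates, $/million tokens (figures as quoted above)
ANTHROPIC_INPUT = 5.00    # Anthropic API, input
ANTHROPIC_OUTPUT = 25.00  # Anthropic API, output
QWEN_INPUT = 0.39         # Qwen 3.5 via Alibaba Cloud on OpenRouter, input
QWEN_OUTPUT = 2.34        # same, output

# Implied markup of Anthropic's retail rates over open-weight serving prices
input_ratio = ANTHROPIC_INPUT / QWEN_INPUT      # ~12.8x
output_ratio = ANTHROPIC_OUTPUT / QWEN_OUTPUT   # ~10.7x

retail_usage = 5_000   # heavy user's monthly token spend at API rates
cost_ratio = 10        # Alderson's rounded retail-to-cost multiple
subscription = 200     # Claude Code Max monthly price

real_cost = retail_usage / cost_ratio            # estimated true serving cost
loss_per_heavy_user = real_cost - subscription   # loss on the heaviest users
naive_shortfall = retail_usage - subscription    # the viral narrative's figure

print(f"markup: input ~{input_ratio:.1f}x, output ~{output_ratio:.1f}x")
print(f"estimated real cost ${real_cost:,.0f}/mo, "
      f"loss ${loss_per_heavy_user:,.0f}, "
      f"vs claimed shortfall ${naive_shortfall:,.0f}")
```

Running the numbers reproduces both ends of the dispute: about $500 in real compute and a $300 loss per heavy user, against the $4,800 shortfall the Forbes framing implies.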

The Hacker News thread surfaced real methodological criticism. Comparing Anthropic's proprietary models to Qwen or Kimi is imperfect: Chinese-developed open-weight models tend to be unusually inference-efficient, and OpenRouter providers may apply aggressive quantization that wouldn't carry over to Anthropic's architecture. The louder pushback, though, was aimed at the original Forbes piece — commenters called the "inference is a money pit" framing an uncritical meme that journalism keeps recycling without distinguishing API pricing from actual compute costs. The more serious challenge to Anthropic's economics isn't serving tokens; it's amortizing training runs that cost $1 billion to $4 billion each across a subscriber base of roughly 2 million Claude Code users generating $2.5 billion in annualized revenue — thin coverage against the escalating cost of staying at the frontier.
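The "thin coverage" point can be made concrete with a back-of-envelope calculation. The training-run costs, user count, and revenue are the figures quoted in the thread; the one-year amortization window is an illustrative assumption, not something the source specifies.

```python
# Figures as quoted above
users = 2_000_000              # approximate Claude Code subscriber base
annual_revenue = 2.5e9         # annualized revenue, dollars
run_cost_low, run_cost_high = 1e9, 4e9  # cost range of one training run

revenue_per_user = annual_revenue / users  # dollars per user per year

# Illustrative assumption: one frontier run must be recouped per year
training_per_user_low = run_cost_low / users
training_per_user_high = run_cost_high / users

print(f"revenue per user per year: ${revenue_per_user:,.0f}")
print(f"amortized training cost per user per year: "
      f"${training_per_user_low:,.0f} to ${training_per_user_high:,.0f}")
```

At the high end of the range, a single run's amortized cost ($2,000 per user per year) exceeds per-user revenue ($1,250) before any serving, salaries, or the next run are counted — which is what makes training, not inference, the real pressure.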

Alderson is explicit on this: Anthropic is not profitable, and inference costs are not why. Training runs, researcher salaries, and multi-billion-dollar compute commitments are the real pressure. His narrower point — that labs' API pricing carries substantial markup over actual serving costs — has a direct consequence for third-party developers. Cursor pays Anthropic's retail rates. Anthropic does not. That structural gap drives the unit economics of the AI tooling market, and it isn't going away.