AMD's AI director ran the numbers. An analysis of 6,852 session logs revealed a drop in reasoning depth. His conclusion: Claude Code "cannot be trusted to perform complex engineering tasks." That's not venting on Twitter. That's a conclusion backed by real data.
Javier Tordable, CEO at Pauling.AI, has had enough. His blog post "Claude Is Dead" pulls no punches. Paid users get 30 to 60 minutes of use before rate limits kick in. On the $20 Pro plan, simple features trigger instant limits. Max subscribers watch sessions die before completing anything useful. One developer spent four hours debugging with Claude Code, then watched OpenAI Codex solve the same issue in one shot. Reports indicate that invisible tokens are eating into quotas. Tech influencer @clairevo posted that "it does seem like claude code got a little dumber... Feels a little less proactive." People are leaving for Gemini and alternatives like GLM5.
Tordable's sharpest accusation is that Anthropic quietly turned down the model's thinking capacity and cranked up hidden context compression to save on GPU costs. Third-party agents like OpenClaw got kicked off flat-rate limits. The company's deep dependency on AWS infrastructure (tied to Amazon's $4 billion investment) may explain the compute squeeze. Cold comfort for developers who built real workflows around a tool that now makes what the AMD analysis called "low IQ mistakes."
Anthropic needs to address this head-on. The exodus continues until they do.