Javier Tordable, CEO of Pauling.AI and former Google and Microsoft engineer, says Anthropic's Claude Code is effectively dead. His argument is blunt: Anthropic built a tool developers loved, then wrecked it with aggressive rate limits and quality cuts while keeping prices high.
AMD's AI director analyzed 6,852 session logs and 234,000 tool calls. His conclusion: Claude Code now makes what he called "Low IQ mistakes" and "cannot be trusted to perform complex engineering tasks."
Pro plan users report getting 30 to 60 minutes of work before hitting token caps. One developer on the $20 plan called it "virtually unusable right now." Another on the Max plan watched 70% of their session burn before finishing a single task. Users on X describe a model that stops mid-task without warning or rewrites entire files for no apparent reason. One developer spent four hours debugging with Claude Code, then switched to OpenAI Codex, which solved the problem immediately.
Tordable attributes the degradation to cost-cutting. He says Anthropic turned down the model's reasoning capacity to save on compute, increased hidden context compression, and now auto-compacts conversations more aggressively. The $200 Max plan, which he says was always heavily subsidized, sees tokens evaporate in bursts, with peak-hour throttling on top. Developers are canceling and migrating to Gemini, GLM5, local deployments, or anything that won't rug-pull them mid-project.
This pattern is familiar. In 2023, users documented similar quality drops in GPT-4: data scientist Lior S. ran controlled experiments showing measurable declines in code generation and math reasoning, and OpenAI eventually admitted infrastructure changes had caused the issues. Microsoft restricted Bing Chat's capabilities. Midjourney faced backlash when updates introduced stricter content moderation. The lesson keeps repeating: when you build on SaaS AI tools, the provider can change the product under you, and you have no recourse.