Amazon is going all in on Anthropic. The company is investing $5 billion immediately, with another $20 billion possible. The catch? Anthropic commits to spending $100 billion on AWS over the next decade. The deal locks in up to 5 gigawatts of compute capacity, including access to Amazon's current and future Trainium chips for training and running Claude.

This is a silicon power play. Anthropic is shifting toward Amazon's custom Trainium chips instead of relying only on NVIDIA GPUs. Trainium is built for the transformer architectures that power large language models. Amazon says it delivers up to 2x better price-performance than comparable GPU instances for NLP workloads. If those claims hold up, it's a direct shot at NVIDIA's dominance in AI training infrastructure.
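It helps to unpack what a "2x price-performance" claim actually means in cost terms. The sketch below uses entirely hypothetical prices and throughput numbers (the real AWS figures aren't in the announcement); the point is just the arithmetic: doubling work-per-dollar halves the cost per unit of work.

```python
# Illustrative only: what a "2x price-performance" claim means in cost terms.
# All numbers below are hypothetical placeholders, not actual AWS pricing.

gpu_price_per_hour = 40.0   # hypothetical GPU instance price ($/hr)
gpu_throughput = 1000.0     # hypothetical work units per hour

# "Price-performance" here means throughput per dollar.
gpu_price_perf = gpu_throughput / gpu_price_per_hour

# A 2x price-performance claim: twice the work per dollar...
trn_price_perf = 2 * gpu_price_perf

# ...which is the same as half the cost per unit of work.
gpu_cost_per_unit = 1 / gpu_price_perf   # 0.04 $/unit
trn_cost_per_unit = 1 / trn_price_perf   # 0.02 $/unit

print(gpu_cost_per_unit, trn_cost_per_unit)
```

Whether the claim holds in practice depends on workload fit, which is exactly why Amazon scopes it to "comparable GPU instances for NLP workloads."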

Project Rainier, the compute cluster at the center of this agreement, reportedly packs hundreds of thousands of Trainium chips. That's Amazon proving it can design and deploy custom silicon at scale. The full Claude platform will also sit directly inside AWS, letting customers use their existing billing and governance controls.
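A rough back-of-envelope shows how those headline numbers relate. Assuming the full 5 gigawatts applies across the deal's Trainium capacity and taking "hundreds of thousands" as roughly 500,000 chips (both illustrative assumptions, not disclosed specs), the implied facility-level power share per chip comes out to about 10 kW, which covers cooling, networking, and everything else in the data center, not the chip alone:

```python
# Back-of-envelope: implied facility-level power budget per chip.
# Both inputs are illustrative assumptions, not disclosed figures.

total_power_watts = 5e9    # 5 gigawatts of committed capacity
chip_count = 500_000       # "hundreds of thousands" of Trainium chips

watts_per_chip = total_power_watts / chip_count
print(watts_per_chip)  # 10000.0 W of facility power per chip
```

The takeaway isn't the exact figure, it's the order of magnitude: this is utility-scale infrastructure, not a server room.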

Anthropic's annualized revenue has crossed $30 billion. That kind of growth demands serious infrastructure. The arrangement works for both sides: Anthropic gets compute, and Amazon locks in a flagship AI customer for its custom chips. Neither company is being subtle about its ambitions.