OpenAI is going multi-cloud. The company struck a deal with AWS to bring its models to Amazon Bedrock Managed Agents, a new service that amounts to "Codex in AWS." That description comes from Ben Thompson's interview with OpenAI CEO Sam Altman and AWS CEO Matt Garman, published Tuesday on Stratechery.

This was inevitable. Anthropic's multi-cloud availability fueled its rapid enterprise growth, while Azure exclusivity actively hurt OpenAI's reach with customers who want models on their existing infrastructure. An amended Microsoft-OpenAI agreement now kills Azure's exclusivity. Microsoft keeps first-launch rights and its license through 2032, but OpenAI can distribute anywhere.

Bedrock Managed Agents targets enterprises already running on AWS. The pitch: if your data lives in AWS, you get OpenAI's agent capabilities with built-in security and reduced complexity. It's distinct from AgentCore, Amazon's existing agent platform, suggesting AWS is building a tiered approach to agent deployment.

But the technical reality gets messy fast. Porting OpenAI's models to AWS Trainium chips means rewriting CUDA-based code for the Neuron SDK, rethinking memory management from scratch, and re-engineering quantization. Different hardware architectures can also produce numerically divergent outputs for the same model, so the validation work alone is substantial.
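To make the validation burden concrete, here is a minimal sketch of the kind of cross-backend parity check such porting work involves. The function name, tolerances, and simulated "ported backend" noise are illustrative assumptions, not OpenAI's or AWS's actual test harness:

```python
import numpy as np

def parity_check(reference, candidate, rtol=1e-3, atol=1e-5):
    """Compare logits from two hardware backends and report divergence.

    Returns (passed, max_abs_diff). The thresholds here are illustrative;
    real validation suites tune them per layer and per dtype.
    """
    reference = np.asarray(reference, dtype=np.float64)
    candidate = np.asarray(candidate, dtype=np.float64)
    max_abs_diff = float(np.max(np.abs(reference - candidate)))
    passed = bool(np.allclose(reference, candidate, rtol=rtol, atol=atol))
    return passed, max_abs_diff

# Simulated logits: the hypothetical "port" differs by tiny float noise,
# standing in for accumulation-order and kernel differences across chips.
rng = np.random.default_rng(0)
ref = rng.standard_normal((4, 32))            # reference backend logits
port = ref + rng.normal(0, 1e-6, ref.shape)   # ported backend, small drift

ok, diff = parity_check(ref, port)
```

Multiply a check like this across every layer, dtype, and sequence length a model supports, and "substantial validation work" stops sounding like hyperbole.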

Enterprise adoption has been bottlenecked by cloud lock-in. Companies won't shuttle sensitive data between providers just to use a specific model. Bedrock Managed Agents removes that friction for AWS shops, which is most of the enterprise world. Garman framed the opportunity as echoing the original AWS vision of putting powerful tools in front of builders.

The difference now is that AWS isn't the only game in town. Google pushes full vertical integration, pairing its models with its own chips and cloud. Anthropic runs everywhere. OpenAI needed this to stay competitive. And Microsoft gets a cleaner P&L by stopping revenue share payments to OpenAI, even as Azure loses its key differentiator.