OpenAI has given up on building its own data centers. The company's Stargate project, originally pitched as a $500 billion joint venture with Oracle and SoftBank to construct AI infrastructure, is now what OpenAI calls "an umbrella for our compute strategy." That's corporate speak for "we're leasing instead of owning." Projects planned for the UK and Norway have been paused or handed off to partners such as Microsoft, according to Tom's Hardware.

It's a big retreat from where OpenAI appeared to be heading. When Stargate was announced, the company looked determined to control its own compute destiny. Internal disagreements and the sheer cost of building data centers changed the calculus. Running your own facilities means negotiating power contracts, absorbing construction delays, procuring hardware, and handling all the other headaches of physical infrastructure. Leasing lets OpenAI skip most of that.

But competitors are going the opposite direction. Meta is spending billions on custom data centers and designing its own chips, the Meta Training and Inference Accelerator, to train its Llama models. xAI built the Colossus supercomputer cluster in Memphis to train Grok. Both treat infrastructure ownership as a competitive advantage worth the cost. Renting saves cash, but it also costs control: if compute gets tight, OpenAI won't have its own facilities to expand into.