OpenAI just published its playbook for the AGI era. Sam Altman laid out five principles he says will guide the company: Democratization, Empowerment, Universal Prosperity, Resilience, and Adaptability. The headline promise is that OpenAI wants to "put truly general AI in the hands of as many people as possible" rather than letting a few companies control superintelligence. It's a nice vision. The principles also acknowledge that good outcomes aren't guaranteed and that OpenAI may need to "trade off some empowerment for more resilience" as things get harder.
But the gap between these principles and OpenAI's reality is hard to ignore. The Democratization principle says OpenAI will "resist the potential of this technology to consolidate power in the hands of the few." Yet OpenAI's entire compute and distribution infrastructure runs through Microsoft Azure under an exclusive multi-year deal. One power center is being replaced by another. Critics on Hacker News were quick to point this out, with one commenter noting the hypocrisy of preaching democratization while moving from open research to closed models.
The principles are also silent on military applications. There is no explicit prohibition on autonomous weapons, mass surveillance, or cyber-warfare. Given that OpenAI has already walked back its ban on military work, that omission feels intentional. Also missing is any mention of the old commitment to stop competing if another safety-conscious project approached AGI first.
Altman writes that OpenAI "deserves an enormous amount of scrutiny." He's right. These principles are a framework for accountability, and the framework has some obvious holes. What matters now is whether OpenAI's actions match its words when those words get expensive to keep.