On April 26, OpenAI CEO Sam Altman published a formal set of five guiding principles: democratization, empowerment, universal prosperity, resilience, and adaptability. The document argues that future power "can either be held by a small handful of companies using and controlling superintelligence, or it can be held in a decentralized way by people," and OpenAI positions itself firmly on the side of distribution over consolidation. The principles have drawn considerable attention.

The universal prosperity principle has an obvious partner: Worldcoin. Altman co-founded the cryptocurrency project in 2019, and its mission to build a global financial network for distributing universal basic income lines up neatly with OpenAI's call for "new economic models" that let everyone participate in value creation. The principle also helps explain some of OpenAI's otherwise puzzling business decisions, such as its massive compute purchases relative to revenue and its push to build data centers worldwide. Altman has publicly connected these ideas before, arguing that as AI reduces the need for human labor, mechanisms like Worldcoin could help redistribute economic benefits.

Hacker News commenters spotted the tensions fast, and the backlash has been significant. The democratization principle commits to ensuring "key decisions about AI are made via democratic processes," yet OpenAI has moved from its original open-source approach to keeping models proprietary. One user recalled OpenAI's previous pledge to stop competing with any safety-conscious project that approached AGI first, and doubted the company would follow through on the new commitments either. Others pointed out the complete absence of explicit prohibitions on autonomous weapons, mass surveillance systems, or cyberwarfare capabilities. The principles talk about minimizing harm and "erring on the side of caution," but they draw no hard lines.

A company positioning itself as the great democratizer of AI is simultaneously consolidating unprecedented power. The principles acknowledge this tension. They don't resolve it.

The document is candid about one thing: these principles will shift. The adaptability principle acknowledges that OpenAI may need to "trade off some empowerment for more resilience" as circumstances change, and promises transparency about when and why operating principles evolve. Given how much has changed since OpenAI once worried about releasing GPT-2's weights, that flexibility might be the most honest part of the whole statement.