Martin Lumiste, an Estonian technologist, published a post this week making a straightforward if uncomfortable argument: OpenAI's own founding charter requires it to stop competing.

The 2018 document — still live at openai.com/charter — commits OpenAI to "stop competing with and start assisting" any project that is value-aligned, safety-conscious, and close to building AGI first. The stated trigger: "a better-than-even chance of success in the next two years."

Lumiste builds his case largely from OpenAI's own leadership. He traces Sam Altman's AGI timeline predictions, which have compressed sharply: from "within the next ten years" in May 2023 to a February 2026 claim that OpenAI had "basically built AGI." Altman walked that back shortly after, describing it as "a spiritual statement, not a literal one." The median of Altman's public predictions since 2025, Lumiste calculates, lands at roughly two years — the charter's threshold exactly.
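The median calculation Lumiste performs can be sketched in a few lines: map each public statement to a horizon in years, then take the median. The horizon values below are placeholders chosen for illustration, not the actual dataset from the post.

```python
from statistics import median

# Hypothetical illustration of the calculation: each entry is one public
# AGI-timeline statement converted to a horizon in years from when it was
# made. These numbers are placeholders, not Lumiste's real data.
horizons_years = [0.5, 1.0, 2.0, 3.0, 5.0]

print(median(horizons_years))  # → 2.0
```

With an odd number of statements the median is simply the middle horizon, which is why a single near-zero claim like "basically built AGI" pulls the figure down without dominating it the way a mean would.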

The competitive rankings sharpen the point. A snapshot of the arena.ai model leaderboard cited in the post places Anthropic's Claude Opus 4.6 and Claude Opus 4.6 Thinking first and second overall, Google's Gemini 3.1 Pro Preview third, and OpenAI's GPT-5.4 sixth. On Lumiste's reading, the pieces of the charter's clause line up: Altman's own timelines supply the two-year window, and the leaderboard supplies rivals ahead of OpenAI. His conclusion: the triggering conditions aren't approaching, they've been met.

He's not filing anything. The post is less interested in forcing OpenAI's hand than in what the charter's existence reveals. On that front it lands: the document remains publicly hosted, unedited, while the company races Anthropic and Google for market position. Altman's retraction of his own AGI declaration follows a familiar pattern — as rivals close in, definitions shift. The industry has already migrated the conversation to ASI, treating AGI as a settled matter without ever quite saying so.