A blog post published March 9 on computerfuture.me opens with a concrete ask to prediction market operators: run a market on BB(6), the next unknown value of the Busy Beaver function, before AI agents take over the platforms entirely.

The proposal sounds esoteric. BB(5) — the maximum number of steps a halting five-state Turing machine can execute before stopping — was settled by a distributed research effort in 2024 at exactly 47,176,870. BB(6) is a different category of problem. It isn't merely unsolved; the post treats it as plausibly independent of standard mathematical axioms, a value no known proof technique can pin down. (Independence from ZFC has actually been proven only for much larger Busy Beaver indices; for BB(6) it remains conjecture.) The unnamed author argues that a liquid prediction market on BB(6) would be the first real test of whether distributed intelligence — human, AI, or mixed — can collectively converge on answers that lie outside formal mathematics.
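To make the definition concrete, here is a minimal sketch of a Turing-machine simulator. It runs the well-known two-state champion machine (BB(2) = 6 steps) rather than the five-state one, whose 47-million-step run this same loop could in principle reproduce; the simulator itself is illustrative, not part of the post.

```python
# Minimal Turing-machine simulator illustrating the Busy Beaver definition:
# BB(n) is the most steps any halting n-state machine takes from a blank tape.

def run(machine, max_steps=10_000):
    """Run a machine from a blank tape; return the step count if it halts,
    or None if it exceeds the step budget."""
    tape = {}            # sparse tape; unwritten cells read as 0
    pos, state = 0, "A"
    for step in range(1, max_steps + 1):
        write, move, nxt = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        if nxt == "H":   # halt state reached
            return step
        state = nxt
    return None

# The two-state Busy Beaver champion: (state, read) -> (write, move, next).
bb2 = {
    ("A", 0): (1, "R", "B"),
    ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"),
    ("B", 1): (1, "R", "H"),
}

print(run(bb2))  # 6 — the BB(2) value
```

The catch the post turns on: this brute-force approach works only once you already know which machines halt, and for six states that halting question is itself the hard part.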

That proposal is the sharpest edge of a broader argument: prediction markets were designed around human cognitive architecture, and nobody has seriously theorized what happens when the participants aren't human.

Philip Tetlock's superforecasting research, the post argues, describes a world where calibration matters because humans are systematically miscalibrated. Set the right incentives, aggregate enough minds, and you get closer to the truth. But the post claims this framework captures a specific historical moment — one where the participant class is human — and that AI agents entering as liquidity providers change the underlying assumptions without changing the market structure. "The biases being averaged, the updating mechanisms, and the incentive structures all change character," it reads. No existing scientific literature covers that shift.
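The calibration-and-aggregation framework being invoked can be sketched in a few lines. The forecaster names and probabilities below are invented toy data; the scoring rule is the standard Brier score, and the aggregation is a plain unweighted mean.

```python
# Toy illustration of the Tetlock-style framework: score individual
# forecasters by calibration, then compare the aggregated "crowd" forecast.

def brier(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.
    Lower is better: 0.0 is perfect, 0.25 is what always guessing 0.5 earns."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Three hypothetical forecasters' probabilities on four events.
forecasters = {
    "alice": [0.90, 0.20, 0.80, 0.60],
    "bob":   [0.60, 0.40, 0.50, 0.50],
    "carol": [0.95, 0.10, 0.70, 0.80],
}
outcomes = [1, 0, 1, 1]  # what actually happened

# Aggregation: the unweighted crowd mean for each event.
crowd = [sum(fs) / len(fs) for fs in zip(*forecasters.values())]

for name, fs in forecasters.items():
    print(name, round(brier(fs, outcomes), 3))
# The classic result: the crowd mean scores better than the average
# individual forecaster (though not necessarily better than the best one).
print("crowd", round(brier(crowd, outcomes), 3))
```

The post's point is that every assumption baked into this sketch — independently noisy nodes, human-shaped biases, slow belief updating — is a property of the participants, not of the scoring or aggregation machinery, which is exactly why swapping in AI participants leaves the machinery intact while invalidating the theory around it.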

To make the theoretical case, the post leans on Stephen Wolfram's ruliad concept — the space of all possible computations — to frame prediction markets as a distributed truth-seeking function rather than a simple aggregation tool. Superforecasters become computation nodes; calibration becomes each node's local accuracy metric. Once AI agents dominate, the nodes no longer share human cognitive architecture, and the framework stops applying.

There's no byline, no institutional affiliation, and no peer review. The author frames it as speculation, written under the literary conceit of Asimov's psychohistory. Platform operators are addressed directly: run the BB(6) experiment before the AI liquidity transition proceeds without a controlled record.

Whether a blog post is the right venue to surface a genuine open problem in market theory is debatable. BB(6) remains undecided either way.