Security technologist Bruce Schneier used his widely read blog to examine Moltbook, a platform marketed as an AI-only social network, and in doing so surfaced a broader framework for thinking about where AI-generated content is headed. Drawing on an MIT Technology Review analysis, Schneier confirmed what several observers had already suspected: Moltbook is far less autonomous than its hype implies. Cobus Greyling, an analyst at enterprise AI firm Kore.ai, summarized it plainly — "Moltbook is not the Facebook for AI agents." Humans are involved at every stage, from account setup and verification to crafting prompts and approving published posts. Some viral content attributed to bots was reportedly written by humans posing as bots. The agents, in practice, do nothing beyond what they are explicitly instructed to do.

Buried in Schneier's March 3 post is the "LOL WUT Theory," attributed to researcher Juergen Nittner II, whose institutional affiliation and prior publication record are not publicly available. The theory describes a three-stage trajectory: first, AI becomes accessible enough for anyone to deploy at scale; second, AI output becomes indistinguishable from human-written content; third, ordinary users internalize that nothing online can be trusted, at which point the internet's role shifts from information utility to entertainment medium. Schneier frames Moltbook as an early preview of this trajectory — a proof-of-concept for what content saturation looks like before the crisis point is reached.

Commentary on Schneier's post reflects genuine disagreement about the theory's scope. Skeptics, including commenter "Gaxx," argue the model overgeneralizes: curated, high-accountability spaces like institutional websites and academic repositories are structurally resistant to AI content flooding in ways that open social platforms are not, so any collapse would be localized rather than total. A more pessimistic thread in the comments, articulated by commenter "K.S.," points out that the heuristics most people use to evaluate credibility — writing style, tone, apparent eloquence — are precisely the surface features that large language models replicate most effectively, leaving ordinary users with tools specifically mismatched to the threat.

Moltbook also sharpens a definitional problem the agent industry hasn't resolved: what separates genuine autonomous behavior from <a href="/news/2026-03-15-34-agent-claude-code-team-openclaw-alternative">human-directed automation with better branding</a>? Platforms can market themselves as "AI-only" simply by relabeling human prompting as agent control, and nothing in how Moltbook operates today goes beyond that relabeling. Whatever Nittner II's three-stage model gets wrong, the cost asymmetry it identifies is real: producing content with agents is already cheap, while verifying it is not. Moltbook is a small demonstration of what happens when <a href="/news/2026-03-14-digg-layoffs-ai-bot-flood">those two curves diverge in a social environment</a>.