Swiss software developer Adrian Krebs published a March 2026 blog post arguing that the "dead internet theory" — the long-circulating hypothesis that most online activity is bot-generated rather than authentically human — has crossed from speculation into documented reality. His evidence spans multiple platforms: an AI-generated interview reply that evaded his own detection filters, Reddit comment threads where bots astroturf SaaS products across hundreds of posts while concealing their activity, LinkedIn feeds overwhelmed by AI-generated content, and GitHub open-source repositories flooded with nonsensical AI-authored pull requests — some of which are reviewed by other AI agents. The piece functions less as a theoretical argument than as a field report from someone building software in 2026.
The response from platform operators illustrates how seriously the problem is being taken. Hacker News, operated by Y Combinator and widely regarded as one of the higher-quality corners of online discourse, has restricted Show HN submissions for new accounts and updated its community guidelines to explicitly prohibit AI-generated or AI-edited comments, framing the platform as "for conversation between humans." When a platform built around link aggregation and commentary now has to formally define what a human is, something has shifted.
The economic consequences for content-dependent platforms are already measurable. Stack Overflow, whose ad-supported business model depended on developers arriving via search after Google indexed its Q&A archive, has seen organic search traffic fall an estimated 35-50% year-over-year through 2024-2025, according to SimilarWeb data. Its parent company Prosus conducted a roughly 28% workforce reduction in June 2023, citing rapid AI adoption as a structural factor in its earnings announcement. The causal chain is straightforward: <a href="/news/2026-03-14-nyt-ai-coding-assistants-end-of-programming-jobs">AI coding assistants</a> like GitHub Copilot intercept developer questions before they reach a browser, while Google's AI Overviews synthesize answers directly in search results without generating click-throughs. Stack Overflow's attempt to monetize by licensing its content corpus to LLM vendors was widely interpreted as an acknowledgment that the traffic-based model was no longer viable — a structural irony, given that its 15-year archive of human-verified Q&A was foundational training data for the AI systems now substituting for it.
Commentary on the Krebs piece raises a competing interpretation: AI spam may function as a forcing function that renders large centralized platforms uninhabitable, pushing users toward smaller self-hosted communities. The personal blog and niche forum crowd has been predicting this for a decade. The structural pressure is real. The small web, though, has never absorbed a mass migration — and it is not obviously better positioned to do so now. By early 2026, <a href="/news/2026-03-14-optimizing-web-content-for-ai-agents-via-http-content-negotiation">autonomous agents</a> deployed across hiring, marketing, open-source contribution, and social influence have made bot-driven disruption visible to anyone trying to find a reliable answer online.
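The content-negotiation approach referenced above can be sketched in a few lines: a server inspects the client's `Accept` header and serves a lean Markdown representation to agents that ask for it, while browsers fall through to HTML. This is a minimal illustrative sketch, not the linked article's implementation; the media types, quality-value handling, and the `choose_representation` helper are all assumptions for demonstration.

```python
# Minimal sketch of HTTP content negotiation for AI agents (illustrative;
# not the implementation from the linked article). Agents that send
# "Accept: text/markdown" get Markdown; browsers get HTML.

def parse_accept(header: str) -> list[tuple[str, float]]:
    """Parse an Accept header into (media_type, q) pairs, highest q first."""
    entries = []
    for part in header.split(","):
        fields = part.strip().split(";")
        media = fields[0].strip()
        q = 1.0  # RFC 9110 default quality value
        for param in fields[1:]:
            name, _, value = param.strip().partition("=")
            if name == "q":
                try:
                    q = float(value)
                except ValueError:
                    q = 0.0
        entries.append((media, q))
    return sorted(entries, key=lambda e: e[1], reverse=True)

def choose_representation(accept: str) -> str:
    """Return 'text/markdown' when the client prefers it, else 'text/html'."""
    for media, q in parse_accept(accept):
        if q <= 0:
            continue  # q=0 means "explicitly not acceptable"
        if media in ("text/markdown", "text/html"):
            return media
        if media in ("text/*", "*/*"):
            return "text/html"  # browser-friendly default for wildcards

    return "text/html"

# A scripted agent asking for Markdown gets the lean representation:
print(choose_representation("text/markdown;q=0.9, text/html;q=0.1"))
# A browser's typical Accept header falls through to HTML:
print(choose_representation("text/html,application/xhtml+xml,*/*;q=0.8"))
```

The design choice here is that negotiation happens per-request rather than via a separate agent-facing endpoint, so one URL serves both audiences without user-agent sniffing.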