The Dead Internet Theory has a new name: Tuesday.
Adrian Krebs didn't set out to write a manifesto. The software developer published a blog post on March 11 about something that had been bothering him — a job applicant whose reply was so obviously AI-generated it read more like a form letter than a human response. That small, irritating moment grew as he started listing everything else he'd noticed. By the end, he had a case study in what 'Dead Internet Theory' actually looks like in practice.
The theory itself isn't new. It's been kicking around online communities for years: the idea that bots and automated content have overtaken human activity on major platforms, and that most of what we encounter online is machine-generated noise rather than real people talking to each other. What Krebs adds is specificity. He's not making a philosophical argument. He's pointing at things happening right now.
Hacker News has started restricting Show HN posts from new accounts — too many low-effort, AI-generated projects being submitted for attention. The site's guidelines now explicitly prohibit AI-written comments, with a blunt line that 'HN is for conversation between humans.' On Reddit, bot accounts are running coordinated promotions for SaaS products, posting hundreds of near-identical comments while deliberately hiding their comment history to avoid detection. LinkedIn's feed has become largely unreadable for anyone not there for AI-polished takes on professional growth. On GitHub, open-source maintainers are dealing with nonsensical pull requests submitted by autonomous bots — and in some cases, those PRs are being reviewed by other bots, a closed loop that burns maintainer attention without producing anything.
What makes this uncomfortable for anyone working in AI is the obvious overlap. The same agent pipelines being built and sold as legitimate productivity tools — job application assistants, social media engagement bots, automated code contributors — are the ones producing this mess when pointed at open platforms. There's no clean line between a useful agent and a spam agent. The difference is mostly intent and target, and intent doesn't scale.
Platform operators are responding with new restrictions, but rules designed for human misbehaviour don't map well onto machine-volume output. One person posting spam can be banned. A thousand coordinated accounts rotating in and out faster than moderators can act is a different problem.
Krebs ends his post asking whether a human-dominated internet is still recoverable. He answers himself: probably not. It's a bleak conclusion, and it's landing hardest with technical readers who recognise themselves in it — people building the pipelines while watching what those pipelines are doing to places they used to actually enjoy.