Security researcher Michał Zalewski, known online as lcamtuf, published an analysis in March 2026 examining how thoroughly AI-related content has come to dominate Hacker News. Sampling the daily top 5 stories throughout February 2026, Zalewski found that AI topics occupied the majority of slots on nearly every day of the month. Only three days (February 1, 9, and 25) featured no AI story in the top 5, and even then AI content appeared just outside that window. On February 4 and 12, <a href="/news/2026-03-14-dead-internet-theory-ai-bots-online-platforms">AI stories claimed four of the five top spots</a>, and on February 5 arguably all five were AI-related, including what Zalewski identified as covert vendor marketing in one of the slots.
For the second part of his analysis, Zalewski used Pangram, a conservative LLM-text detection tool, to identify which top stories were likely written by AI rather than humans. He addressed common skepticism around such detectors head-on, arguing that LLMs produce quasi-deterministic stylistic patterns: even if individual word choices feel human-like, the combination of mannerisms is statistically distinctive enough to flag reliably. After manually reviewing all flagged stories, he concluded that Pangram's results were accurate, with the tool likely producing a few false negatives rather than false positives. One flagged example was the February 19 story "AI is not a coworker, it's an exoskeleton," which he assessed as exhibiting numerous AI-writing red flags.
The data is corroborated by independent research from Viktor Löfgren at Marginalia Search, who scraped HN's newcomment feeds in February 2026 and found that newly registered accounts were nearly ten times more likely than established accounts to use em dashes and similar typographic symbols, a known LLM writing signature (17.47% vs. 1.83%, p=7e-20). New accounts were also significantly more likely to mention AI and LLMs in their comments.
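The comparison behind that p-value is a standard two-proportion significance test. A minimal stdlib-only sketch follows; note that the sample sizes below are hypothetical placeholders chosen only to reproduce the reported rates (Löfgren's post is cited here for the percentages and p-value, not for raw counts), so the computed p-value will not match his exactly:

```python
import math

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    # Pooled proportion under the null hypothesis of equal rates.
    p_pool = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal survival function.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical counts: 17.5% of 1000 new-account comments vs
# 1.8% of 1000 established-account comments using em dashes.
z, p = two_proportion_z(hits_a=175, n_a=1000, hits_b=18, n_b=1000)
print(f"z = {z:.2f}, p = {p:.2e}")
```

With rates this far apart, even modest sample sizes drive the p-value well below any conventional significance threshold, which is why the effect Löfgren reports is so hard to dismiss as noise.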
Zalewski's background is worth knowing here. As the creator of the coverage-guided fuzzer American Fuzzy Lop (AFL), he has spent decades detecting statistically meaningful patterns in noisy output spaces, from passive OS fingerprinting to fuzz-testing binary inputs. He was also candid about the method's limits, flagging likely false negatives rather than overstating Pangram's precision. His analysis, published on his Substack "lcamtuf's thing," asks a question the HN community has so far largely avoided: if AI-generated content is already this hard to separate from human writing, what does moderation look like when generation costs drop further?