Hacker News quietly updated its community guidelines last week to ban AI-generated and AI-edited comments. The new rule, added to the "In Comments" section of the guidelines page, is blunt: "Don't post generated comments or AI-edited comments. HN is for conversation between humans."

The Y Combinator-run forum has long prided itself on substantive technical discussion, and the policy codifies what many users had been asking for. The associated discussion post pulled 4,192 points — about as close to consensus as HN gets.

Two arguments dominated the thread. The first is about effort asymmetry: writing has always cost more than reading, and that gap creates an implicit contract between authors and their audiences. When someone fires off a GPT-generated reply, they've dodged that cost while still burning the reader's time. The poster spent thirty seconds; the reader spends the same minutes parsing it that they would have spent on something the person actually wrote.

The second cuts at something bigger. The popular line about LLMs containing "the sum of all human knowledge" gets it backwards, multiple commenters argued — they produce the average of it. A model optimized for statistically likely outputs doesn't generate insight; it generates the middle. Several users described this as a slow homogenization of online discussion, one calling it a "heat death of thought" for communities that exist specifically because someone found something genuinely curious.

What the new policy doesn't include is any explanation of how HN plans to enforce it. There's no mention of detection tools, no stated penalties, no review process. The rule sits alongside "Be kind" and "Don't be snarky" — guidelines that depend entirely on users choosing to follow them.

That's not an accident. HN's moderation has always run lean, primarily through moderator Daniel Gackle (known as "dang"), and the daily comment volume makes systematic screening impossible. Commercially available AI-detection tools like GPTZero and Originality.ai have well-documented false-positive problems, regularly misflagging human writing. The "AI-edited" category the rule explicitly bans is even harder to pin down — there's no reliable way to distinguish it from a human who revised a sentence twice.

So HN is betting on social pressure. The 4,192 upvotes function as a public commitment: a large chunk of the active user base declaring, on the record, that they consider AI-generated comments a violation. That's worked for other norms on HN — the platform's relatively high friction and strong user identity continuity mean people tend to care about their standing. But as LLM prose gets harder to distinguish from fluent human writing, peer enforcement hits a hard limit: you can only call out what you can see. Without disclosure from the poster, neither moderators nor other users can reliably spot a violation.

The policy may hold anyway — not because HN can catch cheaters, but because the kind of person who posts there probably doesn't want to be known as one.