Most people are confident they can spot AI-generated text. Slop or Not was built to test that confidence, and early players are finding it harder than expected.

The game appeared on Hacker News this week via a 'Show HN' post, spread quickly, and does exactly one thing: show you two responses side by side, one human-written and one AI-generated, and ask you to pick the fake. Three wrong answers end the session. The content is drawn from Reddit, Hacker News, and Yelp, a choice that's deliberate rather than incidental. These are the platforms where AI-generated text has been accumulating most visibly (review spam, comment flooding, low-effort thread filler) and where users tend to assume they've already developed a natural filter.
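The mechanic is simple enough to sketch. The following is a hypothetical reconstruction of the three-strike loop, not Slop or Not's actual code; every name here (`play_round`, `guess_fn`, the pair format) is invented for illustration.

```python
import random

def play_round(pairs, guess_fn, max_strikes=3):
    """Hypothetical sketch of a three-strike guessing loop.

    pairs: list of (human_text, ai_text) tuples.
    guess_fn: callable taking the two shuffled options and returning
              0 or 1, the index the player believes is AI-generated.
    """
    strikes = 0
    score = 0
    for human, ai in pairs:
        options = [human, ai]
        random.shuffle(options)          # hide which side is which
        ai_index = options.index(ai)
        if guess_fn(options[0], options[1]) == ai_index:
            score += 1
        else:
            strikes += 1
            if strikes >= max_strikes:
                break                    # three wrong answers end the session
    return score
```

A player who always guesses wrong is out after three pairs regardless of how many remain, which is what keeps sessions short and shareable.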

The HN thread that surfaced the game was instructive. Several commenters reported breezing through the Yelp examples only to stumble badly on HN-style comments, where AI outputs have clearly been tuned to match the clipped, opinionated register of that community. Others flagged specific tells: absence of personal anecdote, over-smooth sentence rhythm, the particular cadence of hedged enthusiasm that LLMs reach for by default. Not everyone agreed on which signals actually worked.

That disagreement is worth noting. The game offers category filters that let players isolate by platform, and the results suggest detection skill doesn't transfer evenly across contexts. What reads as obviously synthetic in a Yelp review might pass unnoticed in a Reddit comment, depending on the subreddit. The format isn't rigorous (there's no methodology paper here), but it surfaces real variation in how well human intuition maps to the actual distribution of AI writing styles.

What separates it from established detection tools such as GPTZero and Originality.ai is the audience and the premise. Those products are built for institutional use: universities verifying essays, editors screening submissions. Slop or Not tests something different and arguably more pertinent: whether ordinary users, in ordinary browsing contexts, would notice AI-generated content at all. The three-strike format, the lack of accounts, the arrow-key navigation: the design is deliberately minimal, optimised for sharing rather than depth.

The broader point isn't subtle. Online text volumes have shifted sharply enough in the past year that the relevant question is no longer whether AI content exists in the wild but whether readers can tell when they're looking at it. A game that turns that into a scoreable challenge, however informal, at least makes the problem tangible, and personal.