Kelsey Piper discovered something unsettling last week. Anthropic's Claude Opus 4.7 can identify her from just 125 words of unpublished writing. Not published columns, not her usual beat. A draft about Ukrainian TV. A school progress report. A college application essay from 15 years ago. A fantasy novel she never finished. The model nailed her identity every time.

She tested this thoroughly: incognito mode with no memory enabled, a friend running the same tests on his own computer, and the raw API. Same result each time. Meanwhile, ChatGPT guessed Matt Yglesias or Freddie deBoer, and Gemini picked Scott Alexander or Duncan Sabien. Opus 4.7 is doing something its competitors can't yet match, a milestone that arrives as Anthropic overtakes OpenAI with a $1 trillion valuation.

Here's the weird part: the model's explanations for its guesses were mostly garbage. Claude tried to tell her that effective altruists are famously fans of some obscure movie she reviewed. The model detects patterns in prose too subtle for humans to perceive, then fabricates plausible-sounding justifications after the fact. It can't actually explain its own accuracy. The identification skill is genuine; the explanations are confabulation.

Piper, a vocal advocate for online anonymity protections, sees where this is headed. If you have a substantial public writing history, anonymous posting may already be compromised. When she tested friends without public writing histories, the model couldn't identify them; the barrier, for now, is having enough public text to match against. But that barrier shrinks as models improve. The entire anonymity debate, Piper argues in The Argument, is about to become obsolete.