Scott Abel makes a sharp observation in The Content Wrangler: the AI reliability panic isn't new. In "Shannon's 1950 Chess Paper Predicted AI's Flaws," he argues that Claude Shannon mapped the shape of this problem when he wrote about teaching computers to play chess. The core challenge hasn't changed: machines face too many possibilities, not enough compute, and have to make judgment calls anyway.

Shannon wasn't chasing perfection. He wanted machines to play "tolerably good" chess. That phrase should sound familiar to anyone watching LLMs generate confident nonsense. Fluency doesn't equal accuracy, but people keep treating them as the same thing. Psychologists call this "processing fluency": people judge statements as true simply because they're easy to read. Shannon understood that machines arrive at answers by evaluating possibilities, not by knowing things. Modern LLMs do the same thing, one token at a time.
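
To make that concrete, here is a rough sketch of the evaluation-driven decision making Shannon described: score candidate positions on material and mobility, then pick the best-scoring one. The `position` object and its methods are hypothetical stand-ins, the weights are illustrative rather than Shannon's exact figures, and real play needs deeper look-ahead than the single ply shown here.

```python
# Illustrative piece values; the point is the mechanism, not the numbers.
PIECE_VALUES = {"Q": 9, "R": 5, "B": 3, "N": 3, "P": 1, "K": 0}

def evaluate(position):
    """Score a position from White's point of view: material plus a small
    mobility bonus. A crude signal, but a signal nonetheless."""
    material = (sum(PIECE_VALUES[p] for p in position.white_pieces)
                - sum(PIECE_VALUES[p] for p in position.black_pieces))
    mobility = len(position.moves_for("white")) - len(position.moves_for("black"))
    return material + 0.1 * mobility

def choose_move_for_white(position):
    """Pick the move leading to the best-scoring position one ply ahead.

    Nothing here 'knows' the right move; the machine only compares
    candidate futures against a numeric signal and takes the largest."""
    return max(position.moves_for("white"),
               key=lambda move: evaluate(position.apply(move)))
```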

What actually matters for AI reliability, Abel argues, is signal quality. Shannon's chess computer needed signals to evaluate board positions. Modern AI needs structured content, metadata, taxonomy, versioning. All the unglamorous infrastructure that technical writers build.
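
What those signals might look like, attached to a single documentation topic and sketched as a Python record. The field names and values are assumptions for illustration, not a real schema or product:

```python
# Illustrative only: one shape such metadata could take.
sso_topic = {
    "title": "Configuring single sign-on (SSO)",
    "product_version": ">=4.2",          # versioning
    "editions": ["enterprise"],          # not offered on the free plan
    "taxonomy": ["security", "authentication", "administration"],
    "last_reviewed": "2024-11-05",
    "body": "To enable SSO, open the admin security settings and ...",
}
```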

Here's what that looks like in practice. Say an LLM answers a question about enterprise software configuration. It pulls from documentation that never specifies which version or which pricing tier a feature applies to. So it tells a user on the free plan that they can configure SSO, when that's an enterprise-only feature. The answer reads fluently. It sounds authoritative. It's wrong. If that same documentation had been tagged with version numbers and edition metadata, the model could have filtered or surfaced the right answer based on context. The hallucination wasn't a model problem. It was a content problem.
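
Here is a hedged sketch of how a retrieval layer could use that metadata to keep out-of-scope content away from the model. The topic records are shaped like the `sso_topic` example above; the user-context fields and helper function are assumptions, not a real pipeline:

```python
def version_matches(constraint, version):
    """Toy check for '>=X.Y' constraints; a real system would use a proper
    version-range parser."""
    required = tuple(int(part) for part in constraint.lstrip(">=").split("."))
    actual = tuple(int(part) for part in version.split("."))
    return actual >= required

def relevant_topics(topics, user_context):
    """Keep only topics that apply to this user's edition and product version."""
    return [
        topic for topic in topics
        if user_context["edition"] in topic["editions"]
        and version_matches(topic["product_version"], user_context["version"])
    ]

# A free-plan user asking about SSO gets no enterprise-only topics back, so
# the assistant can answer "not available on your plan" instead of fluently
# guessing that it is.
enterprise_sso = {"title": "Configuring single sign-on (SSO)",
                  "editions": ["enterprise"], "product_version": ">=4.2"}
free_user = {"edition": "free", "version": "4.3"}
relevant_topics([enterprise_sso], free_user)   # -> []
```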

This is where technical writers become more important, not less. Their job is to make context explicit so the machine can work with what the content actually says, not fill gaps with plausible-sounding guesses.