Kyle Kingsbury, known for his work on distributed systems testing, has published a sprawling essay arguing that large language models are corroding everyday life. He sees synthetic slop everywhere: search results, doctor's offices, customer service, engineering pull requests. LLMs generate plausible lies at scale, and we're drowning in them.
The essay draws an explicit parallel to cars. Everyone knew automobiles were fast and convenient. What they didn't anticipate was how cars would reshape cities, destroy communities, and create sprawl. Kingsbury thinks AI is doing something similar to our information environment, and most people aren't paying attention to the structural changes. He's personally scared: his skills (reading, thinking, writing) are exactly what language models target.
Kingsbury's prescription is blunt: stop using this stuff. Don't write with LLMs. Call out people who send you AI-generated content. Push your company to reject Copilot. Join a union. Call your representatives and demand regulation. If you work at Anthropic or xAI, quit. He acknowledges this won't stop AI development entirely, but argues that slowing it down matters. Every day of delay gives society more time to adapt to what's already been built.
He admits the tools have uses. He might ask Claude to write a client library for some obscure lighting protocol someday. But he's keenly aware of the slippery slope: the way convenience erodes capability. James C. Scott's concept of "metis," practical knowledge built through direct experience, gets harder to develop when you outsource the work. The essay ends on an uneasy note: Kingsbury knows he'll probably use AI eventually. He's just not happy about it.