Rohit Krishnan's essay "Epicycles All the Way Down" makes a case that should unsettle anyone building AI agents: LLMs are pattern-fitters accumulating incremental improvements, what Krishnan calls "epicycles," without ever changing the underlying generator. The resulting failures look less like science fiction and more like flash crashes: weird, sudden, and unexpected.
The math backs this up. For any pattern you observe, there are many possible programs that could have generated it; generators vastly outnumber the patterns they produce. Training an LLM means swimming through an enormous space of possible generators, trying to find the "shortest, truest" one. Gold's theorem on language identification suggests this is essentially impossible if you only ever see positive examples, and a training corpus is exactly that: text that exists, never text explicitly labeled as wrong. The model finds some program that fits the data, but not necessarily the one you meant. When Claude was asked about this, it offered a sharp insight: success is low-dimensional, with relatively few ways to be right, while failure is high-dimensional, with infinitely many ways to go wrong. Claude skills operate in this weird middle ground: their failures seem inexplicable because we keep trying to map them onto human failure modes.
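A toy sketch makes the underdetermination concrete. Everything below is invented for illustration (the candidate names and the example sequence are not from Krishnan's essay): three tiny "programs" agree perfectly on four positive examples, then disagree on the very next input.

```python
# Toy illustration of Gold-style underdetermination: several distinct
# generators that all reproduce the same finite positive examples.
observations = [0, 1, 4, 9]  # positive examples: f(0), f(1), f(2), f(3)

# Three candidate generators, all consistent with the observations.
# The extra term in poly_fit vanishes on inputs 0..3 by construction.
candidates = {
    "square":           lambda n: n * n,
    "poly_fit":         lambda n: n * n + n * (n - 1) * (n - 2) * (n - 3),
    "lookup_then_zero": lambda n: observations[n] if n < 4 else 0,
}

for name, f in candidates.items():
    fits = all(f(n) == observations[n] for n in range(4))
    print(f"{name:18s} fits data: {fits}, f(4) = {f(4)}")
```

Every candidate fits the data, yet f(4) comes out as 16, 40, and 0 respectively. Positive examples alone cannot pick out the "true" generator, and nothing in the data penalizes the wrong ones.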
This explains why LLM errors feel so strange: the models make mistakes in high-dimensional regions we never even think about. A deterministic system can look random when we can't track every variable, but it isn't actually random. LLMs explore failure regions that human cognition never reaches.
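A minimal sketch of that deterministic-but-looks-random idea, using the classic logistic map rather than anything LLM-specific. Two trajectories of a fully deterministic system start a hair apart; within a few dozen steps their difference grows from negligible to order one, so an observer who can't resolve the initial state sees what looks like noise. The parameters here are illustrative.

```python
# Logistic map x -> r*x*(1-x) at r=4: deterministic chaos.
r = 4.0
x, y = 0.3, 0.3 + 1e-10  # nearly identical starting points

for step in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 9:
        print(f"step {step + 1:2d}: |x - y| = {abs(x - y):.3e}")
```

The gap roughly doubles each step, climbing from 1e-10 toward the size of the whole domain: nothing random anywhere in the dynamics, yet prediction is hopeless without infinite-precision knowledge of the state.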
But the story isn't all skepticism. Hacker News commenters pointed out that LLMs recently produced multiple solutions to Erdős problem 1196 without human help, a problem that had stumped experts for years. Pattern-fitting at scale may not be understanding, but it is producing real results. The question for agent builders is whether you can build reliable systems on top of pattern-fitters, or whether you need something fundamentally different.