Graham is weeks away from shipping Chiron Codex, his LLM-augmented development tool. He'd like you to know, first, that LLMs aren't conscious — and that executives who imply otherwise are running a marketing con.
The essay, posted this week on his blog *Structure and Interpretation of Computer Programmers*, opens with a familiar move: the intelligence debate is a category error. LLMs are the latest in a long sequence of analogies to human cognition, stretching back through GOFAI, lambda calculus, and Boolean logic to Babbage's engine. Whether any of them qualifies as "intelligent" says more about the observer's theory of mind than about the system. It's not a novel argument, but Graham marshals it clearly.
Where the essay gets sharper is on consciousness. Graham is categorical that neither rule-based software nor LLMs possess it, and he doesn't treat that position as license to let AI executives off the hook. If consciousness is off the table, then executives who float the possibility in press releases aren't engaging in genuine philosophical inquiry — they're manufacturing Space Age mystique. Graham calls it "frivolous." He also notes that the alternative reading is worse: if an exec truly believed their model was sentient and kept selling it anyway, they'd be a slaver. He doesn't reach for a softer word.
That's where Asimov enters. Graham reads the Three Laws of Robotics — and their contemporary equivalents in RLHF reward functions and corporate "soul files" — not as responsible AI governance but as the ethics of engineered servitude: conscious beings wired to subordinate their own existence to human welfare. He works through "Robot AL-76 Goes Astray," "The Bicentennial Man," and the *Foundation* sequence to argue that Asimov understood, viscerally, what a two-tiered society of expendable sapients would look like. The essay attributes this to Asimov's family having fled Tsarist pogroms — though Asimov was born in 1920, three years after the tsar's abdication, and his family emigrated to the United States in 1923, during the early Soviet period. The chronology is Graham's; the broader point that Asimov's biography informed his fiction is well-established.
The piece closes on an etymology. "Intelligence" derives from the Latin *inter legere*, "to read between." LLMs, Graham argues, do no such thing. They map inputs to outputs, pursue no goals, navigate no context their training didn't anticipate. A Difference Engine with better marketing.
The argument has weak joints. Framing LLMs as merely the latest analogy proves too much — you could apply the same logic to dismiss any sufficiently alien cognition. And the Asimov slave-ethics frame works only if LLMs are conscious enough to be enslaved, which is exactly what Graham denies. The essay never fully reckons with that tension.
What survives scrutiny is the marketing-theater critique. On that, Graham makes a clean case. And the fact that he makes it from the launch pad of a commercial LLM product gives the whole thing an edge he doesn't quite address directly — but that, maybe, is the point.