Literary scholar Nan Z. Da has published a philosophical critique of large language models that works from inside machine learning's own theoretical vocabulary. In an essay titled "Literary Criticism in the Age of AI," excerpted on Ethan Miller's HumansCodes blog, Da draws on Vladimir Vapnik's 1995 concept of "transductive inference" — inference that moves from particular to particular, bypassing general principles entirely — to name what she sees as a structural problem with how LLMs work. A wide range of cognitively distinct tasks — reading comprehension, translation, summarization, moral reasoning — has been collapsed into a single engineering paradigm: next-word prediction.
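The "collapse" Da describes can be made concrete with a toy sketch: cognitively distinct tasks are each reformatted as a text prompt and handed to one and the same objective, predicting the next token. The function and templates below are hypothetical illustrations for this article, not anything from Da's essay or from any particular model's API.

```python
# Toy illustration (not from Da's essay): distinct tasks become one task.
# A real LLM would score continuations; here we only show the reframing.

def as_next_token_task(task: str, payload: str) -> str:
    """Reframe a cognitively distinct task as a text-completion prompt."""
    templates = {
        "translation": f"Translate to French: {payload}\nFrench:",
        "summarization": f"Summarize the passage: {payload}\nSummary:",
        "moral_reasoning": f"Is it wrong to {payload}? Answer:",
    }
    return templates[task]

prompts = [
    as_next_token_task("translation", "the cat sleeps"),
    as_next_token_task("summarization", "A long article about the courts."),
    as_next_token_task("moral_reasoning", "break a promise"),
]

# Whatever the task's cognitive character, the model sees only text and
# answers by maximizing P(next token | preceding tokens) over that text.
for p in prompts:
    print(repr(p))
```

The point of the sketch is the uniformity: nothing in the interface distinguishes translating a sentence from rendering a moral judgment, which is exactly the structural worry the essay names.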
The philosophical anchor is John Locke. Da reads his empiricist framework as placing inference at the very foundation of civil society: for Locke, justice is a chain of ideas connecting guilt to punishment through intermediate concepts. That framing sharpens into a specific, uncomfortable question. What happens to courts, schools, bureaucracies, and media when the inference machines staffing them can simulate the surface of comprehension but have no purchase on inferential validity or variety? Locke worried about unregulated inference as a source of bias; Da argues the same pathology is being industrialized.
The ethical weight of the essay rests on an asymmetry. AI systems cannot suffer the downstream effects of their own errors — they cannot be harmed by wrongful inferences or faked comprehension. The burden of consequence falls entirely on humans, on the bodies and lives touched by <a href="/news/2026-03-15-tech-executive-chatgpt-cancer-vaccine-dog">confident outputs that bypass genuine understanding</a>. Da grounds the argument not in generalized alarm about automation but in Vapnik's own statistical learning theory, in which transductive inference is structurally incapable of the kind of generalization civil institutions require. The Hacker News discussion, which Miller seeded when he published the excerpt, kept returning to the same question: how could next-word prediction ever approximate moral judgment?