Aphyr's latest essay makes an uncomfortable point: we have no cultural script for what LLMs actually are. Science fiction gave us sentient beings and god-like minds. What we got instead are "sophisticated generators of text which suggests intelligent, emotional, self-aware origins while the LLMs themselves are nothing of the sort." That gap between myth and reality isn't just interesting. It's dangerous. When people trust AI summaries of medical visits or are required to use Copilot at work, they're applying the wrong story to the wrong technology.

Better myths exist, Aphyr argues: John Searle's Chinese Room, David Chalmers' philosophical zombie, and Peter Watts' Blindsight, where humans encounter an unconscious alien intelligence that threatens us through indifference, not hatred. "I am concerned that ML systems could ruin our lives without realizing anything at all," Aphyr writes. The threat is incompetence at scale. And we're not prepared for it.

The essay also looks ahead to new media forms beyond AI writing your emails. Aphyr calls that application "corrosive to the human soul" and sees bigger possibilities: cooking "books" where a simulated chef watches you cook and gives real-time advice, personality art installations, AI terraria where simulated personalities generate endless reality-TV plotlines. The static written word might lose its dominance as the main way we transmit knowledge.

These possibilities run straight into a power problem. Network effects and training costs could centralize LLMs under a few big players. Those corporations would shape what counts as acceptable expression, just as Facebook's real-name policy suppressed Native names and YouTube's demonetization policies limit queer content. Hacker News commenters place this within a century of corporate media manipulation, with one noting that the essay reflects learned helplessness rather than resistance. Fair criticism. But naming the problem correctly matters.
Understanding these systems as unconscious pattern-matchers rather than thinking machines is the first step toward any meaningful pushback.
The Future of Everything Is Lies, I Guess: Part 3 – Culture
Aphyr argues we're applying the wrong myths to LLMs. We expected conscious machines but got convincing text generators, and that mismatch is dangerous. Better frameworks exist from philosophy and speculative fiction. The essay also imagines new interactive media AI could enable and warns about corporate control over what these systems let us say.