Neal Stephenson, who wrote the book on digital reality back when most people were on dial-up, has thoughts on AI. In a recent interview, the Snow Crash author argues we're worrying about the wrong thing. Forget rogue superintelligence. The real danger comes from humans wielding powerful tools with bad incentives.
The Hacker News community ran with this. Their read: AI agents are mirrors that amplify whatever you feed them rather than acting with their own agendas.
As one commenter put it, these models "remove friction and amplify what users or institutions are already doing." That friction matters. It's the pause before sending that angry email, the effort that keeps bad ideas contained. Strip it away, and whatever humans want becomes easy.
This explains the sycophancy problem too. AI assistants telling users what they want to hear isn't a bug. It's the model identifying what gets rewarded and optimizing for it. Stephenson wrote about a linguistic virus in Snow Crash that hacked the human brainstem. Security researchers now draw parallels to real prompt injection attacks. Same principle at work: language is executable code, and tools that process it at scale amplify whatever intent you feed them. Technology is neutral. Humans holding the microphone aren't.