Software developer Sebastian Aigner published a post on March 14, 2026, making a case that most critics of LLM-assisted writing miss: the problem isn't grammar or style; it's that people use each other's natural writing patterns to understand who they're actually talking to. Over time, word choices, mistakes, and tonal quirks form what Aigner calls an "atlas," an implicit map of the sender that lets you read messages in full context. Run a message through an LLM and that map goes blank. He calls this the "social handshake component" of communication and argues that imperfect language isn't noise to be cleaned up but signal, carrying personality, emotional state, and intent that a polished rewrite strips out.

The argument found concrete workplace backing on Hacker News. One commenter described colleagues at their company using ChatGPT for internal Slack messages, which pushed the team to draw a formal line: Grammarly for minor fixes on messages going to external recipients is fine, but LLM polishing of internal messages is not. Another commenter named Claude specifically, not as a cleanup tool but as the tool colleagues use to compose entire Slack messages from scratch, and said the habit had become a serious enough pet peeve to make them reconsider text-based communication with those people entirely.

A third commenter pushed the critique further. The real loss, they argued, isn't fluency but the collapse of the high "signal-to-token" ratio that makes human writing worth reading, including the apparently extraneous bits that reveal how someone actually thinks. Beneath that, they pointed to a harder problem: once an LLM intermediary is in the loop, there's no way to tell genuine communication from automated mimicry, which quietly corrodes the trust that interpersonal exchange runs on. At that point, you're not talking to a colleague; you're talking to their editor.