A satirical RFC published in March 2026 is poking fun at one of the more absurd side effects of the LLM era: the em dash has become suspect. RFC 454545, authored by Jeff Auriemma (GitHub: bignimbus) and Janice Wilson and hosted as a GitHub Gist, proposes the Human Em Dash (HED) — a new Unicode code point (U+10EAD) visually indistinguishable from the standard em dash (U+2014) but encoded separately to certify human authorship. The document, written in the formal style of an IETF Request for Comments, also introduces the Human Attestation Mark (HAM, U+10EAC) and coins the term "Dash Authenticity Collapse" (DAC) to describe the growing unease among human writers whose legitimate stylistic choices are now routinely mistaken for AI output. The RFC parodies IETF conventions faithfully, including RFC 2119 MUST/SHOULD/MAY language and a request for IANA to establish a "Human Punctuation Registry" with "excessive documentation."
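The RFC's central conceit is easy to demonstrate in code. The sketch below, a hypothetical illustration rather than anything the RFC ships, shows how the standard em dash (U+2014) and the proposed HED and HAM code points would differ at the byte level while rendering identically: the proposed characters sit in a supplementary plane, so they even take an extra byte in UTF-8.

```python
# Illustrative sketch only: U+10EAD (HED) and U+10EAC (HAM) are the RFC's
# *proposed* code points, not assigned Unicode characters.
EM_DASH = "\u2014"       # standard em dash, U+2014
HED = "\U00010EAD"       # proposed Human Em Dash
HAM = "\U00010EAC"       # proposed Human Attestation Mark

# Visually identical glyphs would be distinct code points...
print(ord(EM_DASH) == ord(HED))          # False

# ...and encode differently: U+2014 is 3 bytes in UTF-8,
# a supplementary-plane character is 4.
print(len(EM_DASH.encode("utf-8")))      # 3
print(len(HED.encode("utf-8")))          # 4
```

Any tooling built on this scheme would therefore need only a byte-level comparison to "verify" authorship, which is precisely what makes the joke work as a spec.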

The satirical premise targets a genuine problem. Large language models, including ChatGPT and Gemini, have a documented habit of reaching for em dashes as a stylistic flourish, which has created a perverse situation: human writers who favored em dashes long before LLMs went mainstream now find their prose flagged or second-guessed as AI-generated. Hacker News commenters responding to the post surfaced the real friction here, with one user (orthogonal_cube) recounting how they pulled back on em dash use after colleagues questioned whether they had used AI assistance. Another commenter (PTOB) put it bluntly: "AI stole the em dash from my toolkit." The RFC's humor lands precisely because the underlying problem is real.

RFC 454545 draws directly from the tradition of technically formatted internet humor. Hacker News commenter mmillin compared it to RFC 3514 — Steve Bellovin's 2003 April Fools' proposal at AT&T Labs Research for an "evil bit" in IPv4 headers to flag malicious packets — a template for jokes that use formal structure to make a genuine point. RFC 454545 accumulated 7 stars and 6 revisions in the days following its March 10 publication, with an active comment thread probing its security model. When commenter MasterMedo pointed out the obvious exploit — injecting HAM/HED replacement rules into a model's system prompt — a collaborator clarified that the RFC "is meant to be more of a 'Please do not circumvent this for bad faith reasons'" than a strict technical standard, which itself illustrates the core tension: any human attestation mechanism is necessarily trust-based rather than cryptographically enforced.
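The exploit MasterMedo described is worth making concrete. A minimal sketch, with entirely hypothetical function names, shows why any checker keyed on the proposed HED code point reduces to trust: nothing prevents a non-human text producer from emitting HED directly.

```python
# Hypothetical sketch of the circumvention discussed in the thread.
# HED (U+10EAD) is the RFC's proposed code point, not a real character.
HED = "\U00010EAD"
EM_DASH = "\u2014"

def looks_human(text: str) -> bool:
    # A naive checker: treats the presence of HED (and absence of the
    # plain em dash) as evidence of human authorship.
    return HED in text and EM_DASH not in text

def spoof(model_output: str) -> str:
    # One replacement rule — e.g. injected via a system prompt —
    # defeats the scheme entirely.
    return model_output.replace(EM_DASH, HED)

print(looks_human(spoof("Written\u2014entirely\u2014by a model")))  # True
```

The one-line `spoof` function is the whole security analysis: absent a cryptographic binding between the mark and a human signer, the mark certifies nothing.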

That tension drew the sharpest commentary. Commenter orthogonal_cube noted that any stylistic marker LLMs stop using will simply be replaced by another, meaning the real issue is not punctuation but the <a href="/news/2026-03-14-optimizing-web-content-for-ai-agents-via-http-content-negotiation">broken social contract around passing off LLM output as human writing</a>. Commenter alico-cra went further: "This feels like it's an RFC that's really trying to get at the need for separating human writing from agentic output. That's a far bigger issue than an RFC." Both comments point to something the joke can only gesture at: as LLM output becomes ubiquitous, the burden of proof for human authenticity is quietly shifting — and it is humans, not the "intruding party," who are being asked to carry it.