A new community etiquette manifesto published at stopsloppypasta.ai has coined the term "sloppypasta" for the workplace habit of copy-pasting raw output from large language models like ChatGPT or Claude directly into Slack threads, emails, and shared documents without first reading, editing, or verifying it. The term blends "slop," the already-established slang for low-quality AI-generated content, with "copypasta," the internet term for text copied and forwarded without critical thought. The manifesto was developed by Alex Martsinovich, who previously authored an essay titled "It's rude to show AI output to people," and Blake Stockton, who had independently written an "AI Writing Etiquette Manifesto." It hit the front page of Hacker News, where a long comment thread formed from people who had experienced the behavior but had no name for it.
The manifesto's core argument centers on an effort asymmetry: LLMs have made generating text effectively free in terms of human time, but reading and verification remain just as costly as ever for the recipient. Martsinovich frames it directly: "For the longest time, writing was more expensive than reading. If you encountered a body of written text, you could be sure that at the very least, a human spent some time writing it down. The text used to have an innate proof-of-thought, a basic token of humanity." When someone pastes raw LLM output, the site argues, they are <a href="/news/2026-03-15-comprehension-debt-the-hidden-cost-of-ai-generated-code">offloading their skipped cognitive work</a> onto whoever receives it, and as the practice spreads, that frustration compounds across every recipient. Stockton adds that "a polished AI response feels dismissive even if the content is correct," framing the damage as interpersonal rather than factual: the trust problem persists even when the AI gets everything right.
The manifesto identifies three archetypal offenders: the "Eager Beaver," who floods ongoing discussions with generic AI responses in an attempt to be helpful; the "OrAIcle," who treats chatbot output as authoritative expert answers to specific questions, an enshittified LLM-era equivalent of LMGTFY; and the "Ghostwriter," who presents AI output as personal research, lending their own credibility to content they have not actually vetted. The site also raises a hallucination risk: because LLMs generate text with the tone and confidence of an expert <a href="/news/2026-03-15-statgpt-imf-chatgpt-wrong-66-86-percent">regardless of accuracy</a>, recipients have no way to gauge the sender's actual understanding of the subject, which breaks the previously functional "trust but verify" norm of written communication.
To counter these patterns, the manifesto proposes five rules: read the LLM output before sending it; verify its factual claims; distill it to only what is relevant; disclose that AI assisted in drafting; and share only when the content was actually requested. The site places these rules in a broader conversation about AI literacy, citing AI commentator Simon Willison alongside the manifesto's own authors. The five rules are not a ban on AI tools; they simply put the cognitive labor back on the sender, where Martsinovich and Stockton argue it belongs.