Tech writer Daniel Miessler published a satirical essay on February 23, 2026 titled "Why I Hate Anthropic and You Should Too," using ironic framing to defend the AI safety company against influencer-driven criticism of its Claude MAX subscription model. The piece catalogues complaints circulating in developer communities: that Anthropic poorly communicated the intended use case for its MAX tier, that it cracked down on users treating the subscription as a cheap workaround to direct API pricing, and that CEO Dario Amodei's earnest communication style compares unfavorably to smoother-talking rivals. The satirical narrator presents these grievances as damning, which is precisely the rhetorical trick: by the end, the triviality of the complaints has been made unmistakable.
The essay's substantive argument emerges through contrast. While the narrator grumbles about token budgets and API pricing, Miessler quietly enumerates Anthropic's actual institutional record: the company publicly <a href="/news/2026-03-14-pentagon-anthropic-claude-military-red-lines">opposed Pentagon requests to weaponize its tools</a>, took the politically costly position at Davos that granting China access to frontier models and advanced AI chips was "fucking stupid," and has been more forthcoming than any major competitor about the disruptive risks of the technology it is building. Miessler frames these as boring and uncool — precisely because the satirical target is an audience that would rather those positions not complicate their access to cheap inference. The essay does not name competitors directly, but the implicit comparison to labs offering "smoother" leadership and "half as expensive" models is legible throughout.
The underlying tension in the piece is simple: Anthropic charges more and restricts more because it was built to. In 2021, Dario Amodei, Daniela Amodei, and several colleagues left OpenAI specifically over safety governance disagreements, then founded a company structured around that disagreement. Higher API costs and stricter usage policies follow from that founding logic, not from pricing strategy; developers who want cheap inference are, in effect, asking Anthropic to be something it was explicitly created not to be. The "I'm not Gandhi; I'm trying to build a SaaS app" line the essay deploys as its punchline captures that fault line cleanly. Miessler's conclusion, that Anthropic is probably the <a href="/news/2026-03-14-anthropic-institute-societal-economic-governance">most ethically serious major AI lab operating today</a>, lands as a straightforward assertion once the satirical scaffolding is removed.