A post titled "Please don't write about AI with AI" hit the front page of Hacker News this week. The argument is in the title: using ChatGPT, Claude, or Gemini to draft coverage of those same tools is a credibility problem editors should address, not an efficiency gain they should embrace.
The complaint isn't new, but the context sharpens it. AI-generated prose tends toward plausible-sounding language that papers over inaccuracy. A model trained on startup blogs and press releases doesn't know how to push back on vendor claims; it interpolates from them. The result is coverage that can launder industry narratives as independent analysis without any single person choosing to do so.
That failure mode differs from ordinary journalistic sloppiness. A distracted human reporter can still notice when a source's claims don't add up. A language model generating text about, say, a new model's benchmark performance has no mechanism to flag that the benchmarks were run by the company being covered, or that the metric was chosen because the model excelled at it.
The tech press's record here isn't clean. CNET published AI-generated financial explainers riddled with errors and quietly corrected them in early 2023 after outside reporting exposed the practice. Sports Illustrated ran articles under fake bylines the same year. More recently, <a href="/news/2026-03-14-buzzfeed-nearing-bankruptcy-after-three-years-of-failed-ai-pivot">BuzzFeed's three-year pivot to AI-generated content ended in near-bankruptcy</a>, its stock collapsing amid mounting losses. These cases showed what happens when text generation is decoupled from editorial accountability.
The conflict is more direct for AI journalism than for other beats. The credibility of coverage depends on readers trusting that a human mind engaged critically with the subject. Producing that reporting with the same category of technology it covers compounds every other problem on the beat: fast publication cycles, technically dense subject matter, and sources with hundreds of millions of dollars invested in shaping the narrative.
The post's framing, "please don't," is a request, not a policy. Whether publications will actually change anything remains, for now, an open question.