A fake reporter named Michael Chen emailed Nathan Calvin at the advocacy group Encode last week, seeking comment on an AI bill in Tennessee. The framing was loaded. The headline was already written. And Michael Chen doesn't exist. Tyler Johnston at Model Republic ran the message through Pangram, an AI detector with a near-zero false-positive rate. Its verdict: fully AI-generated.

Acutus Wire, where Chen claimed to work, launched on December 29, 2025. It has already published 94 articles on AI policy, Senate races, energy reform, and pharmacy regulation. The site bills itself as an "independent" wire service offering "expert-sourced journalism." It's neither. Johnston found Acutus' React source code exposed in the browser, revealing buttons labeled "Generate Story Draft" and "Regenerate," fields for "AI Background Context," and an automated editorial review that scores pieces on AP style compliance and quote accuracy. The review takes about 44 seconds. The AI reviewer flagged 42 stories as "needs revision." Acutus published them anyway. Of the 94 articles, 69% came back as fully AI-generated, another 28% as partially generated, and only three as human-written.

The money trail leads somewhere uncomfortable. Johnston traced Acutus to OpenAI's super PAC, "Leading The Future," and the Republican PR firm Novus Public Affairs. It looks like a coordinated effort to plant AI-written attack pieces against AI industry critics while posing as independent journalism. OpenAI CEO Sam Altman and Global Policy Chief Chris Lehane have pushed hard on bipartisan lobbying to shape AI regulation. Now their super PAC appears to be funding fake reporters to do the same work from outside the building. The site even serves files formatted for AI crawlers and language models to ingest: the content isn't just for human eyes. It's meant to be scraped, syndicated, and regurgitated until manufactured narratives become indistinguishable from real reporting.