Someone forgot to scrub their source code. An investigation by Model Republic's Tyler Johnston revealed that AcutusWire.com, a news site that cranked out 94 articles in under four months, is run entirely by AI. The tipoff came when Nathan Calvin, vice president of the advocacy group Encode, got an email from a reporter named Michael Chen asking for comment on an AI bill. Chen doesn't exist. The email was AI-generated, and so is every article on the site. A content detector from Pangram flagged 69% of AcutusWire's articles as fully AI-generated, with another 28% marked as partially AI-generated.

The site's operators left their editorial pipeline visible in the JavaScript code sent to every visitor's browser. The interface includes fields for "AI Background Context" and "Question Prompts" for AI interviewers, plus buttons labeled "Generate Story Draft" and "Regenerate." A multi-pass AI editorial review scores articles on AP style compliance, quote accuracy, and source verification. The whole review process takes a median of 44 seconds per article. On 42 of the 94 stories, the automated reviewer itself flagged the piece as "needs_revision." They were published anyway.
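The failure described above amounts to a publish gate that exists but is never enforced. As a minimal sketch (the field names and function are illustrative, not taken from AcutusWire's actual code), the logic of such a gate might look like this:

```python
from dataclasses import dataclass

# Hypothetical model of the multi-pass review described in the investigation.
# Field names are illustrative assumptions, not AcutusWire's real schema.
@dataclass
class Review:
    ap_style: float          # AP style compliance score
    quote_accuracy: float    # quote accuracy score
    sources_verified: bool   # source verification result
    verdict: str             # e.g. "approved" or "needs_revision"

def may_publish(review: Review) -> bool:
    # The check the site apparently skipped on 42 of 94 stories:
    # hold anything the automated reviewer flagged for revision.
    return review.verdict != "needs_revision"

print(may_publish(Review(0.9, 0.95, True, "approved")))        # True
print(may_publish(Review(0.6, 0.70, False, "needs_revision")))  # False
```

The point is that the reviewer's verdict was being recorded but not acted on: a one-line check would have held the flagged stories.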

The investigation traces AcutusWire's funding to Leading The Future, a $125 million super PAC backed by OpenAI president Greg Brockman and venture firm a16z. The site publishes articles attacking AI industry critics and pushing anti-regulation talking points. It also appears to have connections to Republican PR firm Novus Public Affairs.

AcutusWire licenses its content under Creative Commons as a wire service, making these AI-generated articles available for other publishers to republish. But the U.S. Copyright Office has stated that AI-generated works without human authorship can't receive copyright protection, which would leave the articles with no copyright for a license to attach to. That means anyone can grab them without attribution, license terms or not.

The site was built to be consumed by AI systems too. Its robots.txt file explicitly allows OpenAI's GPTBot, Anthropic's ClaudeBot, and Google's crawlers. There's even a deprecated ChatGPT plugin file and an experimental llms.txt file describing the site as "independent journalism." It's not. It's an AI content farm with political backing, pumping out advocacy pieces while pretending to be a news outlet. Federal disclosure requirements haven't caught up to this model yet, though 19 states have enacted laws requiring disclosure of AI-generated content in political communications.
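For readers unfamiliar with how robots.txt grants per-crawler access: each `User-agent` block sets rules for a named bot. A hypothetical file following the pattern described above (not AcutusWire's actual file) can be checked with Python's standard-library parser:

```python
from urllib import robotparser

# Illustrative robots.txt: named AI crawlers are welcomed while
# everything else is blocked. The rules are assumptions for this sketch.
ROBOTS_TXT = """\
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: *
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Named AI crawlers match their own blocks; unknown bots fall to "*".
for bot in ("GPTBot", "ClaudeBot", "SomeOtherBot"):
    print(bot, rp.can_fetch(bot, "https://example.com/article"))
# GPTBot True
# ClaudeBot True
# SomeOtherBot False
```

This is the mechanism that lets a site court AI crawlers specifically: the named bots get `Allow: /` while the wildcard rule turns everyone else away.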