Daniel Stenberg calls it being "DDoS'd by slop." The curl maintainer says roughly 20% of security submissions to the project's HackerOne bug bounty program are now AI-generated garbage, while genuine vulnerabilities account for around 5% of incoming reports. AgentWars has not independently verified these figures against HackerOne platform data — they come from Stenberg's public statements and should be treated as his characterization pending confirmation. The program has paid out over $90,000 across 81 legitimate disclosures since launching in 2019, again per Stenberg; those numbers also need independent sourcing. A GitHub Gist maintained by the curl team, last updated in March 2026, catalogs dozens of fabricated reports: claimed buffer overflows in WebSocket handling, and nonexistent hard-coded private keys in the source repository.

The burden falls entirely on maintainers. Curl's security team has roughly seven members — a figure Stenberg has cited in public posts — and each report typically draws in three to four of them. Each spends between 30 minutes and several hours checking claims that require genuine expertise to debunk. AI-generated reports mimic legitimate security writing well enough to demand serious attention. Most team members are part-time contributors with only a few hours a week available. Stenberg has said the cumulative toll is pushing internal discussion toward scrapping the bounty program's monetary reward component. RedMonk analyst Kate Holterhoff interviewed Stenberg for a piece reportedly titled "AI Slopageddon and the OSS Maintainers," dated February 2026 — AgentWars is working to verify the publication and its details before this article goes final.
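The arithmetic behind that toll can be made concrete. A back-of-envelope sketch, using the people-per-report and time ranges above; the monthly slop volume is an assumption for illustration, not a figure from Stenberg:

```python
# Person-hours burned triaging one bogus report, using the article's
# ranges: three to four reviewers, 30 minutes to several hours each.
reviewers_per_report = (3, 4)
hours_per_reviewer = (0.5, 3.0)

low = reviewers_per_report[0] * hours_per_reviewer[0]
high = reviewers_per_report[1] * hours_per_reviewer[1]
print(f"per report: {low:.1f} to {high:.1f} person-hours")   # 1.5 to 12.0

# Assumed volume, for illustration only: ten slop reports a month is
# enough to consume a part-time volunteer's entire ~20 hours/month.
monthly_slop_reports = 10
print(f"per month: {monthly_slop_reports * low:.0f} to "
      f"{monthly_slop_reports * high:.0f} person-hours")     # 15 to 120
```

Even at the low end of these assumed ranges, a handful of fabricated reports a month absorbs time the team does not have.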

HackerOne's incentive structure works against a fix. The platform earns revenue through subscription fees and a cut of bounty payouts; its pitch to enterprise clients rests on volume metrics: registered hackers, total submissions, total bounties paid. A flood of AI-generated reports inflates every one of those numbers even as it destroys the programs generating them. The platform's Reputation system, designed to penalize bad-faith submitters, loses its bite when actors can cycle fresh accounts at near-zero cost. Mitigations that would actually work — submission fees refunded on valid reports, mandatory identity verification, server-side AI detection — would require HackerOne to publicly acknowledge platform degradation. That sits awkwardly with how the company sells itself. None of these mitigations has been implemented.
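As a sketch of how the first of those mitigations, a submission fee refunded on valid reports, would shift the submitter's economics — the fee amount and validity rates below are assumptions for illustration, not a proposal from curl or HackerOne:

```python
def expected_cost(fee, validity_rate, reports):
    """Expected net fee paid under a refund-on-valid scheme:
    valid reports get the deposit back, invalid ones forfeit it."""
    return fee * (1 - validity_rate) * reports

FEE = 20.0  # assumed deposit per submission, refunded if the report is valid

# A careful researcher whose reports are mostly valid pays little;
# a spam operation whose reports are never valid forfeits every deposit.
researcher = expected_cost(FEE, validity_rate=0.75, reports=10)
spammer = expected_cost(FEE, validity_rate=0.0, reports=500)
print(researcher)  # 50.0
print(spammer)     # 10000.0
```

The mechanism's appeal is that it prices out high-volume, low-validity submitters without gatekeeping skilled researchers — but deploying it would mean admitting the volume problem exists.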

The cost asymmetry drives the whole problem. When curl's security team spends hours debunking a hallucinated CVE, HackerOne absorbs none of that cost — no platform employee touches the report. If the program shuts down from maintainer exhaustion, HackerOne loses one small account and retains plausible deniability. Across the open-source ecosystem, projects running active bug bounties face the same math. LLM proliferation didn't invent this misalignment, but it sharpened it: AI-generated reports are harder to <a href="/news/2026-03-14-anti-slop-github-action-with-31-rules-to-auto-close-ai-generated-low-quality-prs">filter automatically</a> than traditional spam, more expensive to evaluate because they mimic legitimate security writing, and produced at scale by tools that HackerOne has no contractual obligation to police. Stenberg told Holterhoff he has raised these structural issues with HackerOne directly. The program is still running, and the reports keep coming.
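That asymmetry can be put as a ratio. In the sketch below, the triage range follows from the figures reported earlier (three to four people, 30 minutes to several hours each); the per-report generation cost and the dollar value of a volunteer hour are assumptions:

```python
# Illustrative asymmetry between generating and debunking one report.
ATTACKER_COST = 0.25        # assumed dollars of LLM tokens per report
MAINTAINER_RATE = 75.0      # assumed value of a volunteer hour, $/h
TRIAGE_HOURS = (1.5, 12.0)  # 3-4 people x 0.5-3 h each, per the article

for hours in TRIAGE_HOURS:
    defender_cost = hours * MAINTAINER_RATE
    print(f"{hours:>4.1f} h triage -> ${defender_cost:,.0f} burned, "
          f"{defender_cost / ATTACKER_COST:,.0f}x the attacker's spend")
```

Under these assumptions the defender pays hundreds to thousands of times what the attacker does per report — and the attacker's side scales effortlessly while the defender's does not.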