A developer at Containarium disclosed a security incident that exposes a real gap in AI-assisted development: an AI coding agent selected a dependency version carrying a known CVE, and a cryptominer ended up running on the platform.

The generated code passed all functional tests. That's the point. The failure wasn't a bug in the traditional sense — it was a version choice, made silently, with no PR comment, no audit trail, no explanation. The "why this version?" question that a senior developer would ask during code review was never asked.

Containarium's author, posting as hsin003 on Hacker News, identified visibility as the root problem. When an AI agent scaffolds a project, every pinned version is an implicit architectural decision that looks like boilerplate until it isn't. Nothing in the diff flagged the version selection as a deliberate choice worth scrutinizing — because the agent doesn't flag choices. It just makes them.

The thread produced two practical responses. First, run npm audit before validating any AI-generated functionality, not after. Second, treat AI-generated commits the way you'd treat a pull request from an <a href="/news/2026-03-14-redox-os-adopts-no-llm-contribution-policy-amid-growing-oss-ai-governance-debate">unknown external contributor</a>: assume nothing about the provenance of included packages.
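That ordering can be enforced in CI rather than left to habit. A minimal sketch in GitHub Actions syntax (the job names and the "high" audit threshold are illustrative, not from Containarium's disclosure): the audit job gates the test job, so a dependency with a known advisory fails the pipeline even when every functional test would pass.

```yaml
# Sketch: audit before functional validation. Job names are hypothetical.
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      # npm audit exits non-zero when advisories at or above the
      # given level are found, failing this job.
      - run: npm audit --audit-level=high
  test:
    needs: audit   # tests never run if the audit gate fails
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm test
```

The `needs: audit` edge is the point: it encodes "audit first" as pipeline structure instead of a convention reviewers must remember.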

The standard CI pipeline raised no alarm. Linters, unit tests, and integration checks weren't designed to interrogate why a dependency landed at a specific version. Catching this class of vulnerability requires explicit additions: software composition analysis, SAST scans, or mandatory lockfile audits baked into the pipeline. Containarium has since implemented centralized penetration testing and automated vulnerability scanning across all applications on its platform.
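A mandatory lockfile audit is the cheapest of those additions to sketch. The fragment below (again GitHub Actions syntax; the trigger paths and workflow shape are assumptions, not Containarium's setup) runs whenever a pull request touches package-lock.json, so a silently bumped version can't merge without at least an advisory check:

```yaml
# Sketch: required check on any PR that changes the lockfile.
on:
  pull_request:
    paths:
      - package-lock.json
jobs:
  lockfile-audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # --package-lock-only audits straight from the lockfile without
      # installing node_modules, keeping the check fast.
      - run: npm audit --package-lock-only --audit-level=high
```

Made a required status check, this turns "why this version?" from a question a reviewer might ask into one the pipeline always asks.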

The specific CVE involved has not been publicly identified in Containarium's disclosure.