Aikido Security has documented a resurgence of the Glassworm supply chain attack campaign, with over 150 GitHub repositories, npm packages, and VS Code extensions compromised between March 3 and March 9, 2026. The attack, which Aikido first identified in March 2025, exploits invisible Unicode characters from the Private Use Area (PUA) range to conceal malicious payloads inside what visually appear to be empty JavaScript strings. At runtime, a small inline decoder extracts the hidden bytes and passes them to eval(), executing the payload without any visible trace in code review interfaces, terminals, or standard editors. In past incidents, the decoded payloads stole tokens, credentials, and secrets, using the Solana blockchain as part of the campaign's command-and-control infrastructure.
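The hiding technique is straightforward to check for once you know it exists. Below is a minimal illustrative detector (not Aikido's tooling; function names are ours) that scans source text for Private Use Area code points, which most editors and diff views render as nothing at all. The ranges come from the Unicode standard: the BMP PUA plus the two supplementary Private Use planes.

```python
# Illustrative scanner for the hiding technique described above: flag any
# Unicode Private Use Area code point, since these render invisibly in most
# editors, terminals, and code-review diff views.
PUA_RANGES = [
    (0xE000, 0xF8FF),        # Basic Multilingual Plane PUA
    (0xF0000, 0xFFFFD),      # Supplementary Private Use Area-A
    (0x100000, 0x10FFFD),    # Supplementary Private Use Area-B
]

def find_pua_chars(source: str):
    """Return (line, column, codepoint) for every PUA character in `source`."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            cp = ord(ch)
            if any(lo <= cp <= hi for lo, hi in PUA_RANGES):
                hits.append((lineno, col, cp))
    return hits

# A line that *looks* like an empty-ish string assignment but carries two
# hidden PUA code points, mimicking the carrier described in the article.
suspicious = 'const s = "' + chr(0xE001) + chr(0xE002) + '";\n'
print(find_pua_chars(suspicious))   # → [(1, 12, 57345), (1, 13, 57346)]
```

Running a check like this over a repository or an incoming patch surfaces exactly the characters that visual review cannot.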

The March 2026 wave hit notable open-source projects including a repository from WebAssembly runtime company Wasmer, the opencode-bench repository from anomalyco (the organization behind the AI coding tool OpenCode and the SST infrastructure framework), and pedronauck/reworm, which has over 1,400 GitHub stars. The campaign has also expanded beyond GitHub, with the same technique now appearing in npm packages and VS Code marketplace extensions, consistent with Glassworm's historical pattern of pivoting between registries when individual ecosystems increase scrutiny.

Aikido's analysis assesses that attackers are <a href="/news/2026-03-15-supply-chain-attackers-use-invisible-unicode-and-suspected-llms-to-flood-github">using large language models</a> to generate contextually tailored cover commits for each targeted repository. The malicious injections arrived alongside realistic-looking documentation tweaks, version bumps, and small refactors stylistically consistent with each project's existing commit history. Aikido's reasoning is straightforward: manually crafting 150-plus bespoke, project-specific code changes is not operationally feasible. If that assessment holds, each injection was stylistically customized by an LLM, a first for this campaign, and the same generative capability behind tools like GitHub Copilot turned against the review process it was meant to assist.

For the AI agent ecosystem, the threat compounds in a specific way. Agentic developer workflows (coding agents that file pull requests, dependency-update bots, and CI/CD pipelines configured to auto-merge low-risk changes) are designed to treat stylistic coherence as a positive signal. LLM-generated commits score well on exactly the dimensions those agents use to approve changes, while the actual payload, invisible at the diff layer, bypasses visual and lint-based review entirely. Aikido's Safe Chain tool addresses the install-time vector by wrapping common package managers to detect and block supply chain malware in real time. But any team running agentic PR workflows that auto-approve based on commit style should treat that approval heuristic as a known attack surface, not a safety net.
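For the diff-layer blind spot specifically, one mitigation can be sketched as a CI gate that rejects any patch whose added lines contain PUA code points. This is a minimal illustration of the idea, not Aikido's Safe Chain or any existing tool; it checks only the lines a unified diff adds, so it fits naturally before an auto-merge step.

```python
# Hypothetical pre-merge gate: scan only the lines a unified diff adds
# (lines starting with '+', excluding the '+++' file header) for invisible
# Private Use Area code points, and refuse the patch if any are found.
PUA_RANGES = [
    (0xE000, 0xF8FF),        # BMP Private Use Area
    (0xF0000, 0xFFFFD),      # Supplementary Private Use Area-A
    (0x100000, 0x10FFFD),    # Supplementary Private Use Area-B
]

def added_lines_clean(diff_text: str) -> bool:
    """Return False if any line added by the patch contains a PUA character."""
    for line in diff_text.splitlines():
        if line.startswith('+') and not line.startswith('+++'):
            if any(lo <= ord(ch) <= hi for ch in line for lo, hi in PUA_RANGES):
                return False
    return True

# An auto-merge pipeline would call this before approving, e.g.:
#   if not added_lines_clean(pull_request_diff):
#       reject("patch adds invisible Unicode characters")
```

Gating on the diff rather than the whole file keeps the check fast and makes the failure actionable: the flagged lines are exactly the ones the suspicious commit introduced.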