Infisical published a practical security guide in March 2026 addressing credential exposure risks specific to Cursor Cloud Agents, the autonomous coding agents that spin up isolated Ubuntu VMs when triggered from Slack, GitHub, or Linear. Infisical identified three recurring vulnerability patterns: secrets baked into VM disk snapshots (such as npm auth tokens frozen into images during install steps), sensitive values hardcoded in the committed .cursor/environment.json configuration file, and long-lived static credentials stored in Cursor's built-in Secrets UI with no rotation, audit trail, or access isolation between team members or environments.
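The second pattern is easy to picture concretely. A hedged sketch of what a leaky committed config might look like (field names here are illustrative, not necessarily Cursor's exact environment.json schema, and the values are obviously fake placeholders):

```json
{
  "install": "npm ci",
  "env": {
    "NPM_TOKEN": "npm_xxxxREAL_SECRET_DO_NOT_COMMITxxxx",
    "DATABASE_URL": "postgres://app:hunter2@db.internal:5432/prod"
  }
}
```

Once a file like this is committed, the values live in git history for anyone with repo access, and any VM snapshot taken after the install step can carry them too.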

Infisical's proposed solution is a two-layer credential architecture designed to keep real secrets out of Cursor's storage entirely. The approach stores only Infisical machine identity credentials — a client ID and client secret — in Cursor's Secrets UI, then uses those at agent boot time to dynamically fetch all other secrets from Infisical via either the infisical run command, which injects secrets as in-process environment variables without touching disk, or infisical export, which writes secrets to files for tools that require file-based configuration. Secrets are fetched fresh on every VM boot, never persisted into snapshots, and every access is logged in Infisical's audit trail. The guide also recommends scoping machine identities per environment — separate identities for dev, production, and CI — to limit blast radius in the event of a prompt injection attack or compromised agent run.
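The shape of that boot-time flow can be sketched in a few lines. This is an illustrative mock, not Infisical's SDK: fetch_secrets here is a hypothetical stand-in for the real universal-auth exchange, and the only values assumed to be present at boot are the machine-identity pair from Cursor's Secrets UI.

```python
import os
import subprocess

def fetch_secrets(client_id: str, client_secret: str, env_slug: str) -> dict:
    # Stand-in for the Infisical API call: exchange the machine-identity
    # credentials for that environment's secrets. Mocked for illustration.
    assert client_id and client_secret  # the only values stored in Cursor's Secrets UI
    return {
        "NPM_TOKEN": f"token-for-{env_slug}",
        "DATABASE_URL": f"db-for-{env_slug}",
    }

def run_agent_task(cmd: list, env_slug: str = "dev") -> None:
    # The identity is scoped per environment (dev/prod/ci), so a compromised
    # dev run cannot pull production secrets.
    fetched = fetch_secrets(
        os.environ["INFISICAL_CLIENT_ID"],
        os.environ["INFISICAL_CLIENT_SECRET"],
        env_slug,
    )
    # Inject secrets as in-process environment variables for the child only.
    # Nothing is written to disk, so nothing can be frozen into a snapshot.
    subprocess.run(cmd, env={**os.environ, **fetched}, check=True)
```

Secrets fetched this way exist only for the lifetime of the child process; a snapshot of the VM's disk taken before or after the run has nothing to capture.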

The post drew substantive discussion on Hacker News. One commenter pointed to the nono project, which takes an even more aggressive approach: sandboxed agents receive only a session-scoped dummy token, while the actual API credential is held by a local reverse proxy that injects it into outbound HTTP requests, so the agent never sees the real key at all. Another commenter challenged Infisical's framing, arguing that the attack surface is not merely a product of careless configuration but is inherently created the moment you delegate autonomous capability to an agent; dynamic secret fetching shrinks the exposure window but cannot eliminate the fundamental risk. The open question neither approach answers is what happens when prompt injection manipulates the agent into calling infisical export itself, and whether the answer will come from Cursor adding sandboxed credential scopes at the tool-call level or from third-party proxies like nono filling that gap.
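The proxy idea behind nono reduces to a small token-swapping step at the network boundary. The sketch below is an illustration of that general pattern, not nono's actual code: the sandbox only ever holds a dummy token, and the proxy substitutes the real credential on the way out.

```python
import secrets

# The real credential lives only in the proxy process, outside the sandbox.
REAL_API_KEY = "sk-real-key-never-enters-the-sandbox"  # fake placeholder value

def new_session_token() -> str:
    # Session-scoped dummy token handed to the sandboxed agent.
    # It is worthless anywhere except at this proxy.
    return "dummy-" + secrets.token_hex(16)

def rewrite_outbound(headers: dict, session_token: str) -> dict:
    # Reject requests that do not carry the expected dummy token, then
    # substitute the real credential just before the request leaves the host.
    if headers.get("Authorization") != f"Bearer {session_token}":
        raise PermissionError("unknown or missing session token")
    return {**headers, "Authorization": f"Bearer {REAL_API_KEY}"}
```

Even if prompt injection convinces the agent to print every environment variable it can see, the only credential it can exfiltrate is the dummy token, which expires with the session and is useless without the proxy.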