Security researcher halvar.flake has a straightforward answer for anyone worried about AI coding agents running amok on their machine: don't let them on your machine. In a recent blog post, he outlines a development workflow built around remote servers, SSH sessions, and tmux that keeps agents contained in disposable VMs. You SSH into a rented server with GitHub key forwarding, attach to a persistent tmux session, and let Claude Code or similar agents churn away while you're disconnected. Secrets stay off the box entirely.

halvar.flake knows the pattern well. SSH into a remote box, attach to screen: as he dryly notes, it was standard practice for people who didn't want incriminating data on machines they physically owned.

The one remaining gap is the forwarded GitHub key. A compromised agent could use it to push malicious code straight to your main repository. His fix: fork your repo, have the agent develop against the fork, and open a cross-repository pull request when the work is done. A human reviews the PR before it merges, which is the gate many open-source projects already enforce.

Hacker News commenters flagged DevContainers as a more polished alternative, and some argued AI agents should get scoped permissions, like untrusted coworkers, rather than shared credentials. Fair points. But the pragmatic pitch is simple: worst case in a supply-chain attack, you lose a disposable VM, not your API credentials or production secrets.
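The remote-session part of the workflow boils down to a couple of commands. A minimal sketch, with the hostname, username, and session name as placeholders (the post does not prescribe specific flags); `ssh -A` forwards the local SSH agent so the GitHub key itself never lands on the server:

```shell
# All names here are illustrative, not from the post.
# -A forwards the local ssh-agent: GitHub auth works on the server,
# but the private key never leaves your machine.
ssh -A dev@agent-box.example.com

# On the server: attach to the persistent session, creating it if needed.
tmux new-session -A -s agents

# Launch the coding agent inside tmux, then detach with C-b d.
# The agent keeps running after you disconnect; reattach later with:
#   tmux attach -t agents
```

Because the session lives in tmux rather than in your SSH connection, dropping the connection (or closing your laptop) leaves the agent running until you reattach.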
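The fork-and-PR isolation can be demonstrated locally. A sketch using bare repositories in place of GitHub (all repo, path, and branch names are invented for the demo): the agent's clone only ever knows about the fork, so even a fully compromised agent can push nothing to upstream; changes reach upstream solely through a reviewed cross-repository pull request.

```shell
# Local simulation of the fork-based isolation; names are illustrative.
# On GitHub, "upstream.git" would be your real repo and "fork.git" the
# fork whose credentials live on the agent's box.
set -e
work=$(mktemp -d)
cd "$work"

git init --bare -b main upstream.git >/dev/null
git init --bare -b main fork.git >/dev/null

# Seed upstream with an initial commit.
git clone upstream.git seed 2>/dev/null
(
  cd seed
  git -c user.email=dev@example.com -c user.name=dev \
      commit --allow-empty -m "initial commit" >/dev/null
  git push -q origin HEAD:main
)

# Mirror upstream into the fork -- what clicking "Fork" does on GitHub.
git clone -q --mirror upstream.git mirror-tmp
( cd mirror-tmp && git push -q --mirror ../fork.git )

# The agent's working copy is cloned from the FORK only; it holds no
# credential for upstream, so a compromise cannot touch the main repo.
git clone -q fork.git agent
(
  cd agent
  git checkout -q -b agent-feature
  echo "agent change" > feature.txt
  git add feature.txt
  git -c user.email=agent@example.com -c user.name=agent \
      commit -q -m "agent work"
  git push -q origin agent-feature
)

# The branch lands on the fork; upstream sees it only after a human
# reviews and merges the cross-repository pull request.
git --git-dir=fork.git branch --list
git --git-dir=upstream.git branch --list
```

Running the script shows `agent-feature` in the fork's branch list but not in upstream's, which is exactly the containment property the workflow relies on.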