VibePod, an open-source CLI tool developed by Austrian developer Harald Nezbeda, has launched a unified interface for running multiple AI coding agents inside isolated Docker containers. Available via pip install and invoked with the alias vp, the tool currently supports seven agents — Claude (Anthropic), Gemini (Google), Codex (OpenAI), OpenCode (SST), Devstral (Mistral), Auggie (Augment Code), and GitHub Copilot — each executing in its own container to prevent credential bleed between sessions and eliminate the need for global package installs on the host machine. The project is available at github.com/VibePod/vibepod-cli.
The technical centerpiece is a built-in HTTP proxy layer, vibepod-proxy, which uses mitmproxy to intercept and log all outbound agent traffic to a local SQLite database. A Datasette-powered <a href="/news/2026-03-14-rudel-open-source-analytics-dashboard-for-claude-code-sessions">analytics dashboard</a>, accessible at localhost:8001, visualizes per-agent HTTP traffic, token usage metrics for Claude, and side-by-side agent benchmarking. All data remains on the user's machine with no telemetry sent to the cloud — a design choice Nezbeda frames explicitly as privacy-first. This architecture mirrors the pattern he had already established across six individually containerized agent repos under his personal GitHub namespace, nezhar, starting with claude-container in August 2025.
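The logging pattern itself is simple to reproduce with nothing beyond Python's standard library. The sketch below mimics what a mitmproxy addon's `response(flow)` hook might write per intercepted request; the table and column names are illustrative assumptions, not VibePod's actual schema:

```python
import sqlite3

# Assumed schema for illustration; VibePod's real tables may differ.
SCHEMA = """
CREATE TABLE IF NOT EXISTS requests (
    agent   TEXT NOT NULL,   -- which containerized agent made the call
    method  TEXT NOT NULL,
    url     TEXT NOT NULL,
    status  INTEGER,
    tokens  INTEGER          -- parsed from the API response, when present
)
"""

def log_request(db, agent, method, url, status, tokens=None):
    """Record one intercepted HTTP exchange.

    In a real mitmproxy addon this would be called from the
    response(flow) hook, with fields read off flow.request
    and flow.response.
    """
    db.execute(
        "INSERT INTO requests (agent, method, url, status, tokens) "
        "VALUES (?, ?, ?, ?, ?)",
        (agent, method, url, status, tokens),
    )

db = sqlite3.connect(":memory:")
db.execute(SCHEMA)
log_request(db, "claude", "POST", "https://api.anthropic.com/v1/messages", 200, 512)
log_request(db, "gemini", "POST", "https://generativelanguage.googleapis.com/v1beta/models", 200, 430)
count = db.execute("SELECT COUNT(*) FROM requests").fetchone()[0]
print(count)  # 2
```

Once the rows are in a plain SQLite file, pointing Datasette at it is all the dashboard layer needs.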
VibePod is, in effect, an orchestration layer built atop Nezbeda's own prior work. Over a six-month period he iteratively containerized Claude, Gemini, Devstral, OpenCode, Auggie, and Copilot as standalone repos — each featuring the same Docker isolation, mitmproxy logging, and Datasette UI — before wrapping them in the unified vibepod-cli under a new GitHub organization. The nezhar/claude-container repo, the archetype for the entire ecosystem, has accumulated 147 GitHub stars since its August 2025 launch, while vibepod-cli currently sits at just 8 stars. That gap suggests the developer community has been relying on the individual containers for months without realizing a unified interface now exists on top of them.
For teams evaluating AI coding assistants, VibePod addresses a concrete operational gap: running multiple agents safely in shared or CI environments while capturing objective, vendor-neutral usage data. The ability to <a href="/news/2026-03-14-iris-open-source-mcp-native-eval-observability-tool-for-ai-agents">benchmark</a> Claude, Gemini, Codex, and others side-by-side through a single local dashboard — without relying on each vendor's own reporting — gives developers an independent view of API traffic and token costs. The project's biggest practical hurdles are configuring credentials across seven agents and keeping the container images in step with upstream CLI releases reliably enough for teams to depend on in production. Given the 147-star head start of the underlying containers, the audience is already there — Nezbeda just needs to get them to upgrade to the unified tool.
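Because the data lives in a local SQLite file, the side-by-side view the dashboard renders reduces to a single GROUP BY over the request log. A minimal sketch, again assuming an illustrative schema rather than VibePod's real one:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE requests (agent TEXT, tokens INTEGER)")
# Illustrative traffic; real numbers come from intercepted API responses.
db.executemany(
    "INSERT INTO requests VALUES (?, ?)",
    [("claude", 512), ("claude", 300), ("gemini", 430), ("codex", 150)],
)

# Per-agent totals: the core of a vendor-neutral benchmarking view.
rows = db.execute(
    "SELECT agent, COUNT(*) AS calls, SUM(tokens) AS total_tokens "
    "FROM requests GROUP BY agent ORDER BY total_tokens DESC"
).fetchall()
for agent, calls, total in rows:
    print(f"{agent:8s} {calls:3d} calls  {total:6d} tokens")
```

The point of doing this locally is that the same query runs against every agent's traffic, so cost comparisons never depend on any one vendor's usage reporting.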