TSD Interactive has published Mozzie, an open-source desktop application for running multiple AI coding agents in parallel against a local codebase. The project is available on GitHub. Its core premise is straightforward: a developer describes what they want built, an LLM breaks the goal into pieces, and several agents work those pieces at the same time — with a human signing off on each result before it goes anywhere.
One claim in the project's own documentation deserves a closer reading. Mozzie is described as "local-first," which is accurate in the ways that matter most: your source code never leaves your machine, all application state lives in a SQLite database on-device, and API keys are stored in the OS keychain rather than on a vendor's server. But the LLM calls that decompose tasks and guide agents go to whichever cloud API the developer selects — OpenAI, Anthropic, or Gemini. The orchestration is cloud-assisted; the code handling is not. That distinction matters for developers evaluating this against SaaS-based alternatives.
The task decomposition works through a dependency graph with cycle detection. Each work item gets its own git worktree and branch before an agent — Claude Code, Gemini CLI, Codex CLI, or a user-defined script — starts on it. Sub-tasks use stacked branches that collapse into a single parent pull request when the work is done. This sidesteps the merge conflict problem that tends to appear when multiple agents write to the same working tree without coordination.
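The decomposition step described above amounts to a classic topological-sort problem: tasks whose dependencies are all satisfied can be dispatched to agents in parallel, and a cycle means the plan is invalid. The sketch below is illustrative only, not Mozzie's actual implementation; the function name, the graph shape, and the branch-naming comment are all assumptions.

```python
from collections import deque

def topological_order(tasks: dict[str, list[str]]) -> list[str]:
    """Return a valid execution order for a task dependency graph using
    Kahn's algorithm, raising ValueError if a cycle is detected.
    `tasks` maps a task id to the list of task ids it depends on.
    (Hypothetical sketch; not Mozzie's real data model.)"""
    indegree = {t: len(deps) for t, deps in tasks.items()}
    dependents: dict[str, list[str]] = {t: [] for t in tasks}
    for task, deps in tasks.items():
        for dep in deps:
            dependents[dep].append(task)
    ready = deque(t for t, d in indegree.items() if d == 0)
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)
        for nxt in dependents[task]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(tasks):
        # Some tasks never reached indegree 0: the graph has a cycle.
        raise ValueError("dependency cycle detected")
    return order

# Tasks with no unresolved dependencies can run concurrently; each
# would get its own isolated checkout, conceptually something like
# `git worktree add ../wt-<id> -b agent/<id>`.
graph = {"schema": [], "api": ["schema"], "ui": ["api"], "tests": ["api"]}
print(topological_order(graph))  # → ['schema', 'api', 'ui', 'tests']
```

The per-task worktree is what makes the parallelism safe: each agent writes to its own checkout and branch, so conflicts surface at merge time rather than mid-edit.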
The review mechanism is the part most likely to shape whether developers stick with it. Completed work items enter a queue; the developer inspects the diff, then approves, rejects, or sends a feedback note. Rejections aren't discarded — the reason and the full attempt history get prepended to the agent's next prompt on retry. Whether that feedback injection actually improves subsequent attempts will depend heavily on the task type and the model, but it at least gives the agent something to work with rather than starting cold.
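The retry mechanism described above is essentially prompt assembly: stitch the rejection reasons and prior attempts onto the front of the next instruction. A minimal sketch of that idea follows; the function name and the attempt-record shape are assumptions, not Mozzie's API.

```python
def build_retry_prompt(task: str, attempts: list[dict]) -> str:
    """Prepend reviewer feedback and the full attempt history to the
    next prompt, so the agent retries with context rather than cold.
    Each attempt dict carries 'summary' and 'feedback' keys
    (a hypothetical record shape for illustration)."""
    lines = []
    for i, attempt in enumerate(attempts, 1):
        lines.append(f"Attempt {i}: {attempt['summary']}")
        lines.append(f"Reviewer feedback: {attempt['feedback']}")
    history = "\n".join(lines)
    return (
        "Previous attempts at this task were rejected:\n"
        f"{history}\n\n"
        f"Task: {task}"
    )

prompt = build_retry_prompt(
    "Add input validation to the signup form",
    [{"summary": "Validated email only", "feedback": "Password rules missing"}],
)
print(prompt)
```

Whether this helps in practice is, as the article notes, model- and task-dependent, but structurally it is just context accumulation across review rounds.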
Mozzie enters a category that has grown crowded over the past year, with several projects — some SaaS, some self-hosted — attempting to solve the same coordination problem. Its differentiator is the combination of human-gated approvals and the fact that no third party touches your repository. For developers at companies with IP sensitivities, or anyone who has watched an autonomous agent push broken code to a branch at 2am, the mandatory review step may be reason enough on its own.
The open question is throughput. If a large task decomposes into many parallel work items that all resolve around the same time, the developer faces a stack of diffs to clear. Mozzie's value proposition — parallelized agents, serial human review — works best when the agent queue and the human review cadence stay roughly in sync. How it degrades when they don't is worth testing before committing to it on anything time-sensitive.