Last month, a startup called Voltropy published a GitHub repository with an unusual pitch: a programming language designed primarily for AI agents to read, write, and load at runtime — not for human developers to maintain. The language is called Mog. Its timing is pointed. Agentic systems capable of writing and executing their own code are proliferating faster than the security tooling around them, and the improvised solutions most teams reach for have known holes.
The problem has a name in security circles: ambient authority. Hand an AI agent a Python interpreter or a bash shell and that agent can, in principle, reach for anything the host process is permitted to touch. Sandboxing via containers or chroot jails addresses this at the infrastructure level, but it's coarse-grained, easy to misconfigure, and invisible to the agent's own reasoning about what it's doing. A compromised agent can often escalate through the scripts it writes, because those scripts run with the same ambient permissions as the agent itself.
Mog's answer is to bake the permission model into the language. Every side-effecting operation — filesystem reads and writes, network calls, spawning subprocesses — must be explicitly granted by the host at the moment a Mog module is loaded. The grants are also transitive: code an agent writes and loads at runtime can only inherit the capabilities the host originally delegated. An agent cannot mint new permissions for itself through the programs it generates.
```mog // Host grants only fs::read on /tmp — no network capability delegated plugin analyze(input: []byte) -> Result<Report, Error> { let raw = fs::read("/tmp/data.json")?; // OK: capability granted let resp = net::post("https://example.com", raw)?; // runtime error: capability not granted Report::parse(resp) } ```
Whether that model holds up under adversarial conditions is harder to assess from documentation alone. Voltropy's materials reference an intermediate-representation backend called rqbe — described as a safe-Rust rewrite of the QBE compiler backend — but a standalone rqbe repository was not locatable at press time, and Voltropy did not respond to a request for comment before publication. The performance claims are plausible given the compilation approach: native code via a Rust-based toolchain, shared-library loading directly into the host binary with no IPC or process-startup overhead. Independent benchmarks don't yet exist.
What's more immediately legible is the language surface itself. Mog's full specification runs to roughly 3,200 tokens — small enough to sit inside a single context window — and deliberately borrows syntax from Rust, Go, and TypeScript. Voltropy's bet is that LLMs already have strong statistical priors for those languages and can generate correct Mog without fine-tuning. The toolchain ships a dedicated LLM context document and a 755-line example file alongside the compiler, which suggests the team is thinking seriously about the generation side of the problem, not just the execution side.
The exclusion list is as telling as the inclusion list. No raw pointers, no threads, no macros, no generics, no POSIX syscalls. This puts Mog in an interesting position against other safe-execution approaches. WebAssembly runtimes like Wasmtime offer comparable sandboxing properties with a much larger existing tooling ecosystem. Managed sandbox services like E2B handle isolation at the infrastructure layer entirely, requiring no language buy-in. Mog's counter-argument is that a constrained language is a more auditable surface than either — but it requires teams to add a new language to their agent toolchain, which is a real adoption cost that shouldn't be glossed over.
The project is MIT-licensed and the repository is actively soliciting contributions. If the capability-propagation guarantee holds up to scrutiny — ideally a formal security audit rather than just design intent — it represents the kind of primitive that could underpin a more principled approach to agentic security at scale. Whether that scrutiny arrives before, or after, a high-profile incident forces the conversation is less certain.