Axe is a Go CLI tool for running LLM-powered agents defined in TOML files. The whole thing ships as a 12 MB binary with two dependencies. GitHub user jrswab published it earlier this month.

The pitch is straightforward: you don't need a framework. Define your agent in a TOML file, pipe input to it, get output back. `git diff | axe run pr-reviewer` is the canonical example. No daemon, no server, no GUI. Agents slot into cron jobs, git hooks, and CI pipelines the same way any Unix utility would. Configuration files live in version control alongside your code — diffs, reviews, rollback, the same workflow you'd use for any infrastructure config.
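To make that concrete, a minimal agent definition might look like the sketch below. The field names (`name`, `provider`, `model`, `system_prompt`) are assumptions for illustration, not Axe's documented schema — the repository's example agents are the authoritative reference.

```toml
# Hypothetical agent definition for a PR reviewer.
# Key names are illustrative, not Axe's documented schema.
name = "pr-reviewer"
provider = "anthropic"
model = "claude-sonnet"   # assumed model identifier
system_prompt = """
You review git diffs. Flag likely bugs, style issues, and missing tests.
Respond with a short bulleted summary.
"""
```

Piped input becomes the agent's task: `git diff | axe run pr-reviewer` would send the diff to the model and print the review to stdout, ready to redirect into a file or a CI comment step like any other Unix filter.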

Axe supports Anthropic, OpenAI, and Ollama, so the same agent definition runs against Claude, GPT-4, or a local model. That flexibility matters for teams managing cost and data sovereignty tradeoffs across different workflows — it's one of the few agent runtimes covering the full range from hosted APIs to air-gapped deployments.
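Assuming the definition exposes `provider` and `model` fields (hypothetical names, not confirmed against Axe's schema), retargeting a workflow from a hosted API to a local model would be a two-line diff in the TOML file rather than a code change:

```toml
# Hypothetical: the same agent retargeted at a local Ollama model.
# Field names are illustrative, not Axe's documented schema.
name = "pr-reviewer"
provider = "ollama"       # was "anthropic"
model = "llama3"          # any model already pulled into the local Ollama daemon
system_prompt = "You review git diffs for bugs and style issues."
```

Under that assumption, a sensitive repository can point its agents at an air-gapped Ollama instance while everything else uses a hosted API, with the same agent files checked into both.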

The feature set is more substantial than the binary size implies. Sub-agents work as LLM tool calls with configurable recursion depth and optional parallel execution. Memory persists as timestamped markdown logs; when context grows unwieldy, the LLM itself trims it — no vector database required. Built-in tools for file operations and shell execution are scoped to a working directory, keeping agents contained without the overhead of full containerization. A skill system lets you define reusable instruction sets and share them across multiple agents.
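One way those pieces might compose in a single definition — again with guessed key names, since the source describes the features but not the exact TOML layout:

```toml
# Hypothetical composition of the features above.
# All key names are guesses for illustration, not Axe's documented schema.
name = "release-manager"
provider = "anthropic"

[memory]
enabled = true              # persisted as timestamped markdown logs

[tools]
working_dir = "./workspace" # file and shell tools scoped to this directory

[[sub_agents]]
agent = "pr-reviewer"       # exposed to the parent LLM as a tool call
parallel = true

skills = ["conventional-commits"]  # reusable instruction set shared across agents
```

The point of the sketch is the shape, not the keys: one declarative file wiring together memory, scoped tools, sub-agents, and skills, versioned like any other config.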

The project is clearly aimed at engineers comfortable at the command line, and it's early-stage. Docker support covers multi-architecture builds, and bundled example agents for code review and commit message generation lower the initial setup cost. Whether Axe finds an audience beyond CLI-native developers will depend largely on whether TOML-defined agents feel like a natural fit outside backend engineering workflows — but as a piece of tooling, the architecture is unusually disciplined for something this new.