Developer Jaron Swab has released Axe, a 12MB Go binary that applies Unix philosophy to LLM agent orchestration. Rather than building another chat-centric framework, Axe treats each AI agent as a small, focused program defined in a TOML file and composed externally via stdin pipes, cron jobs, git hooks, or CI pipelines. The tool supports Anthropic (Claude), OpenAI (GPT models), and Ollama for local inference, and ships with just four direct dependencies — cobra, toml, mcp-go-sdk, and x/net — a deliberate contrast to the dependency-heavy frameworks that have come to dominate AI tooling. Swab has been direct about his motivation: "I built Axe because I got tired of every AI tool trying to be a chatbot. Most frameworks want a long-lived session with a massive context window doing everything at once. That's expensive, slow, and fragile."
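To make the "agent as a small program" idea concrete, a definition file might look something like the following. This is an illustrative sketch only; the field names and layout are assumptions, not Axe's documented schema:

```toml
# Hypothetical Axe agent definition -- field names are illustrative,
# not taken from Axe's actual schema.
[agent]
name   = "commit-summarizer"
model  = "claude-sonnet"   # Anthropic, OpenAI, or Ollama backend
prompt = "Summarize the diff read from stdin as a one-line commit message."

[limits]
max_depth = 2              # cap how deep sub-agent delegation may go
```

The composition then happens outside the tool, in the shell or a git hook, for example by piping `git diff` output into the binary, rather than inside a long-lived framework session.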
Axe is more capable than its minimal footprint suggests. Agents can delegate work to sub-agents with configurable depth limits and parallel execution, and can draw on a SKILL.md-based reusable instruction system that Swab first prototyped in an earlier project called slipbot. Persistent memory is implemented as timestamped markdown logs with LLM-assisted garbage collection — a deliberately simple mechanism that keeps state manageable without introducing a database dependency. The tool also integrates with the Model Context Protocol (MCP) via SSE or streamable-HTTP transport, giving Axe agents access to the growing ecosystem of MCP servers for external data and tooling. Docker support includes multi-architecture builds with a hardened, non-root container configuration.
The Hacker News thread drew pointed criticism from named commenters. Commenter bensyverson pointed to cost control as the core risk in fan-out sub-agent architectures: while Axe's single-purpose, small-context design is inherently cheaper per run than monolithic sessions, triggering ten parallel sub-agents can still produce unexpected API bills, a risk that <a href="/news/2026-03-14-context-gateway-llm-compression-proxy">tools like Context Gateway</a> are designed to mitigate. Yet Axe currently leaves budget governance entirely to the operator — consistent with its Unix-tool philosophy of providing primitives without guardrails. A second commenter, athrowaway3z, pushed back on the "persistent memory" framing, arguing that the term invites scope-creep expectations and that the implementation's simplicity deserved more upfront documentation. A third commenter raised concurrency questions about multi-agent file consistency when agents share working directories.
Axe is available now as open source at github.com/jrswab/axe, installable via go install with Go 1.24 or later. Teams already running Unix-native workflows can drop in LLM automation without adopting a full framework — Ollama support makes it especially useful for operators who need inference on-premise. The cost-governance and memory-semantics criticisms are legitimate operational concerns, but they apply to any composable tool that can fan out parallel processes; the Unix philosophy has always meant the operator holds the guardrails. Swab is already running Axe in production automation flows — the most honest endorsement available.