A developer known as jrswab has released Axe, a 12MB Go binary that lets users define and run LLM-powered agents through TOML configuration files, composing them via Unix pipes, cron jobs, and git hooks rather than through a proprietary orchestration runtime. The tool supports Anthropic Claude, OpenAI, and Ollama backends, with all LLM API calls made directly through Go's standard library — no provider SDKs included. With only four direct dependencies (cobra, toml, mcp-go-sdk, and x/net), Axe is built as an explicit counterpoint to the abstraction-heavy frameworks that have drawn sustained criticism from the developer community since mid-2023. Key features include sub-agent delegation with depth limiting and parallel execution, persistent memory implemented as timestamped markdown logs with LLM-assisted garbage collection, MCP tool integration over SSE and streamable-HTTP transports, and sandboxed file and shell operations scoped to each agent's working directory.
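Axe's actual configuration schema isn't reproduced in its announcement, but given the features described — named agents, per-agent sandboxed working directories, and markdown-log memory — a TOML agent definition might look roughly like this sketch (every field name below is an illustrative assumption, not taken from Axe's documentation):

```toml
# Hypothetical agent definition; field names are illustrative,
# not Axe's actual schema.
name = "summarizer"
model = "claude-sonnet-4-5"   # or an OpenAI / Ollama model name
system_prompt = "Summarize the input into five bullet points."

[sandbox]
# File and shell operations scoped to one working directory.
working_dir = "./notes"

[memory]
# Persistent memory as timestamped markdown logs.
path = "./memory/summarizer.md"
```

Because the definition is plain TOML, it diffs cleanly in git, and composition happens outside the tool entirely, by piping text in and out from shell scripts, cron jobs, or git hooks.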
Axe enters a well-documented counter-movement against heavyweight AI orchestration frameworks, most visibly LangChain. By late 2024, that backlash had produced high-profile defections: Octomind's June 2024 writeup described a year-long LangChain dependency they eventually cut entirely, and CrewAI founder João Moura publicly announced the complete removal of LangChain from his framework in December 2024. Anthropic added institutional weight the same month with its "Building Effective Agents" post, which explicitly advises developers to start with direct LLM API calls and warns that frameworks "often create extra layers of abstraction that can obscure the underlying prompts and responses." Axe's design — TOML configs, dry-run mode, JSON output with metadata, zero provider SDKs — maps directly onto the failure modes those teams documented.
Reception on Hacker News, where the project appeared as a Show HN submission, was broadly positive toward the Unix-philosophy framing but surfaced several practical concerns. Commenters pushed back on the "persistent memory" feature label, arguing that it dresses up what is really an append-only markdown log, an implementation detail users need to understand to reason about growth, pruning, and what the LLM-assisted garbage collection actually discards. A separate concern centered on cost unpredictability: while keeping individual agent contexts small is the stated design goal, fanning out many sub-agents in parallel could produce aggregate API costs exceeding a <a href="/news/2026-03-14-1m-token-context-window-generally-available-claude-opus-4-6-sonnet-4-6">single large context window</a>, a real risk in agentic pipelines without explicit spend controls. Questions about data consistency when multiple agents touch shared files across concurrent runs also go unanswered in the current documentation. These are gaps the project will need to address as it matures.
Axe sits alongside other minimal agent tools — the Rust-based aichat and Simon Willison's llm CLI among them — but occupies different ground. Its TOML agent definitions are version-controllable and diff cleanly in standard git workflows. Native MCP support gives it interoperability with the broader Anthropic tool ecosystem. And jrswab describes Axe explicitly as "the executor, not the scheduler" — scheduling stays out of scope entirely, left to cron and CI, which keeps the binary focused and the dependency count low. The project is at github.com/jrswab/axe and requires Go 1.24 or later to build from source.
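With scheduling deliberately out of scope, wiring an agent into cron amounts to a single crontab line. The entry below is a hypothetical sketch: the `axe` command-line syntax and the file paths are assumptions, not documented usage.

```
# Hypothetical: run a summarizer agent every morning at 06:00.
# The exact `axe` invocation shape is an assumption, not documented syntax.
0 6 * * * cat ~/notes/inbox.md | axe summarizer.toml >> ~/notes/daily.md 2>> ~/notes/axe.log
```

Cron handles when the agent runs, the shell handles what flows in and out, and the binary only has to execute — the "executor, not the scheduler" division jrswab describes.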