Where are my tokens actually going? That's the question CodeBurn wants to answer.

The new open-source CLI tool from AgentSeal reads session data straight from local disk for six AI coding assistants, including Claude Code, Cursor, GitHub Copilot, and OpenAI's Codex. It breaks usage down across 13 task categories such as coding, debugging, testing, and refactoring.
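At its core, that per-category breakdown amounts to tallying tokens under task labels. A minimal sketch, with an invented record shape (the category names follow the article; nothing here is CodeBurn's actual code):

```typescript
// Hypothetical sketch of a per-category token breakdown. The UsageRow shape
// is invented for illustration; category names follow the article's examples.

type Category = "coding" | "debugging" | "testing" | "refactoring";

interface UsageRow {
  category: Category;
  tokens: number;
}

function tokensByCategory(rows: UsageRow[]): Map<Category, number> {
  const totals = new Map<Category, number>();
  for (const row of rows) {
    // Accumulate token counts under each task label.
    totals.set(row.category, (totals.get(row.category) ?? 0) + row.tokens);
  }
  return totals;
}
```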

You can see which tasks the AI nails on the first try. You can also see where it burns through tokens on edit-test-fix loops.

CodeBurn parses JSONL session files and SQLite databases directly from disk. No API keys or network middleware required. Pricing data comes from LiteLLM, cached locally. The terminal UI is built with Ink, the same React-for-terminals framework Claude Code uses, complete with gradient charts and keyboard navigation. There's even a macOS menu bar widget via SwiftBar that shows today's cost at a glance.
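The JSONL half of that pipeline is easy to picture: read a session log line by line, look up a per-token rate, and sum. A minimal sketch, assuming a field schema and per-million-token prices that are invented here for illustration (real rates would come from LiteLLM's cached pricing data):

```typescript
// Hypothetical sketch of JSONL-based cost accounting. The event schema and
// the rates in PRICES are assumptions, not CodeBurn's actual format; real
// per-model pricing comes from LiteLLM's data.

interface SessionEvent {
  model: string;
  input_tokens: number;
  output_tokens: number;
}

// Illustrative USD-per-million-token rates (assumed, not official).
const PRICES: Record<string, { input: number; output: number }> = {
  "claude-sonnet-4": { input: 3, output: 15 },
};

function costOfSession(jsonl: string): number {
  let total = 0;
  for (const line of jsonl.split("\n")) {
    if (!line.trim()) continue; // skip blank lines
    const ev: SessionEvent = JSON.parse(line);
    const rate = PRICES[ev.model];
    if (!rate) continue; // unknown model: skip rather than guess
    total +=
      (ev.input_tokens * rate.input + ev.output_tokens * rate.output) / 1_000_000;
  }
  return total;
}

const sample = [
  '{"model":"claude-sonnet-4","input_tokens":1200,"output_tokens":400}',
  '{"model":"claude-sonnet-4","input_tokens":800,"output_tokens":200}',
].join("\n");

console.log(costOfSession(sample).toFixed(6)); // 0.015000
```

The same aggregation would apply to rows read out of a tool's SQLite database; only the reading step differs.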

But there are real limitations. Cursor's "Auto" mode doesn't expose which model it actually used, so CodeBurn estimates costs using Sonnet pricing and labels them accordingly. GitHub Copilot only logs output tokens in its session state, making cost tracking incomplete. And one Hacker News commenter reported that it doesn't work with Cursor's Agent mode at all. These aren't minor caveats if you're doing serious cost analysis.
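The Cursor "Auto" fallback is a pattern worth naming: when the log omits the model, price at an assumed rate and flag the result as an estimate rather than a fact. A sketch under invented names and rates (not CodeBurn's actual code):

```typescript
// Hypothetical sketch of the "unknown model" fallback. The Usage shape and
// all rates are assumptions; a real tool would pull pricing from LiteLLM.

interface Usage {
  model?: string; // absent when Cursor's "Auto" mode hides the model
  input_tokens: number;
  output_tokens: number;
}

interface Cost {
  usd: number;
  estimated: boolean; // true when the price was guessed, not known
}

// Assumed USD-per-token rates for illustration only.
const KNOWN: Record<string, { input: number; output: number }> = {
  "gpt-4.1": { input: 2 / 1e6, output: 8 / 1e6 },
};
const SONNET = { input: 3 / 1e6, output: 15 / 1e6 };

function priceUsage(u: Usage): Cost {
  const rate = u.model ? KNOWN[u.model] : undefined;
  if (rate) {
    return {
      usd: u.input_tokens * rate.input + u.output_tokens * rate.output,
      estimated: false,
    };
  }
  // Model hidden or unrecognized: assume Sonnet pricing, label it an estimate.
  return {
    usd: u.input_tokens * SONNET.input + u.output_tokens * SONNET.output,
    estimated: true,
  };
}
```

Surfacing the `estimated` flag in the UI is what keeps the guess honest.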

The plugin system means adding support for a new tool takes a single file. But local-only parsing carries a risk: as these tools change their storage formats, CodeBurn has to keep up. That HN complaint about Cursor Agent mode? That's exactly the kind of breakage that could become a pattern.
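A single-file plugin in this style might look something like the following; the interface and every name in it are invented to illustrate the idea, not CodeBurn's real plugin API:

```typescript
// Hypothetical plugin shape: each supported tool is one file that says where
// its session data lives on disk and how to turn raw file contents into usage
// records. All names here are illustrative assumptions.

interface UsageRecord {
  model: string;
  inputTokens: number;
  outputTokens: number;
}

interface ToolPlugin {
  name: string;
  sessionGlob: string;                // where the tool writes session files
  parse(raw: string): UsageRecord[];  // raw file contents -> usage records
}

// Example plugin for an imaginary assistant that logs JSONL.
const examplePlugin: ToolPlugin = {
  name: "example-assistant",
  sessionGlob: "~/.example/sessions/*.jsonl",
  parse: (raw) =>
    raw
      .split("\n")
      .filter((l) => l.trim())
      .map((l) => {
        const e = JSON.parse(l);
        return { model: e.model, inputTokens: e.input, outputTokens: e.output };
      }),
};
```

The upside of this design is that a format change in one tool only breaks one file; the downside, as the Cursor Agent mode complaint shows, is that someone still has to notice and fix it.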