Most AI coding agents hit a wall the second a terminal program asks for input. They can run shell commands fine, but interactive tools like vim, psql, or npm create? Dead end. tui-use, a new open-source tool from GitHub user onesuper, fixes that by spawning programs in a pseudo-terminal, taking text snapshots of the screen, and letting agents send keystrokes. Think BrowserUse, but for the terminal.
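The core trick is decades-old Unix plumbing. Here's a minimal sketch of the idea (not tui-use's actual code) using Python's standard-library `pty` facilities: spawn a program on a pseudo-terminal, write "keystrokes" to it, and read back the raw screen output. `cat` stands in for an interactive TUI program.

```python
import os
import select
import subprocess

# Allocate a pseudo-terminal pair: the child sees `slave` as its
# terminal; we drive it from `master`.
master, slave = os.openpty()

# `cat` is a stand-in for any interactive program (vim, psql, ...).
proc = subprocess.Popen(["cat"], stdin=slave, stdout=slave, stderr=slave)
os.close(slave)  # parent only needs the master side

os.write(master, b"hello\n")  # inject keystrokes

# Read whatever appeared on the "screen" (raw bytes, ANSI included).
ready, _, _ = select.select([master], [], [], 2.0)
output = os.read(master, 1024) if ready else b""
print(output)

proc.terminate()
proc.wait()
os.close(master)
```

Because the child genuinely believes it is attached to a terminal, full-screen programs draw themselves normally, which is exactly what a plain `subprocess` pipe can't provide.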

The tool runs a background daemon that uses a headless xterm emulator to handle ANSI escape sequences and cursor movement. Agents get clean plain-text snapshots of what's on screen, plus metadata like which menu item is highlighted. It integrates directly with Claude Code via a plugin and works with Cursor, OpenCode, Codex, and Gemini CLI. Install from npm and you're off.
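The emulator step is what makes the snapshots agent-readable: raw PTY output is a soup of escape sequences, not lines of text. As a toy illustration of what a headless terminal emulator does (tui-use wraps a real xterm emulator, which additionally handles scrolling, alternate screens, wide characters, and much more), here is a hand-rolled sketch that replays cursor-positioning sequences into a character grid and emits a plain-text snapshot:

```python
import re

# CSI sequences look like ESC [ <params> <final-byte>.
CSI = re.compile(r"\x1b\[([0-9;]*)([A-Za-z])")

def snapshot(stream: str, rows: int = 4, cols: int = 20) -> str:
    """Replay an ANSI stream into a rows x cols grid; return plain text."""
    grid = [[" "] * cols for _ in range(rows)]
    r = c = 0
    i = 0
    while i < len(stream):
        m = CSI.match(stream, i)
        if m:
            params, final = m.group(1), m.group(2)
            if final == "H":  # CUP: move cursor to row;col (1-based)
                parts = (params or "1;1").split(";")
                r = int(parts[0] or 1) - 1
                c = int(parts[1] or 1) - 1 if len(parts) > 1 else 0
            # SGR ('m', colors/bold) and others are ignored in this sketch.
            i = m.end()
            continue
        ch = stream[i]
        if ch == "\n":
            r, c = r + 1, 0
        elif ch == "\r":
            c = 0
        elif r < rows and c < cols:
            grid[r][c] = ch
            c += 1
        i += 1
    return "\n".join("".join(row).rstrip() for row in grid)

# A fake TUI menu: clear screen, print a title, highlight one item.
raw = "\x1b[2J\x1b[1;1HMenu\x1b[2;1H\x1b[7m> Install\x1b[0m\x1b[3;1H  Quit"
print(snapshot(raw))
```

The output is the kind of flat text an LLM can actually reason about; tui-use additionally surfaces metadata such as which item carries the highlight attribute, which this sketch simply discards.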

Reaction on Hacker News split two ways. Some developers questioned whether tui-use reinvents tmux, which has managed terminal sessions for years. Others pushed back, arguing that current LLMs genuinely can't handle TUI interfaces, and tools like this fill a real gap. The BrowserUse comparison is telling: browser automation for agents took off because it solved an accessibility problem that raw HTTP requests couldn't.

tui-use is a practical adapter. It lets existing agents work with programs built for humans, without new training or paradigms. For now, that's enough. The terminal moves slower than the web, and adapters like this might cover the gap until agents can handle any interface natively.