Running multiple AI agents in parallel turns the human into the bottleneck. Every concurrent Claude Code session wants tool-call approvals, reports failures, and asks for decisions — often simultaneously. Aperture Core, released March 14 by pseudonymous developer tomismeta, treats this as a scheduling problem.

Published to npm as @tomismeta/aperture-core, the open-source TypeScript engine sits between multiple agent event streams and a single human decision surface. Rather than routing each decision through another LLM call, it uses three deterministic layers — hard policy, adaptive utility, and queue planning — to sort events into what needs human attention now, what can wait, and what can stay ambient.
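A minimal sketch of how those three layers could compose into a single deterministic triage pass. Every identifier, weight, and threshold below is an illustrative assumption, not Aperture's actual API:

```typescript
// Hypothetical sketch of a three-layer triage pass. All names, weights,
// and thresholds are invented for illustration, not Aperture's real API.
type Tier = "ACTIVE_NOW" | "QUEUE" | "AMBIENT";

interface AgentEvent {
  category: string; // e.g. "destructiveBash"
  risk: number;     // 0..1, assigned by the adapter
  ageMs: number;    // time since the event was emitted
}

interface Policy {
  alwaysInterrupt: Set<string>; // hard policy: always surface
  autoApprove: Set<string>;     // hard policy: never surface
}

function triage(ev: AgentEvent, policy: Policy, attentionPressure: number): Tier {
  // Layer 1: hard policy. A deterministic lookup, no scoring involved.
  if (policy.alwaysInterrupt.has(ev.category)) return "ACTIVE_NOW";
  if (policy.autoApprove.has(ev.category)) return "AMBIENT";

  // Layer 2: adaptive utility. Risk, nudged upward as the event goes stale.
  const utility = ev.risk + Math.min(ev.ageMs / 60_000, 1) * 0.2;

  // Layer 3: queue planning. Higher operator pressure raises the bar for
  // interrupting now, so lower-value events wait or stay ambient.
  const threshold = 0.5 + attentionPressure * 0.4;
  if (utility >= threshold) return "ACTIVE_NOW";
  return utility >= 0.3 ? "QUEUE" : "AMBIENT";
}
```

Note that an LLM never appears in this path: the same event and the same pressure always produce the same tier, which is what makes the latency bounded.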

The primary target is Claude Code running <a href="/news/2026-03-14-modulus-parallel-ai-coding-agents">multiple parallel agents</a>. Events flow through an @aperture/claude-code adapter into a shared runtime and surface in a terminal UI. Operators configure interrupt behavior in .aperture/JUDGMENT.md, which defines named approval categories (lowRiskRead, fileWrite, destructiveBash), each with auto-approve thresholds, interrupt rules, and context-expansion requirements. A local MEMORY.md file persists behavioral signals such as response latency, deferral patterns, and disagreements; the engine uses those signals to sharpen its judgment over time without adding inference calls to the critical path.
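The article does not show the file's format, but a JUDGMENT.md along these lines would match the categories it describes. Every field name and rule below is a guess, not the project's actual schema:

```markdown
<!-- Hypothetical .aperture/JUDGMENT.md. Field names are illustrative;
     check the project's README for the real schema. -->
## lowRiskRead
- auto-approve: always
- interrupt: never

## fileWrite
- auto-approve: when the change touches only files the task already owns
- interrupt: batch into the queue
- context: show the diff before asking for approval

## destructiveBash
- auto-approve: never
- interrupt: immediately
- context: expand the full command and working directory
```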

The design is explicit about its OS lineage. The TUI's ACTIVE NOW, QUEUE, and AMBIENT states map directly to the running, ready, and suspended states of a classic process scheduler. Attention-pressure forecasting suppresses lower-value interrupts before cognitive overload sets in, the same load-shedding logic used in real-time systems. LLM calls are kept out of the hot path by design: determinism and bounded latency are non-negotiable for the scheduler itself. Storing all judgment state in human-readable markdown files rather than a database is, by tomismeta's own account, deliberate: configuration should stay legible and editable by the person it serves.
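Attention-pressure forecasting could be as simple as an exponentially weighted moving average of the recent interrupt rate. The estimator below is a guess at the idea, not the project's implementation; the class name, smoothing factor, and saturation point are all invented:

```typescript
// Hypothetical attention-pressure estimator: an exponentially weighted
// moving average of interrupts per minute, normalized to 0..1.
class PressureEstimator {
  private rate = 0; // smoothed interrupts per minute

  constructor(private alpha = 0.3, private saturation = 10) {}

  // Call once per minute with the interrupt count observed in that window.
  observe(interruptsThisMinute: number): void {
    this.rate = this.alpha * interruptsThisMinute + (1 - this.alpha) * this.rate;
  }

  // 0 = idle operator, 1 = saturated. A scheduler would feed this value
  // into its interrupt threshold to shed low-value interrupts under load.
  pressure(): number {
    return Math.min(this.rate / this.saturation, 1);
  }
}
```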

The monorepo includes adapter stubs for @aperture/codex and @aperture/paperclip. The core SDK is also embeddable via npm, exposing an event-in, frame-out loop for developers who want Aperture's attention allocation inside their own tooling. tomismeta's prior work includes a continuity plugin for OpenClaw, Peter Steinberger's open-source AI assistant; the plugin used hash-chained session logs and lifecycle hooks to resume agent sessions, a narrow version of the problem Aperture now addresses at the architectural level.
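An event-in, frame-out loop for embedders might look like the following. The class, method, and type names are invented for illustration and are not the published SDK surface; the routing rules here are deliberately trivial:

```typescript
// Hypothetical embedding sketch. The real @tomismeta/aperture-core API
// may differ; everything here is illustrative.
interface InboundEvent { agentId: string; kind: string; payload: unknown; }
interface Frame {
  activeNow: InboundEvent[];
  queued: InboundEvent[];
  ambient: InboundEvent[];
}

class AttentionLoop {
  private pending: InboundEvent[] = [];

  // Events in: adapters push raw agent events as they arrive.
  push(ev: InboundEvent): void {
    this.pending.push(ev);
  }

  // Frames out: each tick drains pending events into one renderable frame.
  tick(): Frame {
    const frame: Frame = { activeNow: [], queued: [], ambient: [] };
    for (const ev of this.pending) {
      if (ev.kind === "approval") frame.activeNow.push(ev);
      else if (ev.kind === "failure") frame.queued.push(ev);
      else frame.ambient.push(ev);
    }
    this.pending = [];
    return frame;
  }
}
```

A host UI would call tick() on its own render interval and draw the three buckets however it likes; the engine owns allocation, the embedder owns presentation.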

What Aperture argues, implicitly, is that the human-in-the-loop problem is not primarily a model problem. It's a scheduling and UI problem — one that already has decades of solved theory behind it. If that framing catches on, expect more agent tooling to start looking like operating systems. The project is at github.com/tomismeta/aperture.