A Show HN post this week introduced Drift-guard, an open-source tool built to catch visual regressions that AI coding agents quietly introduce into UI codebases. The tool works by comparing rendered components and design tokens against stored baselines, flagging deviations in CSS properties, spacing, typography, or layout before a pull request merges.

The problem it targets is specific: <a href="/news/2026-03-14-codelegate-keyboard-driven-agent-orchestrator-tui-for-mac-linux">AI agents like Cursor or GitHub Copilot's autonomous PR mode</a> optimize for functional outcomes. When fixing a broken layout or bumping color contrast to meet accessibility requirements, they can silently alter values — a padding unit here, a font-weight override there — that individually look harmless but compound into visible design drift over weeks of autonomous commits. Standard test suites don't catch this. A broken button state will fail a test; a button that's two pixels narrower and a shade lighter won't.
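The compounding effect described above can be made concrete with a small sketch. This is purely illustrative; the token names, values, and per-commit changes are invented, not taken from any real agent session:

```python
# Hypothetical illustration: each autonomous commit nudges a design token
# by an amount too small to fail any functional test, yet the changes
# accumulate into visible drift over time.

baseline = {"button.padding_px": 16, "button.font_weight": 600}

# A series of agent commits, each "harmless" in isolation.
commits = [
    {"button.padding_px": -1},    # tighten layout to fix an overflow
    {"button.padding_px": -1},    # another layout fix weeks later
    {"button.font_weight": -100}, # lighten weight while bumping contrast
]

current = dict(baseline)
for change in commits:
    for token, delta in change.items():
        current[token] += delta

# Net drift relative to the approved baseline.
drift = {t: current[t] - baseline[t] for t in baseline}
print(drift)  # → {'button.padding_px': -2, 'button.font_weight': -100}
```

No single commit here would trip a functional assertion, but the net result is a noticeably narrower, lighter button.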

Drift-guard sits in CI, running visual and token-level diffs against the last approved baseline. When a diff exceeds a configurable threshold, it blocks the merge and surfaces what changed. The approach is closer to screenshot regression testing tools like Percy or Chromatic than to static analysis — it cares about what the UI looks like, not just what the code says.
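A token-level gate of this kind can be sketched in a few lines. To be clear, this is not Drift-guard's actual API or diff algorithm; the token format, the count-based threshold, and the function names are assumptions for illustration:

```python
# Minimal sketch of a token-level CI gate in the spirit of Drift-guard
# (hypothetical API, not the tool's real implementation).

def token_diffs(baseline, current):
    """Yield (token, baseline_value, current_value) for every changed token."""
    for token in baseline.keys() | current.keys():
        if baseline.get(token) != current.get(token):
            yield token, baseline.get(token), current.get(token)

def gate(baseline, current, max_changed_tokens=0):
    """Return (ok, diffs); ok is False when drift exceeds the threshold."""
    diffs = list(token_diffs(baseline, current))
    return len(diffs) <= max_changed_tokens, diffs

# Example: one color token has drifted by a barely visible amount.
baseline = {"color.primary": "#1a73e8", "spacing.md_px": 16}
current  = {"color.primary": "#1b74e9", "spacing.md_px": 16}

ok, diffs = gate(baseline, current)
for token, old, new in diffs:
    print(f"drift: {token}: {old} -> {new}")
print("merge blocked" if not ok else "merge allowed")
```

In a real CI job the failing case would exit nonzero so the platform's required status check blocks the merge; the threshold here is a simple changed-token count, whereas a production tool would likely weight perceptual difference per property.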

The project appears to be a solo effort and is early-stage. There is no documentation yet on how baselines are versioned when an intentional design update lands, which is the hard problem for any visual regression tool: distinguishing drift from deliberate change. How Drift-guard answers that question will determine whether it scales beyond small codebases or remains a convenience script for individual developers.