MassiveScale.AI has released the Agentic Trust Framework (ATF), an open specification applying Zero Trust security principles to autonomous AI agents. Published in February 2026 in collaboration with the Cloud Security Alliance, the v0.1.0-draft addresses what the framework's author Josh Woodruff — a CSA Research Fellow and IANS Faculty member — describes as a fundamental gap: <a href="/news/2026-03-14-agentic-systems-security-crisis">existing security standards were designed for human users and static systems, not for agents that act autonomously, hold credentials, access sensitive data, and form complex inter-agent trust relationships</a>.
The specification organizes its controls around five governance elements. Identity management requires unforgeable per-agent credentials. Behavioral monitoring uses AI-driven anomaly detection. Data governance covers input validation and output guardrails. <a href="/news/2026-03-15-34-agent-claude-code-team-openclaw-alternative">Segmentation enforces hard boundaries between systems</a>. Incident response defines kill switches and recovery procedures. These aren't abstract categories — the spec maps each to concrete implementation requirements.
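To make the five elements concrete, here is a minimal sketch of how a team might track them as a deployment checklist. The element names follow the article's summary, but the class, method names, and structure are illustrative assumptions, not anything defined by the ATF spec itself.

```python
from dataclasses import dataclass, field

# The five ATF governance elements as summarized above.
# Identifiers are illustrative, not taken from the specification.
ELEMENTS = [
    "identity_management",    # unforgeable per-agent credentials
    "behavioral_monitoring",  # AI-driven anomaly detection
    "data_governance",        # input validation and output guardrails
    "segmentation",           # hard boundaries between systems
    "incident_response",      # kill switches and recovery procedures
]

@dataclass
class AgentControls:
    """Tracks which governance elements an agent deployment satisfies."""
    satisfied: set = field(default_factory=set)

    def attest(self, element: str) -> None:
        """Record that a governance element has been implemented."""
        if element not in ELEMENTS:
            raise ValueError(f"unknown governance element: {element}")
        self.satisfied.add(element)

    def gaps(self) -> list:
        """Elements still missing before the deployment covers all five."""
        return [e for e in ELEMENTS if e not in self.satisfied]

controls = AgentControls()
controls.attest("identity_management")
controls.attest("segmentation")
# controls.gaps() -> ['behavioral_monitoring', 'data_governance', 'incident_response']
```

A real implementation would attach evidence (credential configs, monitoring dashboards, runbooks) to each attestation rather than a bare flag; the sketch only shows the shape of the checklist.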
A central design choice is the four-level Agent Maturity Model (Intern, Junior, Senior, and Principal), which gates expanding autonomy behind demonstrated performance rather than granting it at deployment. Progression through levels requires passing five checks: performance metrics, adversarial security testing, measurable business value, a clean incident record, and explicit governance sign-off. Agents can also be demoted, making verification continuous rather than a one-time certification. The maturity levels align directly with AWS's Agentic AI Security Scoping Matrix (Scopes 1–4, published November 2025), and ATF maps its controls to OWASP's Agentic Top 10 risks, the NIST AI RMF, and NIST SP 800-207 for Zero Trust architecture, positioning the framework as a complement to existing standards rather than a replacement.
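The gating logic described above can be sketched in a few lines: promote only when all five checks pass, and demote on an incident. The check names mirror the article's list; everything else (function names, the one-level demotion rule) is an illustrative assumption, not the spec's actual mechanics.

```python
from enum import IntEnum

class Maturity(IntEnum):
    """The four ATF maturity levels, ordered by autonomy granted."""
    INTERN = 1
    JUNIOR = 2
    SENIOR = 3
    PRINCIPAL = 4

# The five promotion checks named in the framework; identifiers are illustrative.
CHECKS = (
    "performance_metrics",
    "adversarial_testing",
    "business_value",
    "clean_incident_record",
    "governance_signoff",
)

def evaluate_promotion(level: Maturity, results: dict) -> Maturity:
    """Advance one level only if every check passes; otherwise hold."""
    if level < Maturity.PRINCIPAL and all(results.get(c, False) for c in CHECKS):
        return Maturity(level + 1)
    return level

def demote_on_incident(level: Maturity) -> Maturity:
    """Continuous verification: assume an incident drops the agent one level."""
    return Maturity(max(level - 1, Maturity.INTERN))
```

For example, `evaluate_promotion(Maturity.JUNIOR, {c: True for c in CHECKS})` yields `Maturity.SENIOR`, while a single failed check (or a missing one) leaves the agent at its current level, which is what distinguishes this model from one-time certification.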
Early ecosystem traction gives ATF more practical grounding than most governance proposals at this stage. Microsoft's Agent Governance Toolkit — which independently covers all ten OWASP Agentic risks with over 6,100 tests across Python, TypeScript, and .NET — has formally proposed ATF alignment. Berlin AI Labs has constructed a 12-service reference implementation validating all five ATF elements with contract testing. The framework grew out of Woodruff's 2025 book "Agentic AI + Zero Trust: A Guide for Business Leaders," suggesting the primary audience is security and risk decision-makers rather than implementers. MassiveScale.AI appears to operate as a founder-led research and advisory entity rather than a funded product company, meaning ATF's long-term governance will depend on community uptake. The specification is Apache 2.0-licensed and available at agentictrustframework.ai, which also hosts a 30-question self-assessment tool and a Technical Component Catalog.