Meta's Product Security team has published details of a system that uses generative AI-powered codemods to automatically migrate Android code away from unsafe OS APIs. The initiative was described in a March 13 post on Meta's engineering blog by Pascal Hartig and discussed on the Meta Tech Podcast with Product Security engineers Alex and Tanu. It pairs two complementary approaches: building secure-by-default framework wrappers that make the safe API path the easiest path for developers, and using generative AI to retroactively migrate millions of lines of existing legacy code to those frameworks. The result is a pipeline that can propose, validate, and submit security patches across Meta's multi-app Android codebase, which serves billions of users, with little manual intervention from the engineers who own the affected code.
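The propose-validate-submit loop can be sketched roughly as follows. This is a toy illustration, not Meta's implementation: the `Runtime.exec`/`SafeExec.run` pattern pair is a hypothetical unsafe-to-safe migration, and the regex substitution stands in for the generative step, which in a real system would be an LLM prompted with the file and the target framework.

```python
import re

# Hypothetical unsafe call-site pattern and its secure-by-default
# replacement. Both names are illustrative; Meta's internal
# frameworks are not public API.
UNSAFE_CALL = re.compile(r"Runtime\.exec\((.*)\)")
SAFE_CALL = r"SafeExec.run(\1)"

def propose_patch(source: str) -> str:
    """Stand-in for the generative step: rewrite unsafe call sites.

    A real system would prompt an LLM; a regex substitution keeps
    this sketch self-contained and deterministic.
    """
    return UNSAFE_CALL.sub(SAFE_CALL, source)

def validate(original: str, patched: str) -> bool:
    """Cheap validation gates: the patch must change something, must
    remove every unsafe call site, and must still look well-formed
    (approximated here by a balanced-parentheses check)."""
    if patched == original or UNSAFE_CALL.search(patched):
        return False
    return patched.count("(") == patched.count(")")

def submit(path: str, patched: str) -> str:
    """Stand-in for packaging a diff for the code owner to review."""
    return f"[diff for {path}]\n{patched}"

source = "val p = Runtime.exec(cmd)"
patched = propose_patch(source)
if validate(source, patched):
    print(submit("App.kt", patched))
```

Patches that fail validation would simply never reach a code owner, which is what makes bulk migration tolerable: the expensive human step only sees candidates that survived the automated gates.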
A single class of vulnerability can replicate across hundreds of call sites in a sprawling codebase, making manual remediation slow and error-prone. Automating the migration step lets Meta address entire vulnerability classes in bulk rather than routing individual patches to individual code owners. A related post from December 2025, "How AI Is Transforming the Adoption of Secure-by-Default Mobile Frameworks," provides deeper technical grounding for the same initiative, suggesting the program has been in development for at least several months. Meta also released AutoPatchBench in April 2025, a benchmark for evaluating AI-powered security fix systems, putting a public stake in the ground for how these tools should be measured.
Google and Microsoft are running similar plays. Google DeepMind's CodeMender agent, launched in October 2025, uses Gemini models in an agentic loop combining static and dynamic analysis to proactively fix vulnerabilities in open-source projects, with a critique-LLM validation layer and mandatory human researcher sign-off before any patch is submitted. GitHub Copilot Autofix, generally available since August 2024, pairs CodeQL static analysis with GPT-4o and has cut median fix time from 1.5 hours to 28 minutes; its Security Campaigns feature extends this to remediation across up to 1,000 repositories simultaneously. Meta's distinguishing design choice is the pairing of secure-by-default frameworks with AI migration: constraining the API surface first so AI-generated code has less room to introduce new problems. That structure may partly address the concern, raised by a Hacker News commenter, of whether AI codemods can genuinely be called "secure-by-default" given AI's own potential for errors.
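The "constrain the API surface first" idea can be illustrated with a toy wrapper: if migrated code is only allowed to call the wrapper rather than the raw OS API, an entire mistake class (here, path traversal) is ruled out by construction, regardless of what the AI generates at the call site. The `SafeStorage` name and its policy are illustrative assumptions, not an API from Meta's posts.

```python
from pathlib import Path

class SafeStorage:
    """Toy secure-by-default wrapper: all reads are confined to a
    sandbox root, so a path-traversal bug in caller code cannot
    escape it. Illustrative only."""

    def __init__(self, root: str):
        self.root = Path(root).resolve()

    def read_text(self, relative: str) -> str:
        target = (self.root / relative).resolve()
        # The safe path is the only path: anything resolving outside
        # the sandbox is rejected before the raw OS API is touched.
        if self.root not in target.parents:
            raise ValueError(f"refusing to read outside sandbox: {relative}")
        return target.read_text()
```

Under this structure, an AI codemod's job shrinks from "write safe file-handling code" to "route existing call sites through `SafeStorage`", a far narrower task to generate and to validate.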
AI security funding jumped from $2.16 billion in 2024 to $6.34 billion in 2025, with automated remediation startups including Pixee, Endor Labs, Semgrep, and CodeAnt AI all attracting significant rounds. Across internal and commercial approaches alike, the unresolved question is the same: how much can you trust AI-generated patches without robust <a href="/news/2026-03-14-pi-autoresearch-autonomous-experiment-loop-llm-training-frontend-metrics">automated testing and human validation pipelines</a>? Meta's strategy of constraining the problem space through secure-by-default framework design before applying AI migration may be one of the more architecturally sound answers, though how deeply each AI-generated patch is validated in practice has not been fully detailed in public disclosures.