Meta has disclosed four previously unknown custom AI inference chips — the MTIA 300, 400, 450, and 500 — developed with Broadcom under its Meta Training and Inference Accelerator (MTIA) program. The announcement lays out a generational roadmap stretching from chips already in production to mass deployments planned for 2027, and for the first time makes an explicit performance claim against Nvidia.

The four chips divide into two workload categories. The MTIA 300, already in production, handles ranking and recommendation tasks using a chiplet design with RISC-V vector cores. The MTIA 400 extends this to generative AI inference and is entering datacenter deployment in 72-device rack configurations, with devices forming single scale-up domains via a switched backplane. The MTIA 450 doubles the 400's HBM bandwidth, and Meta claims its performance is "much higher than that of existing leading commercial products." The MTIA 500 adds a further 50% HBM bandwidth gain over the 450 and introduces an SoC chiplet for PCIe and scale-out NIC connectivity. Both the 450 and 500 are targeted for mass deployment in 2027.
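Meta disclosed only relative bandwidth figures, but the generational claims compose. A minimal sketch of the implied multipliers, normalized to the MTIA 400 as a baseline of 1× (absolute bandwidth numbers were not disclosed, so the table below is purely the relative chain stated above):

```python
# Relative HBM bandwidth implied by Meta's generational claims,
# normalized to the MTIA 400. Absolute figures are undisclosed;
# only the multipliers come from the announcement.
GEN_MULTIPLIERS = {
    "MTIA 400": 1.0,              # baseline generation
    "MTIA 450": 1.0 * 2.0,        # "doubles the 400's HBM bandwidth"
    "MTIA 500": 1.0 * 2.0 * 1.5,  # "a further 50% gain over the 450"
}

for chip, mult in GEN_MULTIPLIERS.items():
    print(f"{chip}: {mult:.1f}x the MTIA 400's HBM bandwidth")
```

The chain means the MTIA 500 lands at 3× the 400's HBM bandwidth — a useful sanity check, since the two 2027 parts are described only relative to each other.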

That competitive performance claim deserves scrutiny. Meta hasn't published benchmark methodology or identified the comparison chips by name. "Existing leading commercial products" presumably means the H100 or H200, but Nvidia's Blackwell hardware is now shipping. By the time the MTIA 450 reaches mass deployment in 2027, it will be competing against a successor generation, not the one it was benchmarked against. The claim may be accurate in absolute terms; whether it will still be accurate in context is a different question.

The more durable engineering story is the iteration cadence. Meta says it can ship a new chip roughly every six months — a pace made possible by the MTIA 400, 450, and 500 all sharing the same rack, chassis, and network infrastructure. New silicon drops in without rebuilding the datacenter. Broadcom has framed Meta's commitment as deploying "multiple gigawatts" of these chips — Broadcom's characterization, notably, not Meta's — which suggests the partnership's economics are already structured around that volume. That infrastructure portability, more than any single benchmark, is what makes the cadence claim credible.