Angela Lipps was watching her grandchildren at her Tennessee home on a July afternoon when US Marshals arrived with weapons drawn. She was 50 years old, had never been to North Dakota, and had no idea what was happening. A facial recognition algorithm had flagged her as the woman who used a fake military ID to steal tens of thousands of dollars from Fargo-area banks, and that was enough.
She spent 108 days in a county jail without bail before anyone checked the bank records — records that showed, in short order, that Lipps was more than 1,200 miles away every single time the crimes took place. By the time her attorney, Jay Greenwood, finally obtained them and secured her release on Christmas Eve 2025, she had lost her house, her car, and her dog.
Fargo police declined to pay for her trip home.
Local defence attorneys and a non-profit called The F5 Project eventually stepped in, but the broader picture is grim. Detectives hadn't simply relied on the algorithm's match — they'd doubled down on it, citing similarities in facial features, body type, and hairstyle in court documents as though that amounted to independent corroboration. It didn't. No detective contacted Lipps before the arrest. No court imposed bail. The algorithm's output, it appears, was treated as a substitute for an investigation.
The case draws obvious comparisons to those of Robert Williams, Nijeer Parks, and Porcha Woodruff — a pattern of wrongful arrests driven by face recognition that falls disproportionately on people of colour and those without the resources to mount an immediate legal challenge. What sets the Lipps case apart is how long the error persisted uncorrected, and how many institutional checkpoints simply didn't function.
There is still no federal law governing how police may use facial recognition outputs as probable cause. The proposed Facial Recognition and Biometric Technology Moratorium Act has stalled in Congress repeatedly. Without it, departments set their own standards — or none at all.
For the AI agent industry, this lands as more than a cautionary tale about a separate sector. The commercial case for agentic AI — systems that don't just advise but act — rests on a confidence in AI inference that the Lipps case directly undermines. When a false positive costs someone more than a hundred days of their life and everything they own, the argument for mandatory human verification before any irreversible action becomes very simple: someone needs to check the bank records first. That principle doesn't get easier to argue around as the stakes get higher.
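The verification principle is simple enough to sketch in code. The snippet below is a minimal illustration, not any real agent framework's API — `ActionRequest`, `execute_with_verification`, and the approval callback are all hypothetical names. The core assumption it encodes is the lesson of the Lipps case: the model's own confidence score never substitutes for an independent human check when the action can't be undone.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ActionRequest:
    """A proposed agent action awaiting review (hypothetical schema)."""
    description: str
    confidence: float   # the model's self-reported confidence, not ground truth
    irreversible: bool  # e.g. an arrest, a payment, a deletion

def execute_with_verification(
    request: ActionRequest,
    act: Callable[[], str],
    human_approves: Callable[[ActionRequest], bool],
) -> str:
    """Run `act` only after a human check for irreversible actions.

    Note that `request.confidence` is never consulted here: a 99%-confident
    match can still be a false positive, so high confidence alone must not
    bypass review.
    """
    if request.irreversible and not human_approves(request):
        return f"BLOCKED: {request.description} (awaiting independent corroboration)"
    return act()

# Usage: the reviewer plays the role of "checking the bank records" first.
req = ActionRequest("freeze suspect's account", confidence=0.99, irreversible=True)
result = execute_with_verification(
    req,
    act=lambda: "executed",
    human_approves=lambda r: False,  # corroboration not yet done → deny
)
```

Reversible, low-stakes actions can still flow through without friction; the gate only binds where the cost of a false positive is unrecoverable.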