Niantic Spatial has announced a partnership with Coco Robotics to deploy its Visual Positioning System (VPS) for navigating sidewalk delivery robots, with the system trained on more than 30 billion images crowdsourced from Pokémon Go players over nearly a decade. Rather than relying on GPS — which degrades significantly in dense urban canyons where tall buildings interfere with satellite signals — the VPS identifies a robot's precise location by recognizing nearby buildings and landmarks, achieving centimeter-level positioning accuracy. Niantic CEO John Hanke drew a direct line between the company's augmented reality roots and its new robotics ambitions, telling MIT Technology Review: "It turns out that getting Pikachu to realistically run around and getting Coco's robot to safely and accurately move through the world is actually the same problem."
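The core idea of localizing from known landmarks rather than GPS can be illustrated with a toy 2D version of the problem. This is a minimal sketch, not Niantic's method: a real VPS matches camera features against a 3D point cloud and solves for a full 6-DoF pose, but the underlying principle, recovering position by least-squares over observations of landmarks with known coordinates, is the same. The `localize` function, landmark positions, and ranges below are all invented for illustration.

```python
import numpy as np

def localize(landmarks, dists):
    """Least-squares 2D position from distances to known landmarks.

    Subtracting the first range equation from the others linearizes
    the system, which is then solved with ordinary least squares.
    """
    L = np.asarray(landmarks, dtype=float)
    d = np.asarray(dists, dtype=float)
    x1, y1 = L[0]
    A = 2.0 * (L[1:] - L[0])                 # rows: [2(xi - x1), 2(yi - y1)]
    b = (d[0] ** 2 - d[1:] ** 2
         + L[1:, 0] ** 2 + L[1:, 1] ** 2 - x1 ** 2 - y1 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# A robot at an unknown position measures its distance to four mapped landmarks.
landmarks = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_pos = np.array([3.0, 4.0])
dists = [np.linalg.norm(true_pos - np.array(lm)) for lm in landmarks]
print(localize(landmarks, dists))  # recovers roughly (3, 4)
```

With noiseless ranges the linear system is exact; with real, noisy observations the same least-squares machinery simply returns the best-fit position, which is why landmark-based methods degrade gracefully where GPS multipath does not.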

The engineering underpinning the VPS is essentially a planetary-scale photogrammetry pipeline. Technical observers on Hacker News noted that Niantic is running a modern variant of COLMAP, the well-known Structure-from-Motion framework, to convert image streams into aligned 3D point clouds — the same core technique used in academic and industrial computer vision, but deployed at a scale that demands significant infrastructure to match features against billions of stored reference points in near real time. Niantic's data collection accelerated in 2020 when Pokémon Go introduced an in-game AR scanning mechanic, incentivizing its player base, then numbering in the hundreds of millions, to scan real-world statues and PokéStops with their cameras in exchange for in-game rewards. The game still retains an estimated 50 million active users, keeping the data pipeline alive.
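The matching step at the heart of such a pipeline can be sketched in a few lines. This is an illustrative toy, not Niantic's implementation: `match_descriptor` and its parameters are invented, and a production system would use approximate nearest-neighbor indexes rather than brute force over billions of descriptors, but the nearest-neighbor search with a ratio test (the standard trick from SIFT-style matching) is the same primitive.

```python
import numpy as np

def match_descriptor(query, database, ratio=0.8):
    """Brute-force nearest-neighbor match with Lowe's ratio test.

    Accept a match only if the best candidate is clearly closer than
    the second best, which rejects ambiguous, repetitive features.
    """
    dists = np.linalg.norm(database - query, axis=1)
    order = np.argsort(dists)
    best, second = order[0], order[1]
    accepted = dists[best] < ratio * dists[second]
    return int(best), bool(accepted)

# Stand-in "map": 1,000 random 32-dim feature descriptors.
rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 32))
# A query descriptor that is a slightly noisy view of reference point 42.
query = db[42] + 0.01 * rng.normal(size=32)
print(match_descriptor(query, db))  # matches index 42 and passes the ratio test
```

At Niantic's scale the brute-force distance computation above is replaced by sharded approximate search, but the accept/reject logic that turns raw distances into reliable 2D-to-3D correspondences for pose estimation is conceptually unchanged.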

The partnership also establishes a feedback loop that mirrors strategies used by Waymo and Tesla in the autonomous vehicle space, and echoes Mapillary's crowdsourced street-level mapping work (now owned by Meta). Once Coco's VPS-equipped robots are deployed, they will continuously feed new street-level imagery back into Niantic's model — an arrangement Niantic describes as part of a longer-term effort to build a "living map" of the world. Technical commenters have flagged that data freshness, not data volume, is the harder unsolved problem: storefronts repaint, buildings are renovated, and advertising changes, all of which can cause point clouds to go stale and degrade localization accuracy. The continuous flow of fresh imagery from the robots is what keeps the map current, and the system viable.
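One plausible way to operationalize staleness detection, sketched here purely as an assumption (the `TileFreshness` class and its thresholds are invented, not anything Niantic has described), is to track localization quality per map region: when the fraction of feature matches that survive geometric verification (the inlier ratio) trends downward in a tile, the underlying point cloud has likely drifted from reality and should be prioritized for re-capture.

```python
class TileFreshness:
    """Flag map tiles whose localization quality is decaying.

    Keeps an exponential moving average (EMA) of the inlier ratio
    reported by robots localizing in each tile; a low EMA suggests
    the stored point cloud no longer matches the real scene.
    """

    def __init__(self, alpha=0.3, stale_below=0.3):
        self.alpha = alpha            # EMA smoothing factor
        self.stale_below = stale_below
        self.ema = {}                 # tile id -> smoothed inlier ratio

    def observe(self, tile, inlier_ratio):
        prev = self.ema.get(tile)
        self.ema[tile] = (inlier_ratio if prev is None
                          else self.alpha * inlier_ratio + (1 - self.alpha) * prev)

    def is_stale(self, tile):
        # Tiles with no recent observations are also treated as stale.
        return self.ema.get(tile, 0.0) < self.stale_below

tracker = TileFreshness()
for _ in range(5):
    tracker.observe("elm-st-block-3", 0.9)   # healthy localization
print(tracker.is_stale("elm-st-block-3"))    # False
for _ in range(10):
    tracker.observe("elm-st-block-3", 0.1)   # storefront repainted, say
print(tracker.is_stale("elm-st-block-3"))    # True
```

The design choice worth noting is that staleness is inferred from the localization loop itself, so the same robot traffic that consumes the map also tells the operator which parts of it need refreshing.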

Coverage highlighting that <a href="/news/2026-03-16-pokemon-go-players-niantic-30-billion-image-ai-dataset">players trained robots "unknowingly"</a> has drawn pushback from commenters who note that Pokémon Go did disclose that landmark scans were used to build 3D models of points of interest — though the downstream application of guiding delivery robots was never telegraphed. The harder issue is that the 3D world map Niantic has built is privately held and licensed commercially, prompting comparisons to OpenStreetMap and calls for the underlying spatial data to be made publicly accessible. With law enforcement interest in precise landmark-based geolocation also a plausible future use case, how Niantic governs access to its city-scale map will matter as much as the technology itself.