When Catherine Nakalembe needed an AI model to identify maize, beans, and cassava growing in western Kenya, she discovered that no existing system had ever learned to recognize them. The Africa program director at NASA Harvest spent two weeks dispatching volunteers across farms with GoPro cameras, collecting more than 5 million images from scratch — not because the technology didn't exist, but because it had been built, trained, and tested on European and North American crops.

That improvised data sprint encapsulates a problem now playing out across the Global South. AI models built by Google, Microsoft, Amazon, and their peers are routinely breaking down in the agricultural and environmental contexts where billions of people farm for a living. In Maharashtra, India, conservation startup Farmers for Forests tested a popular open-source model on drone footage of local tree cover. It missed more than half the trees. The model had been trained on North American forests; the species it knew simply weren't there. Researchers had to manually annotate 55,000 individual trees across 80 land parcels before they could build something that worked.
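The "missed more than half the trees" claim is, at bottom, a recall measurement against those 55,000 hand-labeled trees. A minimal sketch of that arithmetic, with an illustrative detection count (the 24,000 figure below is hypothetical, not from Farmers for Forests):

```python
def detection_recall(annotated: int, detected_true: int) -> float:
    """Fraction of manually annotated trees the model actually found."""
    if annotated <= 0:
        raise ValueError("need at least one annotated tree")
    return detected_true / annotated

# 55,000 trees were hand-annotated; suppose the off-the-shelf model,
# trained on North American forests, correctly finds only 24,000 of them.
recall = detection_recall(annotated=55_000, detected_true=24_000)
print(f"recall = {recall:.0%}")  # prints "recall = 44%" -- more than half missed
```

Anything below 50% recall means the majority of real trees on a parcel never enter the downstream canopy and carbon calculations at all.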

The failures aren't random. They follow the shape of the training data — which is to say, they follow money and geography. Western-built AI agents carry embedded assumptions that agricultural communities across Africa, South Asia, and Latin America don't fit: reliable internet, textual literacy, formal agronomic vocabulary, centralized decision-making. Rural smallholders rarely have any of these.

Digital Green's FarmerChat, which now reaches more than a million farmers across South Asia and Africa, took a deliberately different approach. Rather than deploying a general-purpose model, the organization trained small language models on 120,000 real queries from farmers — written in the informal, code-switched, vernacular way people actually ask questions — across 16 local languages. CEO Rikin Gandhi has little patience for the alternative. "If AI assumes literacy, connectivity, or decision authority," he says, "it only benefits better-resourced farmers first and widens inequality."

That warning looks increasingly pointed against the backdrop of what's commercially at stake. The global agri-tech market is forecast to nearly triple, to $84 billion, by 2034. Google, Microsoft, Amazon, IBM, and Alibaba all have active AI agriculture programs. Critics — including the International Panel of Experts on Sustainable Food Systems — argue that profit-driven AI is already narrowing agricultural focus to a handful of globally traded commodity crops, at the expense of local food systems. Some researchers go further, raising concerns about a new form of digital colonialism: big tech firms deploying agents in the Global South primarily to extract proprietary training data, then selling it back to the same communities as paid services.

The projects that are actually working tend to share an unglamorous feature: they were built locally, for local conditions, by people who understood both the technology and the terrain. Farmers for Forests didn't adapt an off-the-shelf model — they built a custom computer vision system on Meta's Detectron2, trained entirely on their own drone imagery, to map tree canopy and calculate carbon sequestration for Maharashtra farmers. In coastal Brazil, AI-generated voice alerts reach farmers via WhatsApp, sidestepping both literacy and connectivity barriers entirely.
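The canopy-to-carbon step such a system performs is conceptually simple arithmetic: mapped canopy area converts to biomass, biomass to carbon, carbon to CO2. A minimal sketch with illustrative constants — the biomass-per-hectare figure below is a placeholder, not Farmers for Forests' actual allometric model, though the 0.47 carbon fraction and the 44/12 molecular-weight ratio are standard defaults:

```python
# Illustrative placeholder: above-ground biomass (tonnes) per hectare of
# full canopy. Real allometric factors are species- and region-specific.
CANOPY_TO_BIOMASS_T_PER_HA = 120.0
CARBON_FRACTION = 0.47      # IPCC default: fraction of dry biomass that is carbon
CO2_PER_CARBON = 44 / 12    # molecular-weight ratio converting carbon mass to CO2

def co2_sequestered_tonnes(canopy_ha: float) -> float:
    """Estimate tonnes of CO2 stored in a mapped area of tree canopy."""
    biomass = canopy_ha * CANOPY_TO_BIOMASS_T_PER_HA
    return biomass * CARBON_FRACTION * CO2_PER_CARBON

# A 2.5-hectare parcel of mapped canopy under these assumptions:
print(round(co2_sequestered_tonnes(2.5), 1))  # prints 517.0
```

The hard part is not this formula but the canopy map feeding it — which is why a detector that misses half the trees silently halves the carbon estimate, and the payments that depend on it.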

None of these solutions are cheap or fast. That's partly the point. Deploying a Western AI agent into a Kenyan farm and expecting it to perform is not a failure of technical imagination — it's a policy choice, reflecting whose problems the technology is actually being built to solve.