Someone finally mapped the messy world of AI memory systems. The AI Knowledge Retrieval, Memory & RAG Systems Catalog, published on GitHub by machinarii, catalogs over 100 projects that help AI store and recall knowledge. It's organized bottom-up: vector databases at the base, then RAG frameworks, graph-based retrieval, and finally high-level cognition layers like memory management and offline consolidation. Every AI knowledge system solves a problem that biological memory already solved, just with different tradeoffs.
What makes this useful is the hardware compatibility breakdown. If you're building local-first AI infrastructure, knowing whether a tool uses Apple's Metal API natively or goes through PyTorch's MPS backend isn't academic. It determines what actually works on your machine. Metal means native GPU shaders, used by tools like Ollama and llama.cpp (fast, no PyTorch overhead). MPS means PyTorch routing work through Apple's Metal Performance Shaders framework (convenient, but with framework overhead). That distinction matters when you're deciding between running FAISS with CUDA on NVIDIA hardware or squeezing performance out of Apple Silicon.
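In practice, the backend question usually comes down to a runtime check. A minimal sketch of device selection for a PyTorch-based stack, using PyTorch's standard availability checks (the `pick_device` helper name is mine, not from the catalog):

```python
def pick_device() -> str:
    """Return the best available compute device as a PyTorch device string."""
    try:
        import torch
    except ImportError:
        # No PyTorch at all: native-Metal tools like llama.cpp don't need it,
        # but PyTorch-based stacks fall back to CPU.
        return "cpu"
    if torch.cuda.is_available():
        return "cuda"  # NVIDIA path, e.g. pairing with faiss-gpu
    mps = getattr(torch.backends, "mps", None)  # backend exists in PyTorch >= 1.12
    if mps is not None and mps.is_available():
        return "mps"   # Apple Silicon via the Metal Performance Shaders backend
    return "cpu"

print(pick_device())
```

Tools route tensors accordingly (`model.to(pick_device())`); the CPU fallback is what makes the same script portable across the three hardware situations the catalog tracks.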
That GPU compatibility data alone makes the catalog worth bookmarking.
The biological memory mapping is where it gets interesting. Vector databases handle associative recall, the way you remember someone's name from their face. Knowledge graphs handle relational reasoning, connecting dots across documents. "Artificial dreaming", the offline consolidation phase, compresses raw interaction logs into persistent semantic vectors during idle compute time. Frameworks like Letta (formerly MemGPT) and Mem0 implement this with background processes that replay experiences and prune redundant data. It's synaptic pruning for AI agents, running locally on whatever GPU acceleration you have available.
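The two operations described above can be sketched in a few lines of pure Python: associative recall as a cosine-similarity lookup, and consolidation as an offline pass that prunes near-duplicate memories. This is a toy illustration under my own assumptions, not how Letta or Mem0 actually implement it; real systems put an embedding model and a vector database underneath:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class ToyMemory:
    def __init__(self):
        self.items = []  # list of (vector, payload) pairs

    def store(self, vector, payload):
        self.items.append((vector, payload))

    def recall(self, query, k=1):
        """Associative recall: the k payloads whose vectors best match the query."""
        ranked = sorted(self.items, key=lambda it: cosine(query, it[0]), reverse=True)
        return [payload for _, payload in ranked[:k]]

    def consolidate(self, threshold=0.95):
        """Offline 'dreaming' pass: drop memories nearly identical to a kept one."""
        kept = []
        for vec, payload in self.items:
            if all(cosine(vec, kv) < threshold for kv, _ in kept):
                kept.append((vec, payload))
        self.items = kept

mem = ToyMemory()
mem.store([1.0, 0.0, 0.1], "user prefers dark mode")
mem.store([0.99, 0.01, 0.1], "user likes dark themes")  # near-duplicate
mem.store([0.0, 1.0, 0.0], "user's name is Ada")
mem.consolidate()
print(len(mem.items))                    # -> 2 (duplicate pruned)
print(mem.recall([0.9, 0.0, 0.2], k=1))  # -> ['user prefers dark mode']
```

The consolidation pass is the "compress raw logs into persistent vectors" idea in miniature: redundant traces get merged away, and what survives is what recall operates over.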
Cross-referenced against community lists like Awesome-Agent-Memory and Awesome-GraphRAG, the catalog reveals a maturing ecosystem. The infrastructure layer is crowded with vector databases (FAISS at 33K stars, Milvus at 43K, Qdrant at 22K). But the real action is upstream. Tools like Zep and Mem0 are building memory services that sit on top of these databases, solving specific problems in personalization and long-horizon task reasoning. Microsoft's GraphRAG, with 20K stars, shows how much interest there is in structured retrieval beyond simple similarity search.
The boundaries between these projects stay fuzzy. Memory management, retrieval augmentation, and knowledge graphs overlap in ways that make choosing a stack feel like guesswork. This catalog at least gives you a map.