Someone got llama.cpp running on an SGI Power Challenge from 1995. MIPS R8000 processor. IRIX operating system. A machine built for scientific computing back when the Apollo Guidance Computer's code was already ancient.

Getting it working wasn't simple. The original SGI MIPSpro compilers predate C++11, while llama.cpp depends on modern C++ features. Port a current GCC to IRIX, or patch the codebase extensively. Pick your poison. Then there's the ABI question: IRIX supports three binary interfaces, the old 32-bit o32, the full 64-bit n64, and the hybrid n32 (64-bit registers, 32-bit pointers). Pick wrong and nothing links. The MIPS R8000 also lacks modern SIMD instructions like AVX, so the build falls back to scalar code paths or requires hand-tuned MIPS IV assembly. Sketches of each of these pain points follow below.

This sounds like a stunt. It kind of is. But it proves something real about inference efficiency: optimization techniques can make large language models run on hardware that predates the deep learning era by nearly two decades. If a 30-year-old SGI box can churn through tokens, your edge device probably can too.
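To make the compiler problem concrete, here's a hypothetical before-and-after of the kind of patch involved. The snippet isn't from llama.cpp itself; it just illustrates rewriting C++11 idioms (brace initialization, range-for) into C++98 that a MIPSpro-era compiler can parse:

```cpp
#include <cstdio>
#include <vector>

// C++11 style, the kind of code a pre-2011 compiler rejects:
//
//   std::vector<float> logits = {0.1f, 0.7f, 0.2f};
//   for (float v : logits) std::printf("%f\n", v);

// C++98 rewrite: explicit push_back and an explicit iterator type.
void print_logits() {
    std::vector<float> logits;
    logits.push_back(0.1f);
    logits.push_back(0.7f);
    logits.push_back(0.2f);
    for (std::vector<float>::const_iterator it = logits.begin();
         it != logits.end(); ++it) {
        std::printf("%f\n", *it);
    }
}
```

Multiply that mechanical transformation across a codebase that also uses threads, atomics, and lambdas, and "patch extensively" starts to look like the harder half of "pick your poison."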
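The ABI mismatch can at least be caught at compile time rather than at link time. This sketch assumes GCC-style MIPS predefines (_MIPS_SIM, _ABIO32, _ABIN32, _ABI64, with the ABI selected via -mabi=32, -mabi=n32, or -mabi=64); MIPSpro exposes the same constants through sgidefs.h:

```cpp
#include <cstdio>

int main() {
    // _MIPS_SIM is set to whichever ABI the compiler is targeting.
#if defined(_ABI64) && (_MIPS_SIM == _ABI64)
    std::printf("n64: 64-bit registers and 64-bit pointers\n");
#elif defined(_ABIN32) && (_MIPS_SIM == _ABIN32)
    std::printf("n32: 64-bit registers, 32-bit pointers\n");
#elif defined(_ABIO32) && (_MIPS_SIM == _ABIO32)
    std::printf("o32: the original 32-bit ABI\n");
#else
    std::printf("not targeting a known MIPS ABI\n");
#endif
    return 0;
}
```

Every object file and library in the link has to agree on this answer, which is why one wrong flag means nothing links.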
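As for the SIMD gap, here is what a scalar fallback means in practice: a minimal sketch, not ggml's actual kernel, of the dot product that dominates inference time. On AVX hardware this loop runs 8-16 lanes wide; on the R8000 it runs one multiply-add at a time, though a compiler targeting MIPS IV can at least fold each iteration into the ISA's madd instruction:

```cpp
#include <cstddef>

// Scalar dot product: the inner loop of every matmul once SIMD is gone.
float vec_dot_scalar(std::size_t n, const float * x, const float * y) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < n; ++i) {
        sum += x[i] * y[i];  // one multiply-add per iteration, no vector lanes
    }
    return sum;
}
```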