ENTERPILOT just released GoModel, an open-source AI gateway written in Go that puts 10+ LLM providers behind a single OpenAI-compatible API. OpenAI, Anthropic, Gemini, xAI, Groq, OpenRouter, Ollama. They're all there. You swap models by changing a single parameter, not rewriting integration code.
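What "swap models by changing a single parameter" looks like in practice: against an OpenAI-compatible endpoint, only the `model` field of the request body changes. A minimal sketch below; the gateway URL and model identifiers are illustrative assumptions, not GoModel's documented defaults.

```python
import json

# Hypothetical gateway endpoint -- check GoModel's docs for the real port/path.
GATEWAY_URL = "http://localhost:8080/v1/chat/completions"

def chat_payload(model: str, prompt: str) -> str:
    """Build an OpenAI-style chat completion request body."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })

# Same endpoint, same schema -- only the model string differs.
openai_req = chat_payload("gpt-4o", "Summarize this log file.")
claude_req = chat_payload("claude-sonnet-4", "Summarize this log file.")

a, b = json.loads(openai_req), json.loads(claude_req)
assert {k: v for k, v in a.items() if k != "model"} == \
       {k: v for k, v in b.items() if k != "model"}
```

That single-field diff is the whole migration story: no per-provider SDKs, auth schemes, or response parsers in application code.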
Teams like HOCKS AI are already routing chat tasks to OpenRouter and vision tasks to Gemini through the gateway. Similarly, Imbue runs 100+ Claude agents in parallel for automated testing. One HN commenter raised a good question about how the two-layer caching system (exact-match and semantic) handles cache invalidation when upstream models update. The project includes an admin dashboard for tracking usage and costs across providers, which matters when you're juggling free and paid tiers.
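On the invalidation question: one common answer for the exact-match layer is to fold an upstream model-version string into the cache key, so a snapshot bump silently retires every stale entry. This is a hypothetical sketch of that pattern, not GoModel's actual implementation.

```python
import hashlib

class ExactMatchCache:
    """Exact-match response cache keyed on (model, version, prompt)."""

    def __init__(self):
        self._store = {}

    def _key(self, model: str, version: str, prompt: str) -> str:
        # Hashing the version into the key means an upstream model update
        # makes all old entries unreachable -- no explicit purge needed.
        raw = f"{model}:{version}:{prompt}".encode()
        return hashlib.sha256(raw).hexdigest()

    def get(self, model, version, prompt):
        return self._store.get(self._key(model, version, prompt))

    def put(self, model, version, prompt, response):
        self._store[self._key(model, version, prompt)] = response

cache = ExactMatchCache()
cache.put("gpt-4o", "2024-05-13", "hello", "hi there")
# Hit while the upstream snapshot is unchanged:
assert cache.get("gpt-4o", "2024-05-13", "hello") == "hi there"
# After a model update, the versioned key misses, forcing a fresh call:
assert cache.get("gpt-4o", "2024-08-06", "hello") is None
```

The semantic layer is harder: embedding-similarity matches would need the same version tagging, plus a policy for how stale a "close enough" hit is allowed to be.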
The 44x lighter claim versus LiteLLM is the attention grabber. LiteLLM is Python-based and widely used, and Go's compiled binaries and small memory footprint make the claim plausible for teams running lightweight infrastructure. If you want a small, fast gateway container that sips resources, GoModel is worth a look.
GoModel deploys via Docker with environment variables for provider credentials. It supports streaming, batch processing, file uploads, and passthrough routes for provider-specific features. The code is open source on GitHub. For teams watching their infra costs creep upward, a gateway this light could mean running one more service on an existing box instead of provisioning new hardware.
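A deployment along those lines might look like the compose file below. The image name, port, and environment variable names are assumptions for illustration; consult the project's README for the real configuration.

```yaml
# Hypothetical compose sketch -- image, port, and variable names assumed.
services:
  gomodel:
    image: enterpilot/gomodel:latest   # assumed image name
    ports:
      - "8080:8080"                    # assumed gateway port
    environment:
      OPENAI_API_KEY: ${OPENAI_API_KEY}
      ANTHROPIC_API_KEY: ${ANTHROPIC_API_KEY}
      GEMINI_API_KEY: ${GEMINI_API_KEY}
```

Credentials stay in the environment rather than in application code, so rotating a provider key never touches the services behind the gateway.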