A developer woke up to a €54,000 bill after someone exploited their unrestricted Firebase browser key to hammer the Gemini API for 13 hours straight. The developer, posting on the Google AI Developers Forum under the username zanbezi, had enabled Firebase AI Logic on an old project to add a simple AI feature. Their existing Firebase browser key had no API restrictions, so once that key shipped in the frontend, automated traffic flooded in overnight. Budget alerts were set at €80, but the notifications arrived hours late. By the time anyone reacted, costs had already hit €28,000. The final tally settled above €54,000 because of reporting delays.
Google Cloud Support denied the billing adjustment request. Their reasoning: the traffic came from valid project credentials, so it counts as legitimate usage. That's despite the usage being clearly anomalous, non-human traffic concentrated in a single overnight window. The developer is now left asking if there's any escalation path they missed.
For over a decade, Google told developers that API keys for services like Maps and Firebase aren't secrets. They're embedded in frontends, shipped in client-side code. That was fine when a leaked Maps key just let someone display a map. Now that same key can call Gemini, and each call costs real money. Truffle Security covered this shift in detail: the introduction of expensive LLM inference turned formerly benign browser keys into high-value targets.
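The practical defender's move after this shift is to audit every key in the project for the two restriction types Google offers: API restrictions (which services the key may call) and application restrictions (which referrers, IPs or app bundles may present it). The script below is an added example, not code from the forum post or from Google. It parses the JSON that `gcloud services api-keys list --format=json` emits, using the field names of the API Keys API v2 `Key` resource (`restrictions.apiTargets`, `restrictions.browserKeyRestrictions.allowedReferrers`); those names, the constructed sample keys, and the list of metered LLM service names are assumptions to check against the current API reference.

```python
"""Flag Google Cloud API keys that carry no, or too broad, restrictions.

Added illustration; not code from the forum post or from Google. Field names
follow the API Keys API v2 `Key` resource as assumed in the lead-in.
"""
import json

# Constructed sample output of `gcloud services api-keys list --format=json`
# for a hypothetical project; the key names and IDs are made up.
SAMPLE_KEYS_JSON = json.dumps([
    {   # the classic auto-created Firebase browser key: no targets, no referrers
        "name": "projects/123456789/locations/global/keys/11111111-aaaa",
        "displayName": "Browser key (auto created by Firebase)",
        "restrictions": {},
    },
    {   # referrer-locked, but still allowed to call any API enabled on the project
        "name": "projects/123456789/locations/global/keys/22222222-bbbb",
        "displayName": "web-app-key",
        "restrictions": {
            "browserKeyRestrictions": {"allowedReferrers": ["https://app.example.com/*"]},
        },
    },
    {   # locked to a single, cheap service and to a referrer
        "name": "projects/123456789/locations/global/keys/33333333-cccc",
        "displayName": "maps-only-key",
        "restrictions": {
            "apiTargets": [{"service": "maps-backend.googleapis.com"}],
            "browserKeyRestrictions": {"allowedReferrers": ["https://app.example.com/*"]},
        },
    },
])

# Services billed per token; a key that can reach any of them is high-value.
# Service names are assumptions for illustration; confirm them in the console.
EXPENSIVE_SERVICES = {
    "generativelanguage.googleapis.com",   # Gemini Developer API
    "firebasevertexai.googleapis.com",     # Firebase AI Logic
    "aiplatform.googleapis.com",           # Vertex AI
}


def key_findings(key: dict) -> list[str]:
    """Return human-readable problems found on one key (empty list = fine)."""
    r = key.get("restrictions") or {}
    targets = {t["service"] for t in r.get("apiTargets", [])}
    referrers = (r.get("browserKeyRestrictions") or {}).get("allowedReferrers", [])
    ips = (r.get("serverKeyRestrictions") or {}).get("allowedIps", [])
    has_app_restriction = bool(referrers or ips
                               or r.get("androidKeyRestrictions")
                               or r.get("iosKeyRestrictions"))
    findings = []
    if not targets:
        findings.append("no API restriction: key can call every API enabled on the project")
    elif targets & EXPENSIVE_SERVICES:
        findings.append("key may call metered LLM APIs: "
                        + ", ".join(sorted(targets & EXPENSIVE_SERVICES)))
    if not has_app_restriction:
        findings.append("no application restriction: usable from any origin")
    return findings


def audit(keys_json: str) -> dict[str, list[str]]:
    """Map each key's displayName to its findings, omitting clean keys."""
    report = {}
    for key in json.loads(keys_json):
        problems = key_findings(key)
        if problems:
            report[key["displayName"]] = problems
    return report


if __name__ == "__main__":
    for name, problems in audit(SAMPLE_KEYS_JSON).items():
        print(name)
        for p in problems:
            print("  -", p)
```

Run against real output by piping `gcloud services api-keys list --format=json` into `audit`. The fix for a flagged key is `gcloud services api-keys update KEY --api-target=service=...` plus `--allowed-referrers=...`, but note the limit of the second control: a referrer restriction only filters what the `Referer` header claims, and a script can set that header to anything, so the API restriction is the one that actually keeps a shipped browser key away from metered endpoints. A key that must reach Gemini from a browser should sit behind Firebase App Check or a server-side proxy rather than in the page source.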
And Google's billing safeguards didn't help here. There are no hard spending caps. Budget alerts are notifications, not circuit breakers. Compare that to OpenAI and Anthropic, both of which let you set hard limits that suspend API access when you hit a threshold. Azure goes further, letting you tie budgets to resource groups that shut down services entirely. Google's approach assumes you'll see the alert and react in time. At LLM inference speeds, that assumption doesn't hold.
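The closest thing to a circuit breaker on Google Cloud is something you build yourself: route budget notifications to Pub/Sub and have a function detach the project from its billing account when spend reaches the cap. The code below is an added example of that pattern, not code from the forum post or from Google. The message shape follows Google's documented programmatic budget notifications (a Pub/Sub JSON body with `costAmount`, `budgetAmount` and `alertThresholdExceeded`); the act of cutting billing is injected as a callback so the logic runs offline, and in production that callback would call the Cloud Billing API (`projects.updateBillingInfo` with an empty `billingAccountName`). The sample events, the project name and the exact field and method names are assumptions to verify against current docs.

```python
"""Budget-alert circuit breaker for a Google Cloud project.

Added illustration of a self-built hard cap; not code from the forum post or
from Google. Message fields and the billing-API call are assumptions noted
in the lead-in.
"""
import base64
import json
from typing import Callable, Optional

# Hard cap as a multiple of the budget; 1.0 means "stop at the budget amount".
CAP_RATIO = 1.0


def _event(cost: float, budget: float, threshold: Optional[float]) -> dict:
    """Build a constructed Pub/Sub event: data is base64 of the budget JSON."""
    payload = {
        "budgetDisplayName": "gemini-monthly",
        "costAmount": cost,
        "budgetAmount": budget,
        "budgetAmountType": "SPECIFIED_AMOUNT",
        "currencyCode": "EUR",
        "costIntervalStart": "2025-01-01T00:00:00Z",
    }
    if threshold is not None:           # only present when a threshold was crossed
        payload["alertThresholdExceeded"] = threshold
    return {"data": base64.b64encode(json.dumps(payload).encode()).decode()}


# Constructed sample events covering the three states the handler must distinguish.
SAMPLE_EVENTS = {
    "quiet":  _event(cost=12.5, budget=80.0, threshold=None),  # periodic update, under budget
    "warn":   _event(cost=45.0, budget=80.0, threshold=0.5),   # 50% threshold crossed
    "breach": _event(cost=96.0, budget=80.0, threshold=1.0),   # past the full budget
}


def decode_budget_message(event: dict) -> dict:
    """Decode the Pub/Sub event into the budget JSON dict."""
    return json.loads(base64.b64decode(event["data"]).decode())


def should_cut_billing(msg: dict, cap_ratio: float = CAP_RATIO) -> bool:
    """True when spend has reached cap_ratio * budget.

    Uses costAmount directly rather than alertThresholdExceeded, because the
    threshold field is absent on periodic updates and only reflects the
    configured alert steps, not the actual spend.
    """
    return msg["costAmount"] >= cap_ratio * msg["budgetAmount"]


def handle_budget_event(event: dict, disable_billing: Callable[[str], None],
                        project: str = "projects/example-project") -> str:
    """Entry point shaped like a Pub/Sub-triggered Cloud Function.

    `disable_billing` is the injected action (hypothetical); returns a short
    action string so the decision is observable without network access.
    """
    msg = decode_budget_message(event)
    if should_cut_billing(msg):
        disable_billing(project)   # the circuit breaker: detach billing
        return (f"billing disabled: {msg['costAmount']} >= "
                f"{msg['budgetAmount']} {msg['currencyCode']}")
    return f"no action: {msg['costAmount']} of {msg['budgetAmount']} {msg['currencyCode']}"


if __name__ == "__main__":
    calls = []
    for name, ev in SAMPLE_EVENTS.items():
        print(name, "->", handle_budget_event(ev, calls.append))
    print("disable_billing invoked", len(calls), "time(s)")
```

Two caveats follow directly from the incident above. First, detaching billing stops every metered service in the project, not just the AI endpoint, so this is a last resort rather than a fine-grained quota. Second, the breaker is only as fast as the notification feeding it: the post reports alerts arriving hours late, and spend that has already been reported is already owed, so the breaker bounds the damage but does not replace restricting the key in the first place.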