Google hit a wall with EU regulators over Gemini's photo scanning.
European authorities objected to how the AI model processes user photos. The concern is twofold: whether Google obtained valid consent, and whether cloud-based processing was necessary when on-device alternatives might have sufficed.
According to Hacker News commenters familiar with the feature, Gemini offers an opt-in function called "Personal Intelligence" that handles private data processing. Users must enable it themselves; it is off by default.
But opt-in alone may not satisfy European regulators.
Under GDPR Article 9, biometric data processed to uniquely identify a person, which facial recognition on photos produces, counts as "special category" data requiring explicit consent. Article 7, read with the Article 4(11) definition, demands that consent be "freely given, specific, informed and unambiguous." Regulators are scrutinizing whether Google's consent mechanism actually meets these standards.
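To make those requirements concrete, here is a minimal sketch of how a consent record might be validated before any biometric processing runs. All field and function names are hypothetical illustrations, not Google's actual API; the checks simply mirror the GDPR criteria listed above.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str                      # consent must be specific to one purpose
    explicit: bool                    # an affirmative act, not a pre-ticked box
    informed: bool                    # user saw a plain-language notice
    granted_at: Optional[datetime]
    withdrawn_at: Optional[datetime] = None

def consent_is_valid(record: ConsentRecord, purpose: str) -> bool:
    """Check the GDPR-style criteria: specific to this purpose,
    explicit, informed, actually granted, and not withdrawn."""
    return (
        record.purpose == purpose
        and record.explicit
        and record.informed
        and record.granted_at is not None
        and record.withdrawn_at is None
    )

record = ConsentRecord("u1", "photo_face_grouping", True, True, datetime(2025, 1, 1))
print(consent_is_valid(record, "photo_face_grouping"))  # consent covers this purpose
print(consent_is_valid(record, "ad_targeting"))         # not specific to this purpose
```

The point of the purpose check is the "specific" prong: consent granted for face grouping cannot be reused for an unrelated purpose.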
The EU AI Act piles on. Systems processing biometric data for identification fall into "high-risk" territory. That means strict transparency requirements and human oversight. If Gemini's photo analysis qualifies, Google faces a compliance burden beyond a simple consent dialog. National data protection authorities, whose enforcement the EDPB coordinates across EU member states, could force Google to redesign the feature or block it entirely in European markets.
Cloud-based AI needs your data to work well. Local processing avoids the privacy headache but caps what the AI can do. Every company building agent systems faces this tension.
Google might have to strip Gemini's photo features down to local-only processing in Europe. Or build a consent flow that survives regulatory scrutiny. Either path means rethinking how "Personal Intelligence" actually works.