Google built Gemini to know you. That means scanning your photos in the cloud. The EU looked at this and said no.

Under GDPR, companies need explicit consent before processing personal data for purposes beyond what users signed up for. Google scanning your photos to train AI models? That's a new purpose. Users didn't agree to that when they uploaded vacation pictures. Complying would mean asking for that permission explicitly, or keeping the processing on-device, where GDPR has less reach. Atlassian recently confirmed it will begin training AI on customer data starting August 2026, illustrating the industry-wide push to use collected data for model training.
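The purpose-limitation idea is simple enough to sketch in code. This is a hypothetical illustration, not any real Google or Atlassian API: each processing purpose must be checked against what the user actually opted into, and a new purpose like model training fails the check until fresh consent is collected.

```python
# Hypothetical sketch of GDPR-style purpose limitation as a consent gate.
# The User class and purpose strings are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class User:
    # Purposes the user explicitly agreed to at signup.
    consented_purposes: set = field(default_factory=set)


def may_process(user: User, purpose: str) -> bool:
    """Allow processing only for purposes the user opted into."""
    return purpose in user.consented_purposes


# Signed up for photo storage and sharing; never asked about AI training.
user = User(consented_purposes={"photo_storage", "sharing"})

print(may_process(user, "photo_storage"))   # covered by signup
print(may_process(user, "model_training"))  # new purpose: needs fresh consent
```

The point of the gate is that "we already have the data" is irrelevant; what matters is whether this use of it was agreed to.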

The Hacker News discussion surfaced some odd gaps. According to commenters, Gemini scans your photo library but can't touch Gmail attachments. They also pointed out the obvious: if you want an AI that knows you, it needs your data. Either that happens on your device, or it happens on someone else's server.

This is where Google and Apple diverge. Google pulls your data into centralized servers for analysis and model training.

Apple takes a different path. Sensitive data stays on your device, processed locally by the Neural Engine. Heavier tasks go to Private Cloud Compute, servers Apple says don't store your data and run publicly verifiable code.
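The split described above amounts to a routing decision. Here's a minimal sketch of that idea; the budget threshold and labels are assumptions for illustration, not Apple's actual implementation:

```python
# Illustrative sketch of an on-device / private-cloud routing policy.
# "task_complexity" and the budget value are hypothetical stand-ins.

def route_request(task_complexity: int, on_device_budget: int = 10) -> str:
    """Keep tasks within the local compute budget on-device;
    escalate only heavier tasks to a stateless cloud endpoint."""
    if task_complexity <= on_device_budget:
        return "on_device"      # data never leaves the phone
    return "private_cloud"      # processed remotely, claimed not retained


print(route_request(3))    # small task stays local
print(route_request(50))   # heavy task escalates to the cloud
```

The privacy argument hangs on the default: data only leaves the device when local compute genuinely can't handle the task, rather than being centralized up front.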

The Show HN post caught flak for being a blog post instead of an actual project demo. Fair enough. But the underlying question is simple: can you build useful AI agents without massive data collection? The EU is forcing that conversation. Bernie Sanders has criticized the industry for building a surveillance state; meanwhile, the technical debate over where data gets processed continues. Google doesn't have a good answer yet.