A developer has released an app called Origins that claims to identify a user's ancestral roots from a selfie. The project appeared on Hacker News this week, where it drew 1 point and a single comment, which the community flagged [dead]. Nobody engaged.

The app is hosted on Google Sites and sits behind a Google sign-in wall. No one outside the developer can see how it works. There is no model card, no description of training data, and no explanation of how the system maps facial features to ancestry predictions.

That opacity matters for a specific reason. Facial appearance and genetic ancestry are not the same thing. People with identical ancestry can look quite different; people who look similar can have entirely different genetic backgrounds. Visible phenotype is a noisy proxy for ancestry, and the relationship gets noisier in populations with complex admixture histories, which is most of the world. Origins offers no published methodology, peer review, or external validation to suggest it accounts for any of this. The app discloses nothing about how it handles the data it collects, either <a href="/news/2026-03-14-ai-chatbots-health-records-hipaa">a common challenge as AI systems access increasingly sensitive data</a>. There is simply no way to check.

This is not to say the app is definitely wrong or harmful; it is simply impossible to evaluate from the outside. What is available amounts to a developer-built project, a sign-in gate, and a dead Hacker News post. Until the developer publishes documentation or opens up access, Origins is a black box making claims that warrant scrutiny it has not yet invited.