The most interesting thing about What's That? isn't that it can identify a Terracotta Warrior or narrate a plate of street food in seconds. It's that the AI doing all of that is completely invisible to the person using it.

That invisibility is a design choice. Cagkan Acarbay, a solo developer working under Cha Labs SIA, built the iOS app around a deliberate premise: the user never writes a prompt, never picks a model, never thinks about the pipeline. They point a phone at something interesting, take a photo, and within ten seconds an audio narrative is playing. The interface is a camera. The product is a story.

It's a working example of a shift playing out across the consumer LLM space — tools arriving not as chatbots or copilots, but as invisible infrastructure underneath something that looks like a normal app. Acarbay's pipeline chains three AI subsystems: a vision layer that identifies the subject, a language model that generates a style-adapted narrative, and a text-to-speech stage that reads it aloud. None of that machinery surfaces to the user. The specific models and providers behind the ten-second latency target aren't disclosed.
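The three-stage chain described above can be sketched in a few lines. To be clear, this is a hypothetical illustration, not Acarbay's code: the app's actual models and providers are undisclosed, so every function here is a labeled stand-in, and the pipeline shape (photo in, audio out, nothing else exposed) is the only part drawn from the article.

```python
# Hypothetical sketch of a vision -> language model -> text-to-speech chain.
# All three subsystems are stand-ins; the real app's internals are undisclosed.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Story:
    subject: str    # what the vision layer identified
    narrative: str  # style-adapted text from the language model
    audio: bytes    # synthesized speech

def run_pipeline(
    photo: bytes,
    identify: Callable[[bytes], str],  # vision layer (stand-in)
    narrate: Callable[[str], str],     # language model (stand-in)
    speak: Callable[[str], bytes],     # text-to-speech (stand-in)
) -> Story:
    """Chain the three subsystems; the user only sees photo in, audio out."""
    subject = identify(photo)
    narrative = narrate(subject)
    return Story(subject, narrative, speak(narrative))

# Toy stand-ins so the sketch runs end to end.
story = run_pipeline(
    photo=b"\x89PNG...",
    identify=lambda img: "Terracotta Warrior",
    narrate=lambda s: f"A short narrative about the {s}.",
    speak=lambda text: text.encode("utf-8"),  # placeholder "audio" bytes
)
```

The design point the article makes is visible in the signature: the caller hands over a photo and gets back a finished story, with no prompt, model choice, or intermediate text surfaced.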

Personalization is the app's secondary pitch. Before heading out, users configure preferences across four narrative modes: historical context, human stories, design and craftsmanship, sensory immersion. Those settings shape every story the app generates — the same Terracotta Warrior can be framed as a study in ancient military logistics or a portrait of the craftsmen who gave each figure an individual face, depending on what the user asked for at setup. Acarbay says the app also refines its model of a user's curiosity over time, making future stories more targeted. Identified subjects accumulate in a personal gallery, functioning as a visual journal of encounters.
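One plausible way preset modes like these could shape generation is by composing the user's pre-trip settings into the narration prompt. The sketch below is an assumption-laden illustration: the four mode names come from the article, but the directive wording, the prompt format, and the `build_prompt` helper are all invented for this example.

```python
# Hypothetical sketch: turning pre-trip narrative-mode settings into a prompt.
# The four modes are from the article; everything else here is an assumption.
NARRATIVE_MODES = {
    "historical": "emphasize historical context",
    "human": "center the human stories behind the subject",
    "craft": "focus on design and craftsmanship details",
    "sensory": "use sensory, immersive description",
}

def build_prompt(subject: str, enabled_modes: list[str]) -> str:
    """Compose one narration prompt from the user's saved preferences."""
    directives = [NARRATIVE_MODES[m] for m in enabled_modes if m in NARRATIVE_MODES]
    style = "; ".join(directives) or "give a balanced overview"
    return f"Narrate the {subject} for a traveler. Style: {style}."

# The same subject, framed two different ways by the user's setup choices.
military = build_prompt("Terracotta Warrior", ["historical"])
artisan = build_prompt("Terracotta Warrior", ["human", "craft"])
```

This mirrors the behavior the article describes: identical input, divergent stories, with the divergence decided once at setup rather than per photo.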

What's That? arrives in a travel category already crowded with AI-powered tools, from itinerary generators to real-time translation layers. Its distinction is less about the underlying capability — multimodal recognition and contextual narration are increasingly table stakes — and more about the interface decision to keep the AI entirely out of sight. Whether ambient intelligence beats visible AI in the travel vertical is still an open question, but Acarbay is making a clear bet on which side tourists will prefer.