Imagine trying to stop a weapons-deployment AI by adding a markdown file instructing it to "feel sad." That's the hypothetical software engineer Onat Mercan reaches for in a March 14 opinion post — and it's a more precise diagnostic of the PERSONALITY.md trend than it first appears.
Mercan's argument is simple: developers who write prompts like "You are a senior software engineer" are not changing the nature of an LLM. They're prepending text to a context window. A model's actual capabilities, reasoning, and dispositions are fixed at training time. Changing them requires additional training, RLHF, or fine-tuning — not a markdown file checked into the repo. What the field calls a model's "personality" is, in his framing, a block of text being relayed between a user's machine and a shared GPU cluster for the duration of a session, nothing more.
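Mercan's mechanical point can be made concrete with a minimal sketch. Assuming an OpenAI-style chat payload (a `messages` list with `system` and `user` roles — a common convention, used here purely for illustration), a hypothetical `build_request` helper shows that a PERSONALITY.md file is nothing but a string spliced into the request body; no weights, capabilities, or dispositions change:

```python
from pathlib import Path

def build_request(persona_path: str, user_message: str,
                  model: str = "some-model") -> dict:
    """Assemble a chat request (illustrative payload shape).

    The 'personality' is just the file's text prepended as a system
    message -- plain data in the request, not a change to the model.
    """
    persona = Path(persona_path).read_text(encoding="utf-8")
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": persona},  # PERSONALITY.md, verbatim
            {"role": "user", "content": user_message},
        ],
    }
```

Whatever the file says — "you are a senior software engineer," "feel sad" — it arrives at the GPU cluster as tokens in the same context window as the user's question, which is the entirety of its effect.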
He coins a term for the broader pattern: "Artificial Artificial Intelligence." The claim isn't just that personality files don't work as advertised — it's that the entire discourse around agent identity reflects a <a href="/news/2026-03-14-beej-hall-ai-code-authorship">category error</a>. Practitioners mistake context for cognition, surface mimicry for moral or reasoning capability.
The sharpest line in the piece isn't about prompt engineering at all. Mercan notes that while Hacker News and AI Twitter spin in a closed loop of doom and hype, most actual users just ask ChatGPT questions they once would have Googled. That's not a new observation, but it lands differently when aimed at agent-framework builders specifically — the people most invested in treating prompt conventions as architectural decisions.
The piece is short, deliberately light on technical nuance, and built for rhetorical impact rather than comprehensiveness. That's fine — it makes the core case cleanly. The harder question it doesn't address is whether context-level behavioral shaping has measurable engineering value even if it doesn't alter underlying model nature. Plenty of practitioners would argue it does. But Mercan isn't wrong that the field has been <a href="/news/2026-03-14-ai-did-not-simplify-software-engineering">sloppy about the distinction</a>, and the PERSONALITY.md convention is a reasonable place to start demanding more rigor.