A developer who goes by kapitanluffy published a short document on GitHub this month that has been circulating in engineering channels: a four-principle framework for teams that have spent the past year leaning on AI coding agents and are starting to reckon with what they've built.
The document is called The Vibe Principles, and its central claim is uncomfortable. AI has made writing code nearly free, but it has done nothing to reduce the cost of figuring out what to build, why to build it, or how to structure it so the system doesn't become a liability in six months. Teams that have treated agent-assisted generation as a shortcut for all three are now discovering the difference the hard way.
The framework spells out the acronym VIBE — Value over Velocity, Intent before Implementation, Build the Right Foundations, Evolve the System. The first two principles target the same upstream failure: features get built because generating them costs almost nothing, not because anyone established they should exist. Prompting a feature into existence is being mistaken for understanding why it should exist. The second pair addresses what follows. AI agents don't fix broken abstractions unprompted — they scaffold on top of them. And a development culture permanently tilted toward new features leaves existing systems progressively weaker.
The document credits a tweet by developer Dax Raad as a catalyst. Raad had put plainly what many engineers were quietly observing: AI has collapsed the cost of code generation while leaving the cost of judgment entirely unchanged. The Vibe Principles extend that observation into a working framework, naming the specific failure modes that result from confusing the two.
Not everyone finds the framing novel. Some engineers have pointed out that the four principles amount to conventional software discipline restated for an AI context, and that the real culprit is poor product management rather than AI agents behaving as designed. The agents do what teams tell them to do, goes the argument — if those teams are building the wrong things faster, that's a prioritisation problem, not an AI one.
Kapitanluffy flagged one piece of context in the repository: the document itself was written with AI assistance. It's a deliberate choice — a small acknowledgement that the framework isn't a rejection of AI-assisted development, but an argument for using it without outsourcing the parts that still require a human to think.