A developer named Jordan Hornblow published a breakdown of Suno prompting this week that's worth reading for anyone building agent pipelines that touch generative media. His background isn't independently verifiable and the post doesn't link to systematic testing — these are assertions, not demonstrations — but the underlying argument stands on its own terms.

The core claim: Suno doesn't follow instructions. It pattern-matches against its training distribution. A prompt like "808 slides triplet hi-hats bell melody autotune rap" will consistently outperform "a dark atmospheric trap song with professional sound design" because the first set of tokens maps directly onto structures the model has seen in training data. Hornblow's heuristic is to think like a music producer briefing a session musician: lead with the rhythm section, then melody and vocals, because earlier tokens appear to carry more weight in shaping the output. Arrangement markers — "bridge," "breakdown," "outro" — can also extend tracks beyond Suno's default 1–2 minute range, which matters if you're generating audio for anything with commercial requirements.
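The ordering heuristic is mechanical enough to encode. A minimal sketch of a prompt builder that enforces it — `ProducerBrief` and `build_prompt` are hypothetical names, not anything from Hornblow's post or Suno's interface:

```python
from dataclasses import dataclass, field

@dataclass
class ProducerBrief:
    """A producer-style brief, ordered the way the post suggests:
    rhythm first, then melody and vocals, then arrangement markers."""
    rhythm: list[str] = field(default_factory=list)       # e.g. "808 slides"
    melody: list[str] = field(default_factory=list)       # e.g. "bell melody"
    vocals: list[str] = field(default_factory=list)       # e.g. "autotune rap"
    arrangement: list[str] = field(default_factory=list)  # e.g. "bridge", "outro"

def build_prompt(brief: ProducerBrief) -> str:
    # Concatenate in priority order so the (claimed) highest-weight
    # tokens land earliest in the prompt.
    parts = brief.rhythm + brief.melody + brief.vocals + brief.arrangement
    return " ".join(parts)

brief = ProducerBrief(
    rhythm=["808 slides", "triplet hi-hats"],
    melody=["bell melody"],
    vocals=["autotune rap"],
    arrangement=["bridge", "outro"],
)
print(build_prompt(brief))
# 808 slides triplet hi-hats bell melody autotune rap bridge outro
```

The point of structuring it this way is that an agent can fill the fields in any order it likes; the ordering policy lives in one place and is trivial to change if the token-weighting claim turns out to be wrong.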

The recommended workflow is iterative: batch-generate from a single prompt, select the strongest output, then extend or remix. That loop maps naturally onto an agent pipeline and is probably how most production use of Suno already works.
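As an agent loop, that workflow is three stubs and a selection step. Everything below is a sketch: `generate_batch`, `score`, and `extend` are stand-ins for a real generation call, a quality metric (or human review), and Suno's extend/remix feature respectively — none of these are real API calls.

```python
import random

def generate_batch(prompt: str, n: int = 4) -> list[str]:
    # Stand-in for a real generation call; returns n candidate track IDs.
    return [f"{prompt}::take-{i}" for i in range(n)]

def score(track: str) -> float:
    # Stand-in for an automated quality metric or a human-in-the-loop rating.
    return random.random()

def extend(track: str, marker: str) -> str:
    # Stand-in for an extend/remix step driven by an arrangement marker.
    return f"{track}+{marker}"

def generate_select_extend(prompt: str, markers: list[str]) -> str:
    candidates = generate_batch(prompt)          # batch-generate from one prompt
    best = max(candidates, key=score)            # select the strongest output
    for marker in markers:                       # then extend it section by section
        best = extend(best, marker)
    return best

result = generate_select_extend("808 slides triplet hi-hats", ["bridge", "outro"])
```

The select step is where the real cost sits: if scoring is a human listening to four candidates, the loop is cheap to orchestrate but slow to run, which is worth knowing before wiring it into an unattended pipeline.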

Beyond the Suno-specific advice, Hornblow generalises the principle across modalities. Code models, he says, respond better to function signatures and edge-case examples than to prose descriptions of intent. Image models respond to composition and lighting cues. If you're designing an agent that orchestrates across code, image, and audio generation, the same question applies at every node: what does this model's training distribution make salient? That's a more useful frame than maintaining a library of domain-specific prompt recipes.
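That frame also suggests a concrete architecture: per-modality prompt strategies behind one dispatch point, so each node answers the "what is salient to this model?" question locally. The strategy functions and field names below are illustrative assumptions, not anything prescribed by the post.

```python
# Each strategy reshapes a generic task dict into the token style the post
# claims that modality's models respond to. Field names are hypothetical.
def audio_style(task: dict) -> str:
    # Rhythm-first ordering, per the Suno heuristic.
    return " ".join(task["rhythm"] + task["melody"])

def code_style(task: dict) -> str:
    # Lead with a function signature and edge cases instead of prose intent.
    cases = "; ".join(task["edge_cases"])
    return f"{task['signature']}  # edge cases: {cases}"

def image_style(task: dict) -> str:
    # Composition and lighting cues rather than narrative description.
    return ", ".join(task["composition"] + task["lighting"])

STRATEGIES = {"audio": audio_style, "code": code_style, "image": image_style}

def render_prompt(modality: str, task: dict) -> str:
    return STRATEGIES[modality](task)
```

The payoff over a library of prompt recipes is that the dispatch table stays small and each strategy is a single function you can revise as you learn what a given model actually keys on.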

Whether the specific claims about token weighting would hold up under systematic testing is an open question — the post doesn't try to answer it. But most prompt engineering writing defaults to cargo-cult specificity rather than first-principles reasoning, and this at least attempts the latter.