You've been writing the wrong document.
The PRD was designed for a world where a PM writes a spec, hands it to engineering, and waits. That handoff created a gap — so the PRD got longer. More context. More mockups. More "background" sections nobody reads.
Intent specs work differently. They're short, testable, and linked to real user evidence. And they turn out to be better alignment artifacts than the 5-page doc they replace.
A PRD tells the team what you want. An intent spec shows them what the user needs and how you'll know it's done.
## What Stakeholders Actually Need to See
When your VP asks "what are we building next?" or your designer asks "what problem are we solving?" — they don't need your implementation notes. They need three things:
1. The objective — What user problem are we solving, in one sentence?
2. The outcomes — What will be true when this ships? (Not what we'll build — what will change.)
3. The edge cases — What could go wrong, and have we thought about it?
That's it. An intent spec contains exactly these three things, plus verification criteria that make "done" unambiguous.
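One way to picture that shape is as a small data structure: four fields, nothing else. This is an illustrative sketch, not a prescribed format — the class and field names here are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class IntentSpec:
    """A one-screen spec: objective, outcomes, edge cases, verification."""
    objective: str  # the user problem, in one sentence
    outcomes: list[str] = field(default_factory=list)      # what will be true after shipping
    edge_cases: list[str] = field(default_factory=list)    # what could go wrong
    verification: list[str] = field(default_factory=list)  # testable "done" criteria

# A hypothetical spec for an export feature:
spec = IntentSpec(
    objective="Users can't get their data out of the app for offline analysis.",
    outcomes=["Users export their data without contacting support."],
    edge_cases=["Dataset exceeds the export size limit."],
    verification=["User can export a CSV in under 5 seconds for up to 10,000 rows."],
)
```

If a section doesn't map to one of those four fields, it belongs somewhere else — usually in the evidence the spec links to.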
Compare:
| PRD | Intent Spec |
|---|---|
| 5–15 pages | Half a page |
| Background, goals, user stories, wireframes, technical notes, open questions | Objective, outcomes, edge cases, verification |
| Read time: 20+ minutes | Read time: 2 minutes |
| "What does paragraph 3 on page 7 mean?" | Everything fits on one screen |
| Updated quarterly (maybe) | Updated with every conversation |
## The 10-Minute Spec Review
Here's the workflow that replaces your 45-minute PRD walkthrough:
### Step 1: Share the Spec Before the Meeting
Send the intent spec link 24 hours before. It takes 2 minutes to read. Everyone arrives having already read it. (This never happens with PRDs because nobody reads 12 pages before a meeting.)
### Step 2: Start With the Evidence
Don't start with "here's what we're building." Start with "here's what users are experiencing."
The intent spec links to its source evidence. Open the evidence cluster. Show the friction items, the quotes, the metrics. Spend 3 minutes here. This is where alignment actually happens — when the room agrees on the problem.
### Step 3: Walk the Outcomes
Read each outcome aloud. For each one, ask: "If this is true after we ship, are we satisfied?"
This is a different question from "do you like this feature?" Outcomes are testable propositions: the team either agrees that these outcomes solve the problem or it doesn't. There's no room for "I think the button should be blue," because the outcomes don't specify buttons.
### Step 4: Stress-Test the Edge Cases
Edge cases are where alignment falls apart in traditional processes. The PM assumed one thing, engineering assumed another, and you discover the disagreement three days before launch.
In an intent spec, edge cases are explicit. Walk through each one. If someone raises a new edge case — add it right there in the meeting. The spec is live, not a frozen PDF.
### Step 5: Ship the Decision
At the end of 10 minutes, the spec is either approved, updated with new edge cases, or sent back for more evidence. No action items. No follow-up doc. The spec IS the document of record.
## Verification Criteria Replace "Definition of Done" Debates
Every team has had this argument: "Is this done?" "Well, it depends on what you mean by done."
Intent specs eliminate this by including verification criteria — concrete, testable statements that define what "shipped" means:
- "User can export a CSV in under 5 seconds for datasets up to 10,000 rows"
- "Error state shows a retry button with the specific failure reason"
- "New users reach their first successful export within 3 steps"
These aren't acceptance criteria written by engineers. They're written during spec design, informed by the evidence. When engineering delivers, you check the list. Done means done.
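Criteria written this way tend to translate directly into automated checks. A minimal sketch of the first criterion above, assuming a hypothetical `export_csv` function (simulated here as a stand-in for the real feature):

```python
import time

def export_csv(rows: int) -> str:
    """Stand-in for the real export; generates a CSV of the given size."""
    return "col_a,col_b\n" + "\n".join(f"{i},{i * 2}" for i in range(rows))

def verify_export_speed(max_rows: int = 10_000, budget_s: float = 5.0) -> bool:
    """Criterion: export a CSV in under 5 seconds for up to 10,000 rows."""
    start = time.perf_counter()
    export_csv(max_rows)
    return time.perf_counter() - start < budget_s
```

When "done" is a predicate like this rather than an opinion, the check-the-list moment at delivery has a single possible answer.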
## Handling Pushback
Stakeholder pushback is not a problem. It's the whole point.
When someone says "I don't think this outcome is right," they've just given you an edge case or a constraint. Add it to the spec. The spec gets better.
When someone says "we should also add X," check the evidence. Is there evidence supporting X? If yes, it might belong in this intent or in a new one. If no, it's an assumption — and assumptions don't make it into intent specs.
This is how you say "no" without saying "no." The evidence speaks for you.
💡 Tip: When a stakeholder pushes a pet feature with no evidence, don't argue. Say "Let's add that to the Evidence Board and see if user data supports it." Either the evidence appears and you build it, or it doesn't and the conversation ends naturally.
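The triage rule in this section reduces to a three-way decision. A sketch, with invented function and label names, just to make the routing explicit:

```python
def triage_feedback(is_edge_case: bool, has_evidence: bool) -> str:
    """Route a piece of spec-review feedback to exactly one destination."""
    if is_edge_case:
        return "add to this spec's edge cases"   # new edge case or constraint
    if has_evidence:
        return "this intent or a new one"        # evidence-backed idea
    return "park on the Evidence Board"          # assumption; wait for data

# A pet feature with no supporting evidence gets parked, not argued over:
print(triage_feedback(is_edge_case=False, has_evidence=False))
```

Every piece of feedback lands in exactly one bucket, which is why the review can end in ten minutes.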
## Anti-Patterns
❌ The PRD in Disguise: Writing a 3-page intent spec with "Background" and "Technical Considerations" sections. If it doesn't fit on one screen, you're writing a PRD with a new name.
❌ The Rubber Stamp Review: Sharing the spec in a meeting where nobody pushes back. If everyone agrees immediately, either the problem is trivial or people didn't read the spec. Probe.
❌ The Spec Hoarder: Writing intent specs but never sharing them, then building in isolation. The spec is an alignment tool. If nobody else has seen it, it's just your notes.
❌ The Scope Creep Meeting: Starting a spec review and ending with 15 new requirements bolted on. Each piece of feedback is either an edge case (add it), a new intent (create a separate one), or an opinion without evidence (park it).
## Next Steps
- Pick your next feature discussion. Instead of writing a PRD, share the intent spec 24 hours before.
- Run a 10-minute review. Follow the five steps above. Time it.
- Compare. Was alignment faster? Were the decisions clearer? Did "done" become unambiguous?