What is Intent Engineering?
Intent Engineering is the practice of translating user problems and product goals into structured specifications that both humans and AI agents can act on without ambiguity.
Traditional product workflows rely on tickets, PRDs, and verbal handoffs — formats that assume a human will fill in the gaps. AI coding agents cannot fill those gaps. They need specifications that state exactly what should happen, what success looks like, and where the boundaries are.
Intent Engineering closes this gap. Instead of writing a Jira ticket that says "improve checkout speed," an intent-engineered spec defines the user friction, the measurable outcome, the constraints, and the verification criteria — all in a format an agent can execute without asking follow-up questions.
Why it matters now
The rise of AI coding agents — Claude Code, Cursor, GitHub Copilot — has shifted the bottleneck in software development. Writing code is no longer the hard part. Deciding what to build and specifying it precisely enough for an autonomous agent is.
Teams that write precise intent get dramatically better output from AI agents. Teams that prompt with vague descriptions get hallucinated features, wasted tokens, and manual rework that erases the speed gains AI was supposed to deliver.
This is not a tooling problem. It is a specification problem. Intent Engineering treats specification as a first-class engineering discipline, not an administrative chore that happens in a ticket tracker five minutes before sprint planning.
A concrete example
Consider a real scenario: your analytics show that 35% of users abandon your onboarding flow at step 3 of 5.
Without intent engineering, the ticket says: "Fix onboarding drop-off." An AI agent receiving this has to guess what "fix" means, which step to focus on, what success looks like, and what it's allowed to change. It might restructure the entire flow, remove required fields, or make cosmetic changes that don't address the underlying problem.
With intent engineering, the spec says:
- Objective: Users abandon onboarding at step 3 (address entry) because the form requires 6 fields when auto-complete could reduce it to 1. This costs ~200 signups/week.
- User Goal: Complete onboarding in under 90 seconds without manually entering a full address.
- Outcomes: Address entry step uses a type-ahead geocoding field. The 6 individual fields are replaced with a single search input. Users who select a suggested address proceed to step 4 in under 10 seconds. Drop-off at step 3 decreases by at least 40%.
- Edge Cases: If the geocoding API is unavailable, fall back to the existing manual form. Addresses outside the supported regions show a clear "not yet available" message. PO Box addresses must still be enterable manually.
- Verification: Integration test confirms geocoding lookup returns results for 5 sample addresses. Fallback test confirms the manual form renders when the API returns a 500. Load test confirms p95 response time under 300ms.
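A spec like this can also be captured as structured data so that tooling and agents can parse it directly. Here is a hypothetical sketch in Python; the `IntentSpec` class and its field names are illustrative, not a standard format:

```python
from dataclasses import dataclass

@dataclass
class IntentSpec:
    """Illustrative structure mirroring the onboarding spec above."""
    objective: str
    user_goal: str
    outcomes: list[str]
    edge_cases: list[str]
    verification: list[str]

onboarding_spec = IntentSpec(
    objective="35% of users abandon onboarding at step 3 (address entry); "
              "~200 lost signups/week.",
    user_goal="Complete onboarding in under 90 seconds without manually "
              "entering a full address.",
    outcomes=[
        "Address step uses a single type-ahead geocoding input instead of 6 fields",
        "Users who select a suggested address reach step 4 in under 10 seconds",
        "Drop-off at step 3 decreases by at least 40%",
    ],
    edge_cases=[
        "Geocoding API unavailable -> fall back to the existing manual form",
        "Unsupported region -> show a clear 'not yet available' message",
        "PO Box addresses remain manually enterable",
    ],
    verification=[
        "Integration test: lookup returns results for 5 sample addresses",
        "Fallback test: manual form renders when the API returns a 500",
        "Load test: p95 response time under 300ms",
    ],
)
```

Because every outcome and verification step is a discrete entry rather than prose, an agent can enumerate them and check each one off.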
The agent now has everything it needs. No guessing, no hallucinating features, no follow-up questions.
How it differs from traditional spec writing
| | PRD | User Story | Prompt Engineering | Intent Engineering |
|---|---|---|---|---|
| Focus | Feature description | User desire | Single AI interaction | Structured user problem |
| Defines success | Implicitly | Acceptance criteria | ✗ Not at all | ✓ Measurable outcomes |
| Handles edge cases | ✗ Rarely | ~ Sometimes | ✗ Never | ✓ Explicitly |
| Grounded in evidence | ~ Occasionally | ~ Varies | ✗ No | ✓ Always |
| Machine-executable | ✗ No | ✗ No | ~ Partially | ✓ Yes |
| Verifiable | By humans | By humans | ✗ No | ✓ By tests + agents |
| Durability | Document that drifts | Ticket that closes | Ephemeral conversation | ✓ Versioned artifact |
A PRD says "add a dashboard." An intent spec says "users can't find their weekly metrics — surface them in under 2 clicks with less than 500ms load time." A user story captures what someone wants. An intent spec adds why it matters (evidence), what "done" means (outcomes), and what must not break (edge cases). A prompt optimizes a single AI interaction. An intent spec structures the entire problem so any agent can execute it correctly.
The six parts of an IntentSpec
Every intent spec follows a consistent structure:
1. Objective — What user problem are you solving and why does it matter? Grounded in evidence: a support ticket, a metric, a user quote, a behavioral signal. Not "we think users want X" but "we observed users struggling with Y."
2. Outcomes — Observable, measurable state changes. Not "improve performance" but "p95 response time under 200ms." Every outcome should be verifiable by a test or a metric.
3. Evidence — Links to real user signals: friction points, quotes, observations, metrics, feature requests. If you can't point to actual user evidence, you shouldn't be building it.
4. Constraints — Hard boundaries the implementation must respect — security requirements, architectural limits, business rules. What the agent must NOT do.
5. Edge Cases — Boundary conditions, failure modes, and scenarios that could break the implementation. What happens when the network is down? What about empty states? What if the user has 10,000 items instead of 10?
6. Verification — How do you confirm it works? Unit tests, integration tests, manual checks. This closes the loop — the agent knows exactly what "done" looks like and can verify its own work.
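One practical consequence of a fixed six-part structure is that completeness can be checked mechanically before a spec ever reaches an agent. A minimal sketch, assuming specs are stored as plain dictionaries (the section keys and `validate_spec` helper are hypothetical):

```python
# The six required sections of an IntentSpec, per the structure above.
REQUIRED_SECTIONS = [
    "objective", "outcomes", "evidence",
    "constraints", "edge_cases", "verification",
]

def validate_spec(spec: dict) -> list[str]:
    """Return a list of problems; an empty list means the spec is complete."""
    problems = []
    for section in REQUIRED_SECTIONS:
        if not spec.get(section):
            problems.append(f"missing or empty section: {section}")
    # Outcomes should be discrete, checkable statements, not a prose blob.
    if isinstance(spec.get("outcomes"), str):
        problems.append("outcomes should be a list of measurable statements, not prose")
    return problems
```

A lint step like this catches the most common failure mode early: a spec with an objective and outcomes but no evidence or edge cases.

```python
draft = {"objective": "Reduce onboarding drop-off", "outcomes": ["..."]}
print(validate_spec(draft))
# flags the four missing sections: evidence, constraints, edge_cases, verification
```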
How teams practice intent engineering
Intent engineering is not a one-time documentation exercise. It is a workflow that connects user research to shipped software:
1. Capture friction. Collect signals from support tickets, user interviews, analytics drop-offs, NPS comments, and session recordings. These are the raw inputs.
2. Structure into specs. Take the strongest friction signals and write intent specs using the six-part format. Each spec should trace back to real evidence, not assumptions.
3. Execute with agents. Hand the spec to an AI coding agent. A well-written spec should produce a working implementation on the first pass — or surface specific questions where the spec was ambiguous.
4. Verify and ship. Use the verification criteria in the spec to confirm the implementation works. If it doesn't, the problem is usually in the spec, not the agent.
5. Close the loop. After shipping, check whether the outcomes you specified actually materialized. Did the drop-off decrease? Did the response time improve? This evidence feeds back into the next round of specs.
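Step 5 can be made concrete by comparing the outcomes the spec promised against post-ship metrics. A minimal sketch, using made-up numbers for the onboarding example (the `outcome_met` helper and the observed figures are hypothetical):

```python
def outcome_met(target: float, observed: float, direction: str) -> bool:
    """Check a numeric outcome, e.g. 'drop-off decreases by at least 40%'."""
    if direction == "at_least":
        return observed >= target
    return observed <= target

# Hypothetical post-ship check for the onboarding spec:
baseline_dropoff = 0.35   # 35% abandoned step 3 before the change
shipped_dropoff = 0.19    # measured after shipping (illustrative)
reduction = (baseline_dropoff - shipped_dropoff) / baseline_dropoff

# A ~45.7% reduction clears the spec's "at least 40%" outcome.
assert outcome_met(0.40, reduction, "at_least")
```

When an outcome fails a check like this, that failure is itself new evidence, and it seeds the objective of the next spec.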
Who needs intent engineering
Intent engineering is most valuable for product teams that use AI coding agents as part of their development workflow. This includes:
- Product managers who write specs that AI agents will execute
- Engineering leads who want consistent, reviewable specifications across their team
- Founders and solo builders who use AI agents to move faster without sacrificing precision
- Design teams who need to translate user research into actionable development specs
If you are handing work to an AI agent and the output is unpredictable, the problem is almost certainly in the specification. Intent engineering is the fix.