You gave your AI agent access to Jira. It read every ticket, understood every subtask, followed the acceptance criteria word for word.
And it still built the wrong thing.
This is happening everywhere. Teams wire up their agent to the backlog and expect magic. What they get is a faster way to ship the same broken features—because the backlog was never designed to be read by a machine.
## Anatomy of a Jira Ticket
Here's a real ticket, lightly anonymized:
> **Title:** Improve checkout UX
>
> **Description:** Users are dropping off at checkout. Make it faster.
>
> **Acceptance Criteria:**
> - Checkout completes in under 3 seconds
> - Show loading indicator during payment processing
> - Handle payment errors gracefully
A human engineer reads this and thinks: I should probably talk to the PM. "Dropping off" could mean ten things. "Handle errors gracefully" means nothing until I see the edge cases.
An AI agent reads this and thinks: Three acceptance criteria. I'll satisfy all three.
The human knows the ticket is incomplete. The agent doesn't.
## The Four Gaps
Every Jira ticket has the same structural holes. They don't matter when humans fill them through conversation. They're fatal when an agent fills them with assumptions.
**The Context Gap.** Why are users dropping off? Speed? Confusion? Trust? A broken coupon field? The ticket says "dropping off." The agent picks the first interpretation that satisfies the acceptance criteria. If the real problem is trust, you'll get a fast checkout that nobody trusts any more than the slow one.
**The Intent Gap.** "Improve checkout UX" is a direction, not a destination. The agent will optimize for what's measurable in the acceptance criteria—load time and a spinner—without touching the actual friction. You'll get a faster path to the same dead end.
**The Constraint Gap.** What can't change? Does the checkout need to work without JavaScript? Does it need to support a third-party payment widget that can't be modified? Does the order summary need to stay above the fold? The ticket is silent. The agent will make choices. You won't like all of them.
**The Verification Gap.** "Handle errors gracefully" is a human judgment call. The agent has no way to evaluate it. So it adds a try/catch and a generic error message and marks the criterion as satisfied. Technically correct. Functionally useless.
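The verification gap is easy to see in code. Here's a hedged sketch (the function names and error taxonomy are hypothetical, not from any real payment SDK) contrasting the literal reading of "handle errors gracefully" with what a checkable contract like "failed payments surface a recoverable error type plus a retry action" would force:

```python
# Hypothetical error type; real payment SDKs define their own taxonomies.
class PaymentError(Exception):
    def __init__(self, code: str):
        super().__init__(code)
        self.code = code

# What "handle errors gracefully" tends to produce: a catch-all that
# technically satisfies the criterion and helps nobody recover.
def pay_agent_version(charge):
    try:
        return {"ok": True, "result": charge()}
    except Exception:
        return {"ok": False, "message": "Something went wrong."}

# What the contract forces: every failure maps to a named, recoverable
# error type with a concrete next action for the user.
RECOVERABLE = {
    "card_declined": "try_another_card",
    "network_timeout": "retry_payment",
    "expired_card": "update_card_details",
}

def pay_contract_version(charge):
    try:
        return {"ok": True, "result": charge()}
    except PaymentError as err:
        action = RECOVERABLE.get(err.code)
        if action is None:
            raise  # unrecoverable: fail loudly rather than swallow it
        return {"ok": False, "error": err.code, "retry_action": action}
```

The first version passes review of the ticket. Only the second version can pass review of a contract, because the contract names the behaviors a test can assert on.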
These gaps aren't bugs in Jira. They're features. Jira was designed for coordination between humans—a shared reminder to have a conversation. The ticket is the starting point, not the spec.
But when the agent reads it, the starting point is the spec.
## The Contract Your Agent Actually Needs
The fix isn't better ticket hygiene. It's a different artifact.
| What Jira Says | What the Agent Needs |
|---|---|
| "Improve checkout UX" | Objective: Reduce cart abandonment at payment step from 23% to 15% |
| "Make it faster" | Constraint: Checkout must complete in under 2s on 3G connections |
| "Handle errors gracefully" | Outcome: Failed payments surface recoverable error type + retry action |
| (not mentioned) | Verification: A/B test shows 10%+ conversion improvement within 2 weeks |
The left column is a conversation starter. The right column is a contract.
An agent can execute a contract. It can only guess at a conversation starter. And the guesses compound: a wrong assumption about context produces a wrong assumption about constraints, which produces code that satisfies every acceptance criterion while solving nothing.
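One way to make the right-hand column concrete is as structured data. A minimal sketch, assuming a hypothetical schema (these field names are illustrative, not a real standard); the point is that every entry is mechanically checkable, where "improve checkout UX" is not:

```python
# Illustrative shape for the right-hand column of the table above.
# Values come from the example contract; field names are hypothetical.
intent_spec = {
    "objective": {
        "metric": "cart_abandonment_at_payment_step",
        "current": 0.23,
        "target": 0.15,
    },
    "constraints": [
        {"rule": "checkout_completes_under_ms", "value": 2000,
         "conditions": "3G connection"},
    ],
    "outcomes": [
        "failed payments surface a recoverable error type and a retry action",
    ],
    "verification": {
        "method": "A/B test",
        "success": "conversion improves by >= 10% within 2 weeks",
    },
}

# An agent (or a CI gate) can check a contract for completeness;
# it cannot check a conversation starter.
def is_executable(spec) -> bool:
    required = ("objective", "constraints", "outcomes", "verification")
    return all(spec.get(key) for key in required)
```

A ticket-shaped dict like `{"title": "Improve checkout UX"}` fails this check immediately, which is exactly the feedback a backlog never gives you.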
## Why "Better Tickets" Won't Save You
The instinct is to write longer, more detailed Jira tickets. Add more acceptance criteria. Write better descriptions.
This helps marginally—and misses the point.
The problem isn't ticket quality. It's ticket structure. A Jira ticket is organized around what to build. An agent-ready spec is organized around what success looks like.
That's not a copywriting difference. It's an architectural one:
- **Objectives** replace titles. Measurable outcomes replace vague improvements.
- **Constraints** are explicit, not implied by team culture and tribal knowledge.
- **Edge cases** are enumerated, not discovered in code review.
- **Verification criteria** are defined before the first line of code, not after the third round of QA.
This is what an Intent Spec does. It restructures the information your agent needs from "notes for a human conversation" into "a machine-executable contract with explicit boundaries."
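That restructuring can be enforced before an agent ever sees the work. A sketch of such a gate, under stated assumptions (the spec is a plain dict with hypothetical keys, and "measurable" is crudely approximated as "contains a number"):

```python
import re

def lint_spec(spec: dict) -> list[str]:
    """Return the structural gaps in a would-be Intent Spec.

    Field names are hypothetical. The four checks mirror the four
    requirements: measurable objective, explicit constraints,
    enumerated edge cases, verification defined up front.
    """
    gaps = []
    # Crude proxy for "measurable": the objective states a number.
    if not re.search(r"\d", str(spec.get("objective", ""))):
        gaps.append("objective has no measurable target")
    if not spec.get("constraints"):
        gaps.append("constraints are implicit (tribal knowledge)")
    if not spec.get("edge_cases"):
        gaps.append("edge cases not enumerated")
    if not spec.get("verification"):
        gaps.append("verification undefined before coding")
    return gaps
```

Run against the ticket from the top of this post, the gate reports all four gaps; run against the contract from the table, it reports none. That difference is the whole argument.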
Your agent isn't broken. Your backlog is.
Jira tracks work. It was never meant to define it. And if your agent keeps building the wrong thing, the problem isn't the model—it's what you handed it.
A ticket is a reminder to talk to someone. A spec is the result of having talked. That's intent engineering.
## Don't Just Write Code. Define Intent.
Turn user friction into structured Intent Specs that drive your AI agents.
Get Started for Free