Linear's Karri Saarinen recently wrote that design is more than code. His argument: the industry is collapsing the nuanced work of designing the problem into the tactical act of designing the solution. We're shipping faster than ever while understanding less than ever about what we're shipping.
He's right. And the problem is worse than he describes.
Because it's not just that teams choose to skip problem definition. It's that the entire toolchain rewards them for skipping it. Every tool in the modern stack accelerates execution. Nothing enforces the thinking that should precede it.
The Asymmetry
Look at how much infrastructure exists for the execution side of product development:
- Linear structures work into projects, cycles, and priorities.
- GitHub versions every change, enforces review gates, and tracks history.
- Vercel deploys on every merge. CI/CD pipelines catch regressions automatically.
- Cursor and Claude Code generate working code from natural language in seconds.
The rigor here isn't cultural—it's structural. You can't merge without a PR. You can't ship without a passing build. You can't deploy without the pipeline. Infrastructure forces discipline, even on teams that wouldn't maintain it voluntarily.
Now look at the intent side—the work of figuring out what to build and why:
- A Google Doc, half-finished, last edited three weeks ago.
- A FigJam board covered in sticky notes from a brainstorm nobody summarized.
- A Slack thread where the PM, designer, and tech lead reached a conclusion that was never recorded anywhere.
- A Jira ticket that says "Improve checkout UX."
There's no structure here. No versioning. No review gate. No equivalent of "you can't merge without a PR" for "you can't build without a defined problem." The entire front end of product development runs on culture, goodwill, and the hope that someone in the room is paying attention.
When execution was slow, this asymmetry was tolerable. Defining the problem might have taken a week, but building the solution took months. There was time for the messy thinking to settle into shared understanding through conversations, standups, and code reviews.
AI has removed that buffer. When execution is instant, there's no settling period. The gap between "someone had an idea" and "the code is in production" can be measured in hours. And every hour of that gap that should have been spent on problem definition gets compressed into a prompt.
What Gets Lost
Karri identifies the cost clearly: teams start optimizing solutions before verifying problems. But the downstream effects are more specific than that, and worth naming.
You lose the "why." When a feature ships without a structured problem definition, nobody can explain why it exists six weeks later. The rationale lived in the conversation that preceded the prompt. The conversation wasn't recorded. The prompt was discarded. The code is the only artifact—and code tells you what was built, never why.
You lose the edge cases. Problem definition is where edge cases surface. When you skip it, edge cases get discovered in production—by users, not by the team. An AI agent asked to "add a checkout flow" will build the happy path beautifully. It won't add rate limiting, handle expired sessions, or account for the payment provider's webhook failures—because those constraints weren't in the prompt, and constraints like these only surface in advance when you design the problem before designing the solution.
You lose cross-functional alignment. A PM, a designer, and an engineer all have different mental models of "improve checkout UX." Without a structured artifact that forces them to reconcile those models, each person optimizes for their own interpretation. The designer creates a beautiful flow. The engineer optimizes for performance. The PM wanted conversion rate improvement. All three ship something. None of them ship the same thing.
You lose the ability to verify. If you never defined what success looks like, you can't measure whether you achieved it. Post-launch, the team looks at dashboards and asks "is this working?"—but "working" was never specified. So they look at whatever metrics are trending in the right direction and declare victory. The actual user friction that motivated the work may be unchanged.
Culture Alone Won't Fix This
The instinct is to solve this culturally. Write better docs. Have more design reviews. Encourage teams to think before they build. Karri gestures in this direction—keep "consideration" alive as a value.
I agree with the value. I'm skeptical of the mechanism.
Culture is what you practice when nobody's enforcing it. And cultural disciplines erode under pressure. When the CEO wants a feature by Friday, when the competitor just shipped something similar, when the AI can have a working prototype in twenty minutes—"let's spend two days on problem definition" feels like a luxury.
This is exactly the dynamic that played out with code quality. For years, the industry said "write tests" and "do code review" and relied on culture to make it happen. It didn't work—not because people disagreed, but because deadlines always won. What worked was infrastructure: CI/CD pipelines that refuse to merge untested code. The discipline became structural, and suddenly everyone had test coverage.
Problem definition needs the same structural enforcement. Not because teams don't value it, but because without infrastructure, it's the first thing that gets cut when execution is fast and deadlines are real.
What Infrastructure for Intent Looks Like
If the execution side of development has Linear, GitHub, and Vercel, what does the intent side need?
A structured artifact, not a document. Problem definition today lives in prose—Google Docs, Notion pages, Confluence wikis. Prose is flexible, which makes it great for exploration. But flexibility means there's no enforced structure. You can write a "spec" that's missing constraints, edge cases, and success criteria, and nothing in the system will flag it. The intent artifact needs to be structured: explicit objectives, explicit constraints, explicit verification criteria. Something a human can review and an agent can execute against.
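To make "structured, not prose" concrete, here is one possible shape for such an artifact. This is a sketch, not a real schema: `IntentSpec`, its fields, and `missing_sections` are all invented for illustration. The point is that a structured artifact can flag its own gaps, which a prose doc never will.

```python
from dataclasses import dataclass, field

@dataclass
class IntentSpec:
    """Hypothetical structured intent artifact. Field names are illustrative."""
    objective: str                                             # the user outcome this work targets
    constraints: list[str] = field(default_factory=list)       # known limits and edge cases
    success_criteria: list[str] = field(default_factory=list)  # how we'll verify it worked
    evidence: list[str] = field(default_factory=list)          # tickets, transcripts, analytics

def missing_sections(spec: IntentSpec) -> list[str]:
    """Flag the sections a prose document would silently let you skip."""
    gaps = []
    if not spec.objective.strip():
        gaps.append("objective")
    if not spec.constraints:
        gaps.append("constraints")
    if not spec.success_criteria:
        gaps.append("success_criteria")
    return gaps

# A spec that only states an objective gets flagged, not silently accepted.
draft = IntentSpec(objective="Improve checkout UX")
print(missing_sections(draft))  # → ['constraints', 'success_criteria']
```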
Versioning and history. Code is versioned. Designs are versioned. Intent is... overwritten in a Google Doc with no changelog. When the team revisits a decision three months later, they can git blame the code but they can't trace why it was built that way. Intent needs the same temporal discipline that code has: versions, diffs, and a record of how understanding evolved.
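The temporal discipline itself is not exotic. Once intent lives in a structured, serializable artifact, diffing two versions of it is a standard-library exercise. The two versions below are invented examples, but the mechanism is exactly what code already gets:

```python
import difflib

# Two hypothetical versions of the same intent artifact, serialized as text.
v1 = """objective: Improve checkout UX
constraints:
- must work with existing payment provider
"""
v2 = """objective: Raise checkout conversion by reducing form friction
constraints:
- must work with existing payment provider
- handle expired sessions
"""

# A unified diff gives intent the same "what changed, and when" record code has.
diff = difflib.unified_diff(
    v1.splitlines(), v2.splitlines(),
    fromfile="intent@v1", tofile="intent@v2", lineterm="",
)
print("\n".join(diff))
```

Three months later, this is the trail that answers "why did the objective change?"—the equivalent of git blame for intent.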
A review gate. The most powerful piece of infrastructure in modern development is the pull request. Not because it catches bugs—code review is mediocre at that—but because it forces a pause. Someone has to look at the change before it ships. Intent needs an equivalent gate: no handoff to execution until the problem definition has been reviewed against explicit criteria. Not a rubber stamp. A structural checkpoint.
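Enforced, such a gate looks like CI for intent: a check that refuses handoff until explicit criteria are met. A minimal sketch, with an invented function and invented required keys, assuming the spec is handed around as a plain dict:

```python
def gate_handoff(spec: dict, reviewers_approved: bool) -> None:
    """Refuse handoff to execution unless the intent spec passes explicit checks.

    `spec` is a hypothetical dict form of the intent artifact; the required
    keys are illustrative, not a real standard.
    """
    required = ("objective", "constraints", "success_criteria")
    gaps = [key for key in required if not spec.get(key)]
    if gaps:
        raise ValueError(f"intent spec incomplete: missing {gaps}")
    if not reviewers_approved:
        raise ValueError("intent spec has not been reviewed")
    # Only now does the work move to execution: ticket creation, agent prompt, etc.

# An underspecified ticket fails the gate instead of becoming a prompt.
try:
    gate_handoff({"objective": "Improve checkout UX"}, reviewers_approved=False)
except ValueError as err:
    print(err)  # → intent spec incomplete: missing ['constraints', 'success_criteria']
```

The analogy to the pull request holds: the check is cheap, but it converts a cultural norm into a structural one.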
Traceability to user friction. The input to problem definition shouldn't be "what does the PM think we should build?" It should be "what friction are users actually experiencing?" Support tickets, research transcripts, analytics—these are the raw material. The spec should trace back to this evidence, not to someone's intuition.
The Machine for Designing the Problem
Karri is right that design is more than code. But framing it as a mindset problem underestimates how structural the issue is.
The industry didn't get disciplined about execution because thought leaders wrote essays about the value of testing. It got disciplined because the toolchain made rigor the default. CI/CD didn't ask teams to value quality—it made quality a gate that couldn't be bypassed.
We need the same thing for intent. Not essays about the importance of thinking. Infrastructure that makes thinking a first-class, structured, versioned, reviewable artifact in the development workflow.
Linear gave us the machine for building the solution. We still need the machine for designing the problem.
Don't Just Write Code. Define Intent.
Turn user friction into structured Intent Specs that drive your AI agents.