Y Combinator just published their Spring 2026 Requests for Startups. One of the seven ideas: Cursor for Product Managers.
Read past the headline and the description is sharp. Writing code is only part of building a product people want. The most important part is figuring out what to build in the first place. There's no system that supports the full loop of product discovery. Teams need a tool into which they can upload customer interviews and usage data, synthesize what's happening, and get structured definitions that agents can act on.
We agree with almost every word. But the name reveals two mistakes that will send builders in the wrong direction.
Mistake 1: It's Not a PM Tool
"Cursor for PMs" frames this as a persona tool—a better instrument for a specific role. Productboard for the AI era. Aha! but smarter.
But the problem the RFS describes isn't a PM problem. It's a system problem.
Engineers need to understand why they're building something before they build it. Founders need structured thinking to decide what's worth the next six weeks. Agents need explicit constraints and success criteria or they hallucinate the requirements. The gap between user friction and working software isn't owned by PMs. It's owned by everyone—or more often, by nobody.
Calling it "Cursor for PMs" puts it on the PM's desk. But the reason the problem exists is that intent lives in people's heads instead of in a system. A PM tool doesn't fix that. A PM tool gives one person a better way to hold it in their head.
What the RFS actually describes is infrastructure. A layer in the stack that sits between signal (customer calls, support tickets, usage data) and execution (Cursor, Claude Code, agents). Not a tool for a role. A system that any role—or any agent—can draw from.
Cursor isn't "VS Code for senior engineers." It's a new kind of editor for everyone who writes code. The thing the RFS describes shouldn't be "Productboard for PMs with AI." It should be a new layer in the stack for everyone who decides what gets built.
Mistake 2: Recommendations Are Not Contracts
The RFS imagines a tool that takes customer interviews and usage data and outputs "the outline of a new feature complete with an explanation based on customer feedback as to why this is a change worth making."
That's a recommendation. And recommendations are where product decisions go to die.
We've all been in the meeting. Someone presents a slide: "Based on user research, we recommend improving the checkout experience." Everyone nods. The ticket gets filed. The agent—or the engineer—reads "improve checkout UX" and builds the happy path. The actual problem—that users don't trust the payment step—never makes it into the implementation because a recommendation doesn't encode it.
This is why Jira tickets fail AI agents. The ticket is a recommendation, not a contract. A human engineer might ask "wait, what do you mean by 'improve'?" An agent won't. It'll read the recommendation literally and ship something that matches the words but misses the intent.
What agents need isn't a recommendation. It's a contract. Explicit objectives. Hard constraints. Edge cases. Verification criteria. The difference between "we should improve checkout" and a structured spec that defines what improvement means, what can't change, what happens when it fails, and how you'll know it worked.
The RFS gets the input right—customer interviews, usage data, product feedback. But it gets the output wrong. The output shouldn't be "the outline of a new feature." It should be a structured intent spec that an agent can execute against without a human in the loop explaining what the recommendation actually meant.
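To make the distinction concrete, here is a minimal sketch of what a contract-style intent spec might contain, versus the one-line recommendation. The schema, field names, and example values are illustrative assumptions, not a standard format:

```python
from dataclasses import dataclass

@dataclass
class IntentSpec:
    """Illustrative contract an agent could execute against (hypothetical schema)."""
    objective: str              # what "better" means, stated precisely
    evidence: list[str]         # friction signals that justify the work
    constraints: list[str]      # hard invariants the change must not break
    edge_cases: list[str]       # failure paths that must be handled
    success_criteria: list[str] # how we verify the friction decreased

# A recommendation: "improve checkout UX" — underspecified for an agent.
# The same intent as a contract (all values invented for illustration):
spec = IntentSpec(
    objective="Reduce drop-off at the payment step of checkout",
    evidence=[
        "Support tickets repeatedly mention distrust of the payment form",
        "Session recordings show abandonment spiking at card entry",
    ],
    constraints=[
        "Existing payment provider integration must not change",
        "Checkout must remain completable in under 90 seconds",
    ],
    edge_cases=[
        "Card declined: show a recovery path, preserve the cart",
        "Network timeout mid-payment: no double charge",
    ],
    success_criteria=[
        "Payment-step abandonment drops below the agreed threshold within 30 days",
    ],
)
```

The point of the structure is that every field is checkable: a reviewer can dispute the evidence, and an agent can verify its output against the constraints and edge cases instead of interpreting the word "improve."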
What This Actually Looks Like
If you fix both mistakes—make it a stack layer instead of a persona tool, and produce contracts instead of recommendations—you get something specific:
Start from friction, not features. The input shouldn't be "what feature should we build?" It should be "where does the user experience break?" Support tickets, research transcripts, session recordings, analytics—these are raw material. Features are conclusions. Conclusions without evidence are opinions with deadlines.
Structure intent, not summaries. The output shouldn't be a feature recommendation or an "outline." It should be a structured spec with explicit objectives, constraints, edge cases, and success criteria—something a human can review and an agent can execute against. Not a narrative that sounds convincing. A contract that's testable.
Make the "why" persistent. Every spec should trace back to the friction that justified it. Not in someone's head. Not in a deleted Slack thread. In the system. So when the third engineer joins in month two, or when a new agent picks up the task, the context survives. This is what prevents the Vibe Coding Hangover—not better prompts, but persistent intent.
Close the loop. After shipping, the question isn't "does this feel good?" It's "did the friction decrease?" If you can't measure whether a feature solved the problem it was built for, you're navigating by vibes—regardless of how sophisticated your tools are.
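Closing the loop can be as simple as re-checking, after shipping, the same friction metric that justified the spec. The function and thresholds below are a hypothetical sketch, not a prescribed implementation:

```python
def friction_resolved(baseline: float, current: float, target: float) -> bool:
    """Check a spec's success criterion: did the measured friction fall to target?

    baseline: friction metric before the change (e.g. payment-step abandonment rate)
    current:  the same metric measured after shipping
    target:   the threshold the spec declared as success
    """
    return current <= target and current < baseline

# Spec said: abandonment must drop below 0.15 (it was 0.22 at baseline).
assert friction_resolved(baseline=0.22, current=0.14, target=0.15)      # shipped and worked
assert not friction_resolved(baseline=0.22, current=0.18, target=0.15)  # shipped, friction remains
```

The mechanics are trivial; the discipline is that the metric, its baseline, and its target all live in the spec, so "did it work?" has a defined answer rather than a vibe.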
The spec is becoming the product. Not the code. Not the recommendation. The structured definition of intent that precedes all of it. The system that produces those specs—grounded in evidence, structured for agents, traceable to user friction—is the Intent Layer. That's the thing the RFS is really asking for, even if the name points somewhere else.
The Real Signal
The best thing about the YC RFS isn't the name. It's the recognition that the bottleneck has moved upstream. The hardest part of building products isn't writing code anymore. It's deciding what to build, grounding that decision in evidence, and defining it precisely enough that agents can act on it without asking a human what you meant.
That's exactly right. And it's exactly why the framing matters.
Founders who build "Cursor for PMs" will build a persona tool that outputs recommendations. Founders who build what the RFS describes will build a stack layer that outputs contracts. Same RFS, two completely different products. The description is the opportunity. The name is the trap.
The question was never "how do we help PMs work faster?"
It was always "how do we close the gap between user friction and AI execution?" That's not a PM tool. It's infrastructure. And the output isn't a recommendation. It's a spec.
Don't Just Write Code. Define Intent.
Turn user friction into structured Intent Specs that drive your AI agents.
Get Started for Free