Nate's newsletter nailed an observation that's been brewing all year: "prompting" is no longer one skill. When agents ran inside a chat window, prompting was simple — you typed, you saw the output, you adjusted. Now agents run autonomously for hours. The feedback loop is gone. And the industry's response has been to fragment "prompting" into four sub-disciplines: specification engineering, constraint architecture, problem statement rewriting, and context engineering.
Here's the problem with that framing: when a skill fragments into four sub-skills, that's usually a sign it's about to be replaced, not refined.
The Wrong Unit of Work
Every framework for "better prompting" shares an assumption: that the conversation is the unit of work. You talk to the agent. The agent responds. You refine. The skill is in the talking.
That assumption broke in February 2026.
When Claude Code runs for 35 minutes on a feature branch, there is no conversation. When a Cursor agent executes across twelve files, it doesn't stop to ask you what you meant. The feedback loop that made prompting work — type, see, adjust — is gone. You don't see the output until the agent has already committed to an approach.
The failure mode here isn't "bad prompt." It's "missing specification." The agent executed exactly what you said. You just didn't say enough. And by the time you noticed, it had spent 35 minutes building confidently in the wrong direction.
Optimizing how you talk to the agent doesn't fix this. The unit of work is no longer a conversation turn. It's a specification artifact.
Four Skills or One Gap?
The 4-skill framework — specification engineering, constraint architecture, problem statement rewriting, and context engineering — isn't wrong. These are all real things teams need to do. But treating them as separate disciplines obscures what they actually are: four symptoms of the same gap.
You didn't define intent before execution.
An intent spec already contains all four natively. The specification is in the outcomes. The constraints are in the edge cases. The problem statement is the objective. The context is the evidence. It's one artifact, structured once, before the agent starts.
The 4-skill framing says "learn to talk to agents in four different ways." Intent engineering says "stop talking and start specifying."
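To make the "one artifact" claim concrete, here's a minimal sketch of what an intent spec could look like as a typed structure. The field names and the `isHandoffReady` check are illustrative assumptions, not a standard format:

```typescript
// Hypothetical shape of an intent spec as a single artifact.
// Each field maps to one of the four "sub-skills":
interface IntentSpec {
  objective: string;      // problem statement, grounded in evidence
  outcomes: string[];     // specification: measurable end states
  constraints: string[];  // constraint architecture: what must not change
  edgeCases: string[];    // enumerated up front so the agent never guesses
  verification: string[]; // machine-checkable acceptance criteria
}

// A spec is ready to hand to an autonomous agent only when
// no section has been left implicit.
function isHandoffReady(spec: IntentSpec): boolean {
  return (
    spec.objective.trim().length > 0 &&
    [spec.outcomes, spec.constraints, spec.edgeCases, spec.verification]
      .every((section) => section.length > 0)
  );
}
```

The point of the shape isn't the types themselves — it's that an empty `edgeCases` array is visible before the agent starts, instead of surfacing 35 minutes into an autonomous run.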
What This Looks Like in Practice
A team I work with had a classic problem: their dashboard loaded in 4.2 seconds. Users complained. The PM wanted it fixed.
The 4-skill approach would have the developer craft a careful prompt: specify the performance target (specification engineering), define what can't change (constraint architecture), reframe from "make it faster" to a concrete problem (problem rewriting), and provide relevant codebase context (context engineering). Four skills, one prompt, sent to the agent.
The prompt might look like this:
"Optimize the dashboard to load under 2 seconds. Don't change the data model or API contracts. The main bottleneck is the analytics widget which makes 6 sequential API calls. The dashboard uses React with SWR for data fetching. Focus on the analytics section first."
That's a good prompt. Genuinely. It'll produce decent code.
The intent engineering approach skips the prompt entirely and writes a spec:
- Objective: Dashboard p95 load time is 4.2s. Users in the weekly NPS survey cite "slow dashboard" as the #1 frustration. Analytics widget makes 6 sequential API calls that account for 3.1s of the total load.
- Outcomes: Dashboard p95 load time under 1.5s. Analytics widget loads in a single parallelized request or uses server-side aggregation. Perceived load time under 800ms via skeleton UI. No regression in data accuracy.
- Constraints: No changes to the underlying data model or public API contracts. Must remain compatible with the existing SWR caching layer. Cannot increase the monthly Vercel bill by more than 15%.
- Edge Cases: Dashboard with zero data (new accounts) must still load under 1.5s. Widget must degrade gracefully if one of the six data sources is unavailable — show partial data, not a spinner. Time-range selector changes must not trigger a full page re-render.
- Verification: Lighthouse CI in the deploy pipeline confirms p95 under 1.5s across 10 runs. Integration test confirms partial data rendering when one source returns 500. Visual regression test confirms skeleton UI renders within 200ms.
The prompt gave the agent a task. The spec gave the agent a contract. The difference shows up in what happens after the first pass: the prompt-driven agent will produce something that's probably faster but needs three rounds of review to catch the edge cases. The spec-driven agent either meets the contract or it doesn't — and the verification criteria tell you which.
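Two of the spec's clauses — "a single parallelized request" and "show partial data, not a spinner" — can be sketched together. Everything below is illustrative: the function names (`loadWidgetData`, `mergePanels`) and panel shapes are hypothetical, not taken from the team's codebase:

```typescript
type Panel = { panel: string; rows: number[] };

// Pure merge step: keep the panels that loaded, flag the ones that didn't.
function mergePanels(results: PromiseSettledResult<Panel>[]) {
  const data: Record<string, number[]> = {};
  const degraded: string[] = [];
  results.forEach((r, i) => {
    if (r.status === "fulfilled") data[r.value.panel] = r.value.rows;
    else degraded.push(`source-${i}`);
  });
  return { data, degraded };
}

// The six sequential calls collapse into one parallel round trip.
// Promise.allSettled (unlike Promise.all) never rejects wholesale,
// so one failing source can't blank the whole widget.
async function loadWidgetData(
  fetchers: Array<() => Promise<Panel>>
) {
  const results = await Promise.allSettled(fetchers.map((f) => f()));
  return mergePanels(results);
}
```

Note the design choice the spec forces: `Promise.allSettled` instead of `Promise.all`. A prompt-driven agent optimizing for speed would plausibly reach for `Promise.all` — faster, simpler, and wrong the moment one of the six sources returns a 500. The edge-case clause in the spec makes that shortcut a contract violation instead of a judgment call.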
The 35-Minute Wall
The newsletter calls it the "35-minute wall" — the point where an autonomous agent has been running long enough that correcting its trajectory costs more than starting over. This is real, and it's the clearest evidence that prompting as an interface is hitting its limits.
When you prompt interactively, the wall doesn't exist. You course-correct continuously. Every response is a checkpoint.
When the agent runs autonomously, the wall is the cost of ambiguity. Every requirement you left implicit, every constraint you assumed was obvious, every edge case you didn't enumerate — the agent makes a decision about each one. Usually the wrong one. And those decisions compound across 35 minutes of autonomous execution.
The solution isn't better prompts at the start. It's a specification that makes the ambiguity impossible. A contract the agent can execute against without making judgment calls about what you meant.
Why the Fragmentation Is a Signal
Skills fragment when the underlying paradigm is under stress. "Web development" fragmenting into front-end, back-end, and DevOps was growth — each discipline deepened. "Prompting" fragmenting into four sub-skills isn't the same pattern. It's a skill trying to cover a surface area it was never designed for.
Prompting was designed for interactive, conversational AI. You talk. It responds. You adjust. The entire paradigm assumes a human in the loop, evaluating every output, providing continuous feedback.
Autonomous agents break that paradigm. And when the paradigm breaks, the skill that served it doesn't evolve — it gets replaced by something at a higher level of abstraction. Not "prompt better in four ways" but "specify once and verify."
The industry is converging on this from different angles. Tobi Lutke calls it context engineering. Nate maps it as four distinct competencies. We've been calling it intent engineering. The name doesn't matter. What matters is the recognition: when building is cheap, specification is the bottleneck. And a bottleneck isn't solved by optimizing the thing that came before it.
Your prompts aren't failing because you haven't learned the four new skills. They're failing because prompts were never designed to carry the weight of a specification.
Don't Just Write Code. Define Intent.
Turn user friction into structured Intent Specs that drive your AI agents.
Get Started for Free