Writing Product Specs That AI Can Actually Build
The fastest way to waste time with AI coding tools is to hand them a vague spec.
You ask for "a clean dashboard," get a generic layout, and then spend hours correcting assumptions. The issue is not that AI cannot code. The issue is that most specs do not contain enough decision-quality context.
If you want high-quality implementation from AI, you need specs that reduce ambiguity before generation starts.
Here is the format I use.
1. Start With Outcomes, Not Features
Most specs begin with a feature checklist. That is backwards.
Start with:
- user outcome
- business outcome
- success metric
Example:
- User outcome: "A founder can publish a weekly product update in under 15 minutes."
- Business outcome: "Increase weekly activation from 34% to 50%."
- Success metric: "Median time from blank state to published update under 12 minutes."
Features are implementation candidates. Outcomes are the reason those features exist.
2. Define Explicit Scope and Non-Goals
AI tools fill silence with assumptions. If you do not define boundaries, you get accidental scope creep in code.
Always include:
- in-scope workflows
- out-of-scope workflows
- known constraints (timeline, infra, compliance)
Example non-goal:
- "No mobile app in v1"
- "No multi-workspace support in v1"
- "No role hierarchy beyond owner/member"
Non-goals are quality tools, not pessimism.
3. Write User Stories With Acceptance Criteria
A user story without acceptance criteria is just a wish.
Use this structure:
- Story: As a [persona], I want [capability], so that [outcome].
- Acceptance criteria: testable, unambiguous statements.
Example:
- Story: As a content lead, I want to generate a first draft from release notes so I can publish faster.
- Acceptance criteria:
- Draft generation completes in under 20 seconds at p95.
- Generated draft includes headline, summary, and 3 key updates.
- User can regenerate any section independently.
- System surfaces source snippets used for each key update.
If a criterion cannot be tested, it is not ready.
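One way to pressure-test that rule: try expressing each criterion as a check that could run against real output. This is a minimal sketch; the `Draft` shape and `meetsStructureCriterion` helper are illustrative, not an API from the article.

```typescript
// Hypothetical draft shape matching the acceptance criteria above:
// headline, summary, 3 key updates, each with a source snippet.
interface Draft {
  headline: string;
  summary: string;
  keyUpdates: { text: string; sourceSnippet: string }[];
}

// Criterion as a testable check: if you cannot write a function like
// this for a criterion, the criterion is not ready.
function meetsStructureCriterion(draft: Draft): boolean {
  return (
    draft.headline.length > 0 &&
    draft.summary.length > 0 &&
    draft.keyUpdates.length === 3 &&
    draft.keyUpdates.every((u) => u.sourceSnippet.length > 0)
  );
}

const sample: Draft = {
  headline: "Release 1.4",
  summary: "Faster publishing flow",
  keyUpdates: [
    { text: "New editor", sourceSnippet: "notes#12" },
    { text: "Draft autosave", sourceSnippet: "notes#18" },
    { text: "Publish API", sourceSnippet: "notes#22" },
  ],
};

console.log(meetsStructureCriterion(sample)); // true
```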
4. Include State Models and Edge Cases
Many AI-generated implementations fail because state transitions were never defined.
For each key screen or flow, define states:
- idle
- loading
- success
- partial success
- validation error
- system error
Then define what users can do in each state.
Also include edge cases explicitly:
- missing inputs
- duplicate submissions
- timeout behavior
- interrupted network
- permission mismatch
The more state clarity you provide, the less rework you do later.
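A state model like the list above can be written down precisely in the spec itself. Here is a sketch as a TypeScript discriminated union; the state names mirror the list, while the field names and actions are assumptions for illustration.

```typescript
// Each screen state carries only the data that state needs.
type DraftGenState =
  | { kind: "idle" }
  | { kind: "loading"; startedAt: number }
  | { kind: "success"; draftId: string }
  | { kind: "partialSuccess"; draftId: string; failedSections: string[] }
  | { kind: "validationError"; messages: string[] }
  | { kind: "systemError"; retryable: boolean };

// Spelling out allowed user actions per state keeps the spec exhaustive:
// the compiler flags any state this switch forgets to handle.
function allowedActions(state: DraftGenState): string[] {
  switch (state.kind) {
    case "idle":
      return ["generate"];
    case "loading":
      return ["cancel"];
    case "success":
      return ["regenerateSection", "publish"];
    case "partialSuccess":
      return ["retryFailedSections", "publish"];
    case "validationError":
      return ["fixInputs"];
    case "systemError":
      return state.retryable ? ["retry"] : ["contactSupport"];
  }
}

console.log(allowedActions({ kind: "idle" })); // ["generate"]
```

Even if the implementation ends up elsewhere, writing the union in the spec forces the edge cases into the open before generation starts.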
5. Specify Data Contracts Early
When specs avoid data details, implementation quality drops quickly.
You do not need final schema on day one, but you do need:
- core entities
- required fields
- field types
- ownership boundaries
- lifecycle rules
Example:
- Entity: PostDraft
- Fields: id, workspace_id, source_ids[], content_markdown, status, created_at, updated_at
- Lifecycle: draft -> review -> published | archived
For APIs, include request/response examples. AI performs better when data shape is concrete.
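The entity above translates directly into a typed contract. This is a dependency-free sketch: the field names follow the example, while the transition table and `canTransition` helper are illustrative ways to encode the lifecycle rule.

```typescript
type DraftStatus = "draft" | "review" | "published" | "archived";

// PostDraft fields from the entity definition above.
interface PostDraft {
  id: string;
  workspace_id: string;
  source_ids: string[];
  content_markdown: string;
  status: DraftStatus;
  created_at: string; // ISO 8601
  updated_at: string;
}

// Lifecycle encoded explicitly: draft -> review -> published | archived.
const transitions: Record<DraftStatus, DraftStatus[]> = {
  draft: ["review"],
  review: ["published", "archived"],
  published: [],
  archived: [],
};

function canTransition(from: DraftStatus, to: DraftStatus): boolean {
  return transitions[from].includes(to);
}

console.log(canTransition("draft", "review")); // true
console.log(canTransition("published", "draft")); // false
```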
6. Add an "Implementation Notes for AI" Section
This section massively improves output quality.
Include:
- preferred libraries/framework patterns
- architecture constraints
- forbidden shortcuts
- file organization expectations
- testing requirements
Example:
- Use server actions for write operations.
- Use Zod for runtime validation at boundaries.
- Keep external provider calls behind service modules.
- Avoid introducing new global state libraries.
- Include unit tests for transformation logic and integration tests for publish flow.
You are not over-prescribing. You are preventing expensive drift.
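Two of the notes above, validation at boundaries and provider calls behind service modules, can be sketched concretely. This is a dependency-free illustration: the spec example calls for Zod, but `parseDraftInput` hand-rolls the same idea here, and the commented-out provider call is hypothetical.

```typescript
interface DraftInput {
  workspaceId: string;
  sourceIds: string[];
}

// Boundary validation: reject malformed input before any business logic.
// In the real codebase this would be a Zod schema per the notes above.
function parseDraftInput(raw: unknown): DraftInput {
  const obj = raw as Partial<DraftInput> | null;
  if (typeof obj?.workspaceId !== "string" || obj.workspaceId.length === 0) {
    throw new Error("workspaceId is required");
  }
  if (
    !Array.isArray(obj.sourceIds) ||
    !obj.sourceIds.every((s) => typeof s === "string")
  ) {
    throw new Error("sourceIds must be a string array");
  }
  return { workspaceId: obj.workspaceId, sourceIds: obj.sourceIds };
}

// Service module: the rest of the app never talks to the provider directly,
// so swapping providers touches exactly one file.
const publishService = {
  async publish(input: unknown): Promise<{ ok: boolean }> {
    const valid = parseDraftInput(input); // validate at the boundary
    // return publishViaProvider(valid);  // hypothetical external call, lives only here
    return { ok: valid.sourceIds.length > 0 };
  },
};
```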
7. Define Evaluation and Rollout Plan
A spec should answer "How do we know this worked?"
Include:
- experiment or release plan
- instrumentation events
- rollout strategy
- rollback conditions
Example:
- Rollout to 10% of users for one week.
- Track draft completion rate, publish rate, correction count.
- Roll back if publish errors exceed 1.5% or time-to-draft increases above baseline.
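Rollback conditions are easiest to honor when they are written as an explicit guard rather than prose. A sketch, using the thresholds from the example above; the metric field names are assumptions, and fetching the metrics from your analytics pipeline is left out.

```typescript
interface RolloutMetrics {
  publishErrorRate: number;        // fraction, e.g. 0.012 = 1.2%
  timeToDraftMs: number;           // current time-to-draft
  baselineTimeToDraftMs: number;   // pre-rollout baseline
}

function shouldRollBack(m: RolloutMetrics): boolean {
  // Roll back if publish errors exceed 1.5% ...
  if (m.publishErrorRate > 0.015) return true;
  // ... or time-to-draft increases above baseline.
  return m.timeToDraftMs > m.baselineTimeToDraftMs;
}

console.log(
  shouldRollBack({
    publishErrorRate: 0.012,
    timeToDraftMs: 9000,
    baselineTimeToDraftMs: 10000,
  })
); // false
```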
This keeps implementation anchored to outcomes, not vibes.
8. The Practical Template
Here is a condensed template you can reuse:
Product Spec Template
- Problem statement
- User and business outcomes
- Scope and non-goals
- User stories + acceptance criteria
- UX states and edge cases
- Data model and API contract
- Implementation notes for AI
- Test strategy
- Rollout + instrumentation
- Open questions
If a section is empty, your implementation risk goes up.
9. Common Spec Anti-Patterns
"Make it modern"
Ambiguous style requests produce generic output. Define visual direction and examples.
"Should be scalable"
Scalability without target load is meaningless. Add expected traffic and performance boundaries.
"Needs good UX"
Define key tasks, completion speed targets, and failure behavior.
"Use best practices"
List specific best practices relevant to your stack.
AI is literal. Precision wins.
Final Thought
AI coding tools are force multipliers for clear thinkers. They are not substitutes for product clarity.
When your spec is sharp:
- implementation quality goes up
- review cycles get shorter
- regressions decrease
- team alignment improves
Good specs are not bureaucracy. They are leverage.
If you want AI to build what you actually mean, write specs that make your intent testable.