The PRD Problem
Product requirements documents are the connective tissue between what you want to build and what actually gets built. When they are good, engineering teams move fast and build the right thing. When they are bad — vague, incomplete, or contradictory — teams waste weeks building the wrong thing.
The problem is that writing good PRDs is slow. A thorough PRD for a medium-complexity feature can take a week of focused work: researching user needs, defining requirements, specifying edge cases, writing acceptance criteria, and getting stakeholder alignment. Multiply this by every feature in your roadmap and the PM becomes a bottleneck.
AI does not replace the thinking that goes into a PRD. But it dramatically accelerates the writing. The strategic decisions — what to build, for whom, and why — remain yours. The mechanical work of structuring those decisions into a comprehensive document is where AI saves hours.
The Framework: AI-Assisted PRD Writing
I use a five-phase framework that combines human decision-making with AI execution.
Phase 1: Problem Definition (Human-Led)
Before touching any AI tool, answer these questions yourself:
- What problem are we solving?
- Who experiences this problem?
- How do they currently work around it?
- Why is now the right time to solve it?
- What does success look like?
These questions cannot be outsourced to AI because they require understanding your users, your market, and your business context. Write rough, unpolished answers. The structure and polish come later.
Phase 2: Research Synthesis (AI-Assisted)
Feed your AI tool all the raw inputs:
- Customer interview notes
- Support ticket themes
- Competitive analysis
- Analytics data showing the problem's impact
- Internal Slack discussions about the feature
Ask AI to synthesize these inputs into:
- A problem statement
- User personas affected
- Key pain points ranked by frequency and severity
- Quantified impact of the problem
The AI is not deciding what matters — you already did that in Phase 1. It is organizing the evidence that supports your decisions.
Phase 3: Requirements Generation (AI-Drafted, Human-Edited)
This is where AI saves the most time. Provide your problem definition and research synthesis, then ask AI to generate:
Functional requirements:
- Core features needed to solve the problem
- User workflows and interaction patterns
- Data requirements and system integrations
- Permissions and access control
Non-functional requirements:
- Performance expectations (response times, throughput)
- Scalability considerations
- Security and compliance requirements
- Accessibility requirements
Edge cases and error states:
- What happens when things go wrong?
- How should the system handle unexpected inputs?
- What are the failure modes and recovery paths?
AI is excellent at generating comprehensive requirement lists because it draws on patterns from thousands of similar products. It will suggest requirements you would not have thought of, especially edge cases.
Your job is to edit ruthlessly. Remove requirements that do not apply. Add requirements specific to your context. Prioritize based on your understanding of what matters.
Phase 4: Acceptance Criteria (AI-Generated)
For each requirement, AI can generate acceptance criteria in Given/When/Then format:
- Given a user on the free plan, when they try to access this feature, then they see an upgrade prompt
- Given a user with admin permissions, when they configure the feature, then the settings persist across sessions
- Given the system is processing more than the expected number of concurrent requests, when a new request arrives, then it is queued rather than rejected
Acceptance criteria are especially well-suited for AI generation because they follow a predictable format and the main challenge is comprehensiveness — covering all the scenarios. AI is tireless at generating scenarios.
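Testability is the point of the format: each criterion maps directly onto a check. As a minimal sketch, the first criterion above could become an automated test (the names `Plan` and `check_feature_access` are illustrative, not from any real codebase):

```python
from enum import Enum

class Plan(Enum):
    FREE = "free"
    PRO = "pro"

def check_feature_access(plan: Plan) -> str:
    # Given a user on the free plan, when they try to access this feature,
    # then they see an upgrade prompt; paid plans get the feature itself.
    return "upgrade_prompt" if plan is Plan.FREE else "feature"

assert check_feature_access(Plan.FREE) == "upgrade_prompt"
assert check_feature_access(Plan.PRO) == "feature"
```

If a criterion cannot be translated into something like this, it is probably too vague to ship to engineering.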
Phase 5: Review and Refinement (Human-Led)
The AI-generated PRD is a strong first draft, not a finished document. Review it from four perspectives:
- Read it as an engineer would. Is anything ambiguous? Are there gaps that would force an engineer to make assumptions?
- Read it as a designer would. Does the document give enough context for a designer to create mockups? Are the user flows clear?
- Read it as a stakeholder would. Is the rationale clear? Can someone understand why this feature matters without additional context?
- Read it as a QA engineer would. Are the acceptance criteria testable? Can someone write test cases from this document?
Each perspective reveals different gaps. Fix them before sharing the document.
Prompt Strategies That Work
The Context Dump
The more context you give AI, the better the output. Instead of asking "write a PRD for a notification system," provide:
- The product context (what your product does, who uses it)
- The problem context (why notifications matter, what is broken today)
- The constraint context (technical limitations, timeline, team size)
- The success context (what metrics will improve)
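In practice the context dump is just structured prompt assembly. Here is a minimal sketch of a builder that stitches the four context types above into one prompt; the function name and section labels are illustrative, and the actual contents would come from your own notes:

```python
def build_prd_prompt(product: str, problem: str, constraints: str, success: str) -> str:
    """Assemble a context-dump prompt from the four context types."""
    sections = [
        ("Product context", product),
        ("Problem context", problem),
        ("Constraints", constraints),
        ("Success metrics", success),
    ]
    body = "\n\n".join(f"## {title}\n{text}" for title, text in sections)
    return (
        "You are helping draft a PRD. Use the context below.\n\n"
        + body
        + "\n\nDraft the problem statement and goals."
    )
```

The structure matters more than the wording: a model given labeled sections produces noticeably less generic output than one given the same facts in an unstructured blob.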
The Iterative Approach
Do not try to generate the entire PRD in one prompt. Break it into sections:
- Generate the problem statement and goals
- Review and refine
- Generate the requirements based on the refined problem statement
- Review and refine
- Generate acceptance criteria for each approved requirement
- Review and refine
Each iteration builds on the previous one, producing a more coherent document than a single-shot approach.
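The loop above can be sketched as a simple chain where each AI call consumes the human-reviewed output of the previous one. Everything here is hypothetical scaffolding: `call_llm` stands in for whatever AI tool you use, and `review` is the human edit step (identity if you accept the draft as-is):

```python
def draft_prd_iteratively(call_llm, review):
    """Chain three generation passes, pausing for human review between each.

    call_llm: callable(prompt: str) -> str  -- your AI tool (hypothetical stub)
    review:   callable(draft: str) -> str   -- human edits; identity if accepted
    """
    problem = review(call_llm(
        "Draft the problem statement and goals."))
    requirements = review(call_llm(
        "Given this problem statement:\n" + problem +
        "\nGenerate functional and non-functional requirements."))
    criteria = review(call_llm(
        "For each of these requirements:\n" + requirements +
        "\nWrite Given/When/Then acceptance criteria."))
    return {"problem": problem, "requirements": requirements, "criteria": criteria}
```

The key property is that the refined problem statement, not the raw one, feeds the requirements prompt, which is what keeps the sections coherent with each other.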
The Devil's Advocate
After generating your PRD, ask AI to critique it:
- "What requirements are missing from this PRD?"
- "What edge cases have I not considered?"
- "If you were an engineer, what questions would you have after reading this?"
- "What could go wrong with this feature that is not addressed in the document?"
This adversarial prompting catches gaps that the generative prompting missed.
Common Mistakes When Using AI for PRDs
Accepting AI Output Without Editing
AI-generated PRDs read well but often lack the specificity that your team needs. Generic requirements like "the system should be fast" need to be replaced with specific ones like "the API response time should be under 200 milliseconds at the 95th percentile."
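A 95th-percentile target like that is checkable, which is what makes it a real requirement. As a sketch, here is one common way to compute p95 from a sample of latencies (the nearest-rank method; production monitoring tools may use interpolation instead):

```python
import math

def p95_ms(latencies_ms):
    """95th-percentile latency: the value 95% of samples fall at or below.

    Uses the nearest-rank method: the ceil(0.95 * n)-th smallest sample.
    """
    ordered = sorted(latencies_ms)
    rank = math.ceil(0.95 * len(ordered))
    return ordered[rank - 1]

# For samples 1..100 ms, the 95th percentile is 95 ms.
assert p95_ms(range(1, 101)) == 95
```

Writing the requirement this way also forces you to name the measurement window and load conditions, which "the system should be fast" conveniently avoids.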
Over-Specifying Implementation
AI tends to specify implementation details ("use a Redis cache" or "implement a WebSocket connection") when the PRD should focus on requirements. Strip out implementation suggestions unless they are genuine constraints.
Ignoring Organizational Context
AI does not know that your team has a complicated history with feature flags, or that your database migration process takes two weeks. Add organizational context that affects how the feature will be built.
Skipping Stakeholder Review
An AI-generated PRD that looks comprehensive can create false confidence. Always get engineering, design, and business stakeholder review before committing.
The Template
Here is the PRD structure I use, with notes on which sections AI handles best:
- Overview and Problem Statement — Human-led, AI-refined
- Goals and Success Metrics — Human-led, AI can suggest additional metrics
- User Stories and Personas — AI-drafted from research inputs
- Functional Requirements — AI-drafted, human-edited
- Non-Functional Requirements — AI-drafted, human-edited
- Edge Cases and Error Handling — AI excels here
- Acceptance Criteria — AI-generated, human-validated
- Out of Scope — Human-led (AI tends to expand scope, not constrain it)
- Dependencies and Risks — Human-led with AI assistance on risk identification
- Timeline and Milestones — Human-led
FAQ
How much time does AI actually save on PRD writing?
In my experience, AI reduces the writing time by roughly half to two-thirds. The thinking time stays the same — you still need to understand the problem and make strategic decisions. But the mechanical work of structuring, writing, and ensuring completeness is dramatically faster.
Will engineers take AI-generated PRDs seriously?
Engineers care about clarity and completeness, not how the document was written. If the PRD is specific, unambiguous, and addresses edge cases, it does not matter whether AI helped write it. If anything, AI-assisted PRDs tend to be more comprehensive than manually written ones.
Should I tell my team the PRD was AI-assisted?
Yes. Transparency builds trust. And practically, letting your team know encourages them to flag anything that feels generic or under-specified — which improves the final document.
Can AI help maintain and update existing PRDs?
Absolutely. Feed the current PRD and the proposed changes to AI and ask it to produce an updated version with tracked changes. This is especially useful for living documents that evolve throughout development.
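If your PRD lives in plain text or markdown, you can also produce a reviewable change record yourself, without AI, using Python's standard-library difflib. A minimal sketch with placeholder requirement text:

```python
import difflib

# Old and new versions of one PRD requirement (placeholder content).
old_prd = ["Notifications are sent by email only.\n"]
new_prd = ["Notifications are sent by email and in-app.\n"]

# unified_diff yields a patch-style view: removed lines prefixed with
# "-", added lines with "+", ready to paste into a review thread.
diff = list(difflib.unified_diff(old_prd, new_prd,
                                 fromfile="prd_v1.md", tofile="prd_v2.md"))
print("".join(diff))
```

Pairing a mechanical diff like this with an AI-written summary of the changes gives reviewers both the exact edits and the rationale.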