How to Write a PRD an AI Builder Can Actually Execute

By rik · 5 min read · April 30, 2026

Why this matters

You've got the idea. You open Lovable, Cursor, or Claude and paste in your spec. The AI starts building — and it builds the wrong thing. It ignores your stack, ships a feature you explicitly didn't want, and structures the files in a way that will take days to untangle.

The problem wasn't the idea. The spec was written for a human reader, not an AI executor, and that difference is real.

AI builders don't fill gaps with good judgment the way a senior engineer would. They fill gaps with the most statistically probable answer — which is often wrong for your specific context. A PRD for an AI builder has to eliminate gaps. This guide shows you exactly how.

The setup

You need a text editor and about 20 minutes. No special tooling required — the template is plain markdown. You'll use it by pasting it into your tool's first prompt (Lovable, Claude with context management), or committing it to your repo root as PRD.md for Cursor to index automatically.

The template covers four sections that most specs skip entirely: goal in one sentence, anti-goals, technical constraints, and a file/route map. Add real examples and acceptance criteria on top of that and you have something an AI can actually execute against.

Step 1: Pin the goal in one sentence

The most common PRD mistake: the goal section is three paragraphs of background. Background is for humans. An AI builder needs a single crisp statement of what this product does for whom.

Bad goal: "We are building a platform that will help teams improve collaboration and productivity by providing a centralized hub for task management, communication, and project visibility across distributed teams, enabling better workflows and accountability."

Good goal: "A lightweight task manager where solo founders can create projects, add tasks with due dates and status, and see everything due this week on one dashboard."

The formula: [product type] where [target user] can [core actions] so that [outcome].

One sentence forces prioritization. If you can't write it in one sentence, you haven't decided what you're building yet — and neither will the AI.

Read your one-sentence goal out loud. If it could describe ten different apps, it's too vague. Keep narrowing until only your app fits.

Step 2: Define anti-goals and constraints

Anti-goals are what separates a PRD for an AI builder from every other spec format. This section tells the AI what NOT to build — and it's the highest-leverage section in the whole document.

Without anti-goals, the AI will invent scope. It will add a notification system you didn't ask for, an admin panel you don't need, and a settings page that breaks the file structure you planned. Every addition is a chance for something to go wrong.

Write anti-goals as plain bullets:

  • No team/multi-user features in v1 — this is single-user only
  • No email notifications
  • No drag-and-drop reordering (add later)
  • No dark mode toggle
  • No mobile-first layout — desktop only for now

Constraints live here too. Stack, deploy target, database — spell it out explicitly:

  • Stack: Next.js App Router, TypeScript, Tailwind, Supabase (Postgres + Auth + RLS)
  • Deploy: Vercel
  • Auth: Supabase Auth email/password only — no OAuth in v1
  • No external APIs except Supabase

Constraints save you from the AI picking the most popular library instead of the one already in your stack. See specs vs vibes for why this distinction kills more builds than anything else.

Step 3: Map files, routes, and data

This is the section that turns a vague spec into an executable build plan. For any app with more than three screens, a file/route map is non-negotiable.

AI builders make architectural decisions fast and early. If you don't specify structure, they'll pick one — and it may conflict with how you planned to extend the app later. A route map anchors the architecture before a single line is written.

For a Next.js app:

## Routes
/                  → marketing landing page (static)
/login             → Supabase Auth email login
/dashboard         → tasks due this week (requires auth)
/dashboard/new     → create task form
/dashboard/[id]    → task detail + edit

## Data model
tasks
  id: uuid
  title: text (required)
  status: 'todo' | 'in_progress' | 'done'
  due_date: date (nullable)
  user_id: uuid (FK → auth.users)
  created_at: timestamptz

## Key files
app/dashboard/page.tsx       → task list, filtered by this week
app/dashboard/new/page.tsx   → task creation form
app/dashboard/[id]/page.tsx  → task detail
lib/supabase/client.ts       → browser client
lib/supabase/server.ts       → server client (RSC + route handlers)

You don't need to be exhaustive. You need to be specific about the parts that matter most. The AI fills in the rest — and with this map as ground truth, it fills in correctly.
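The data model above can also be mirrored in TypeScript so the code the AI generates stays in sync with the PRD. A minimal sketch, assuming due dates and timestamps arrive as ISO strings; the `Task` interface and `isValidStatus` helper are illustrative names, not part of the template:

```typescript
// Mirrors the `tasks` table from the data model above.
type TaskStatus = 'todo' | 'in_progress' | 'done';

interface Task {
  id: string;              // uuid
  title: string;           // required
  status: TaskStatus;
  due_date: string | null; // date, nullable (ISO string here)
  user_id: string;         // FK → auth.users
  created_at: string;      // timestamptz (ISO string here)
}

// Runtime guard for the status union — useful when validating
// form input before it reaches the database.
function isValidStatus(value: string): value is TaskStatus {
  return value === 'todo' || value === 'in_progress' || value === 'done';
}
```

Putting the type in the PRD (or a shared `types.ts`) gives the AI one canonical shape to reuse across forms, queries, and components.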

Step 4: Add examples and acceptance criteria

Examples are concrete. Acceptance criteria are checkable. Both tell the AI builder something that abstract requirements never can: what "done" actually looks like.

Example interactions — describe the exact user flow for the two or three most important features:

User opens /dashboard. Sees a list of tasks with due_date ≤ 7 days from today, sorted by due_date ASC. Each row shows: title, status badge (color-coded), due date. Clicking a row navigates to /dashboard/[id].
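A flow written this precisely translates directly into a pure function the AI can implement and you can unit-test. A sketch, assuming due dates are stored as ISO date strings; `tasksDueThisWeek` and `DashboardTask` are illustrative names:

```typescript
interface DashboardTask {
  title: string;
  status: 'todo' | 'in_progress' | 'done';
  due_date: string | null; // ISO date string
}

// Tasks with due_date ≤ 7 days from `today`, sorted by due_date ASC.
// Tasks without a due_date are excluded, per the acceptance criteria.
function tasksDueThisWeek(tasks: DashboardTask[], today: Date): DashboardTask[] {
  const cutoff = new Date(today);
  cutoff.setDate(cutoff.getDate() + 7);
  return tasks
    .filter((t): t is DashboardTask & { due_date: string } =>
      t.due_date !== null && new Date(t.due_date) <= cutoff)
    .sort((a, b) => a.due_date.localeCompare(b.due_date));
}
```

Because the flow pinned down the filter ("≤ 7 days"), the sort ("due_date ASC"), and the null-handling, there is nothing left for the AI to guess.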

Acceptance criteria — one line per checkable condition:

  • [ ] Unauthenticated users hitting /dashboard are redirected to /login
  • [ ] Tasks without a due_date do not appear on the dashboard
  • [ ] Creating a task with no title shows a validation error inline, no toast
  • [ ] RLS policy ensures users can only read/write their own tasks
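Criteria written this way can be expressed as pure decision functions and checked without a browser. A sketch covering the first and third criteria above; the function names and the redirect logic are illustrative, not the only valid implementation:

```typescript
// Criterion 1: unauthenticated users hitting /dashboard
// (or any nested dashboard route) are sent to /login.
function resolveRoute(path: string, isAuthenticated: boolean): string {
  if (path.startsWith('/dashboard') && !isAuthenticated) return '/login';
  return path;
}

// Criterion 3: a task with no title fails validation with an
// inline error message (no toast).
function titleError(title: string): string | null {
  return title.trim() === '' ? 'Title is required' : null;
}
```

If a criterion can't be reduced to something this mechanical, it probably isn't checkable yet — rewrite it until it is.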

This combination — example + criteria — is what Claude, Lovable, and Cursor all use to verify their own output. Without it, they generate something plausible. With it, they generate something correct.

For more on how to feed this context effectively, see prompting patterns for code.


The template

Here's the full copy-paste template:

# PRD: [App Name]

## Goal
[One sentence: product type + target user + core actions + outcome]

## Target user
[One sentence: who this is for, their context, their level of technical comfort]

## Anti-goals (do NOT build these in v1)
- [Feature or scope to exclude]
- [Feature or scope to exclude]

## Constraints
- Stack: [e.g. Next.js App Router, TypeScript, Tailwind, Supabase]
- Deploy: [e.g. Vercel]
- Auth: [e.g. Supabase Auth, email/password only]
- [Any other hard constraints]

## Routes
/         → [description]
/route    → [description, auth requirement]

## Data model
table_name
  field: type (notes)

## Key files
path/to/file.tsx   → [what it does]

## Example interaction
[Walk through the most important user flow in plain English, step by step]

## Acceptance criteria
- [ ] [Checkable condition]
- [ ] [Checkable condition]
- [ ] [Checkable condition]

Common mistakes

Writing the goal for a pitch deck, not a build. Goals with words like "platform," "hub," and "ecosystem" signal that scope hasn't been decided yet. The AI will interpret broadly — and build broadly.

Skipping anti-goals because it feels negative. Anti-goals aren't pessimism. They're scope control. Every feature you don't exclude is a feature the AI might build. Lovable in particular will happily add a settings page, user roles, and email notifications if you leave the door open.

Listing the data model but skipping the route map. Data and routes are equally load-bearing for an AI builder. A schema without routes leaves the navigation architecture undefined. The AI will guess — and guesses compound across every file it touches.

Using acceptance criteria that check opinions, not facts. "Clean UI" is not checkable. "No more than three clicks to create a task" is. Write criteria that could pass or fail a CI check, even if you're not running one.

Pasting the PRD once and never updating it. Your PRD is a living context file. When scope changes, update the file and re-paste it. Tools like Cursor index your repo — if PRD.md is stale, the AI is operating on outdated ground truth. Check out iterative vibecoding workflow for how to handle evolving specs across a build session.

What's next

Once you have a working PRD format, the next bottleneck shifts to prompt quality during the build itself. Prompting patterns for code covers the techniques that work inside Cursor and Claude once your spec is locked.

If you're using Lovable specifically, the visual editor loop changes how you iterate once the scaffold is up — Lovable 2026 workflow walks through the full build cycle from first prompt to custom domain.


Related Articles

Prompting Patterns for Code That Actually Ships

Vague prompts produce vague code. These five structured patterns — spec-first, constraint injection, example-driven, file-map, and retry-with-diff — are what separates the builders who ship from the ones who spend three hours cleaning up hallucinated diffs.

6 min read · Apr 30, 2026