Prompting Patterns for Code That Actually Ships
Why this matters
Most AI coding sessions fail at the prompt, not the model. You type "build me a settings page" and get back a component that imports the wrong library, edits files you didn't want touched, and ignores your existing patterns. The model didn't fail — you gave it an underspecified job.
AI code prompting patterns are the fix: repeatable structures that give the model enough context to reason well — goal, scope, constraints, and an example. Once you have them, you stop rewriting generated code and start shipping it.
These patterns work with Claude, Cursor, and Claude Code. The XML tag examples are Claude-specific but the structure translates anywhere.
The setup
Before you prompt, have two things ready: a rough file map (which files are in scope) and one example of an existing pattern to match. These two inputs do more work than any prose description. For how spec quality shapes output at a higher level, the how to write a PRD for AI guide covers the same front-loading principle.
Step 1: Lead with the goal and the file map
The single most effective structural change you can make is putting your goal and scope in the first three lines. Models attend most strongly to early context. If your first sentence is "I have a Next.js app and I need to..." you've wasted your lead.
Instead, use the spec-first pattern: Goal, Constraints, Anti-goals, File map. Claude responds especially well when you wrap these in XML tags because it removes ambiguity about what's instruction versus context.
<goal>
Add a "Copy link" button to the app card component that copies the app's public URL to the clipboard.
</goal>
<constraints>
- Use the existing useToast hook for the success message
- Match the button variant used in AppCard's existing action row
- No new dependencies
</constraints>
<anti_goals>
- Do not refactor AppCard's layout
- Do not touch /app/[handle]/page.tsx
- Do not add a new utility file
</anti_goals>
<file_map>
- src/components/AppCard.tsx — primary change target
- src/hooks/useToast.ts — reference only, do not edit
- src/lib/urls.ts — getAppPublicUrl() lives here
</file_map>
The <anti_goals> block is the part most builders skip. It's also the part that prevents the majority of messy diffs — the model now knows what's out of scope before it starts planning.
Keep <file_map> honest. If you list a file as "reference only, do not edit," Claude will respect that. If you forget to list a file that gets changed anyway, that's your map being incomplete — not the model going rogue.
Step 2: Inject constraints and anti-goals
Constraints and anti-goals deserve their own pass because they're doing different work. Constraints define what the output must be. Anti-goals define what the model must not do, which is often more important.
Common constraint categories worth spelling out:
- Dependency constraints — "never add a new npm package", "only use deps already in package.json"
- File constraints — "do not touch any file in /api/", "only edit files I list"
- Pattern constraints — "use the same error handling as getServerSideProps in /dashboard/page.tsx"
- Style constraints — "no inline styles, Tailwind classes only", "no default exports"
In Cursor, these map directly to .cursorrules. In Claude Code, they belong in your CLAUDE.md for project-wide rules and in the per-task prompt for task-specific ones. Don't rely on the model to infer your constraints from the codebase — state them.
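As a concrete sketch, project-wide rules in a CLAUDE.md might look like the following. The section names and specific rules here are illustrative, not a required format — CLAUDE.md is free-form markdown that Claude Code reads as standing instructions:

```markdown
# CLAUDE.md — project-wide constraints (illustrative example)

## Dependencies
- Never add a new npm package. Use only deps already in package.json.

## Files
- Do not edit anything under src/api/.
- Only edit files explicitly listed in the task prompt.

## Patterns
- Match the error handling used in src/app/dashboard/page.tsx.

## Style
- Tailwind classes only; no inline styles.
- Named exports only; no default exports.
```

Task-specific constraints (like "don't touch AppCard's layout") still go in the per-task prompt — putting them in CLAUDE.md would make them permanent rules.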
For a related breakdown of how to think about what to constrain, the specs vs vibes article covers when explicit constraints pay off versus when they get in the way.
Step 3: Anchor with examples
Few-shot examples beat descriptions every time. Instead of explaining what you want, show it. The example-driven pattern gives the model a concrete template to match — shape, naming, error handling, and all.
The key is picking the right existing component or function. Find something in your codebase that's doing the closest thing to what you need and drop it inline:
<example>
Here is an existing action button in AppCard that follows the pattern I want:
// From src/components/AppCard.tsx
<Button
  variant="ghost"
  size="sm"
  onClick={() => handleUpvote(app.id)}
  className="gap-1.5"
>
  <ArrowUp className="h-3.5 w-3.5" />
  {app.upvote_count}
</Button>
Build the Copy Link button using this same shape.
</example>
This tells Claude the exact variant, size, className pattern, icon import style, and handler convention — without a single sentence of description. The model generalizes from the example rather than guessing. In Claude Code, drop in the output of a similar previous step and say "do the same shape for X."
Step 4: Retry with the diff, not the whole file
When the first attempt is close but not right, most builders re-describe the entire task. That's wasteful. Use the retry-with-diff pattern instead: show exactly what changed from the output you got, and ask for a targeted fix.
<context>
Your previous output was mostly correct. Here is the specific issue:
</context>
<actual_output>
You used navigator.clipboard.writeText() without wrapping it in a try/catch.
The toast fires even when the copy fails on HTTP (non-HTTPS) origins.
</actual_output>
<expected_behavior>
Wrap the clipboard call in try/catch.
If it throws, show an error toast instead of a success toast.
No other changes.
</expected_behavior>
The "No other changes" line is load-bearing. Without it, a retry can trigger a broader rewrite. In Claude Code, paste the actual /diff output directly into <actual_output> and describe only the delta you want corrected.
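The corrected handler from this retry would look something like the sketch below. The `copy` and `toast` parameters are assumptions added for testability — in the real component they would be `navigator.clipboard.writeText` and the project's `useToast` hook, and the function name is hypothetical:

```typescript
type Toast = { title: string; variant?: "destructive" };

// Hypothetical handler: `copy` stands in for navigator.clipboard.writeText
// (which throws or is unavailable on insecure, non-HTTPS origins) and
// `toast` stands in for the project's useToast hook.
async function handleCopyLink(
  url: string,
  copy: (text: string) => Promise<void>,
  toast: (t: Toast) => void
): Promise<boolean> {
  try {
    await copy(url);
    toast({ title: "Link copied" });
    return true;
  } catch {
    // The fix from the retry: error toast instead of success toast
    // when the clipboard write fails. No other behavior changes.
    toast({ title: "Copy failed", variant: "destructive" });
    return false;
  }
}
```

The shape of the fix mirrors the retry prompt exactly: one try/catch, two toast branches, nothing else touched.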
Common mistakes
Describing the implementation instead of the goal. "Use useState to track the copy state" tells the model how, not what. Lead with what the user experiences — the model usually knows better implementations anyway.
Listing files without marking intent. "See AppCard.tsx and useToast.ts" without specifying edit vs. reference means the model may change both. Mark the role of every file.
Skipping anti-goals. The model has no idea what's precious in your codebase. If you don't want your auth logic touched, say so explicitly.
Writing constraints as prose. Bulleted or XML-structured constraints parse better than embedded sentences. One line per constraint.
Forgetting the file map on multi-file tasks. An explicit file map gives the model a checklist. Without it, scope creep is almost guaranteed.
For what to do when you still get a bad output, see how to debug AI-generated code — it covers diff-reading and isolating the bad assumption systematically.
What's next
These four patterns — spec-first with file map, constraint injection with anti-goals, example-driven anchoring, and retry-with-diff — cover roughly 90% of the prompting work in a typical vibecoding session. The remaining 10% is workflow: how you sequence tasks, how you manage context across long sessions, and how you recover when the model drifts.
For the workflow side, the iterative vibecoding workflow guide picks up where this one leaves off. And for the context management piece — keeping Claude sharp across many files — see context management for AI coding.