Context Management for AI Coding: Pin, Prune, Win
Why this matters
Every vibecoder hits the same wall around the third week of a project: the AI starts "forgetting" how the codebase works. It re-imports things that already exist. It uses an old API shape. It writes a component using a UI library you stopped using two days ago.
The problem isn't memory. It's context. Models don't know what they don't see, and they get confused by what they see too much of. AI coding context management is the discipline of pinning the right files, pruning the noise, and writing the long-lived rules down once. It's the single highest-leverage skill in vibecoding.
The setup
You need:
- A project with 20+ files (the problem doesn't show up before that).
- Either Cursor (with Project Rules / `.cursorrules`) or Claude Code (with `CLAUDE.md`) — same patterns, different filenames.
- A `.cursorignore` or `.gitignore`-style ignore file to keep build output and lockfiles out of context.
Step 1: Pin the 2-3 files the task touches
Before you prompt, ask: which files will the AI need to read to do this correctly? Usually it's two or three: the file you're editing, the type/schema it depends on, and one example of similar code in the codebase.
Pin those explicitly. In Cursor that's @filename. In Claude Code, just paste the paths into the prompt or use a slash command. Everything else stays out.
```
Add a /api/upvote route. Read these:
- @src/app/api/comment/route.ts (mirror this pattern)
- @src/lib/supabase/server.ts (use createClient)
- @supabase/migrations/011_upvotes.sql (the schema)
```
Three pinned files. The model now has the pattern, the helper, and the schema. That's enough.
If you find yourself pinning 6+ files for a single prompt, the slice is too big. Break it down — see the iterative vibecoding workflow.
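The pinned-prompt shape above is just text, but if you write it often the discipline is easy to encode. A minimal sketch — every name here is hypothetical, not a real Cursor or Claude Code API — that builds the prompt and enforces the "6+ pins means the slice is too big" rule:

```typescript
// Hypothetical helper for assembling a pinned prompt. Nothing here is a
// Cursor/Claude Code API; it just keeps the "task + 2-3 pinned files"
// shape consistent across prompts.

interface PinnedFile {
  path: string;   // repo-relative path, e.g. "src/lib/supabase/server.ts"
  reason: string; // why the model needs it, e.g. "mirror this pattern"
}

function buildPrompt(task: string, pins: PinnedFile[]): string {
  if (pins.length > 5) {
    // 6+ pinned files for one task: the slice is too big — break it down.
    throw new Error(`Too many pinned files (${pins.length}); split the task.`);
  }
  const lines = pins.map((p) => `- @${p.path} (${p.reason})`);
  return [`${task} Read these:`, ...lines].join("\n");
}

const prompt = buildPrompt("Add a /api/upvote route.", [
  { path: "src/app/api/comment/route.ts", reason: "mirror this pattern" },
  { path: "src/lib/supabase/server.ts", reason: "use createClient" },
  { path: "supabase/migrations/011_upvotes.sql", reason: "the schema" },
]);
```

Wire it into a slash command or a snippet — the point is that the pin list is deliberate, not whatever the tool's indexer happens to retrieve.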
Step 2: Write the long-lived rules once
Everything you find yourself re-pasting belongs in a rules file:
- Stack and conventions ("TypeScript everywhere, Zod for validation, server actions over client mutations")
- Hard rules ("RLS must be enabled on every table", "never expose service-role keys to the client")
- File patterns ("slugs are lowercase-hyphenated", "new routes go under /src/app/...")
- Forbidden moves ("don't add new dependencies without asking")
Drop them into `.cursorrules` (Cursor) or `CLAUDE.md` (Claude Code). The AI reads them on every prompt, so you never have to re-paste them.
```markdown
# CLAUDE.md

## Core rules

1. RLS MUST be enabled on every Supabase table.
2. Never expose SUPABASE_SERVICE_ROLE_KEY to the client.
3. Use server actions; avoid client mutations for economic state.
4. Slugs: lowercase-hyphenated, unique per creator.
```
Keep it short. Long rule files get partially ignored. 30-50 lines is the sweet spot.
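Rules like #4 stick best when the codebase has one canonical helper the AI can imitate. A minimal sketch of what that helper might look like — the function names and the counter-suffix uniqueness scheme are illustrative assumptions, not part of the rules file above:

```typescript
// Sketch of a canonical slug helper enforcing "lowercase-hyphenated" slugs.
// Names (slugify, uniqueSlug) and the "-2", "-3" suffix scheme are
// illustrative assumptions.

function slugify(title: string): string {
  return title
    .toLowerCase()
    .normalize("NFKD")                 // decompose accented characters
    .replace(/[\u0300-\u036f]/g, "")   // strip combining diacritics
    .replace(/[^a-z0-9]+/g, "-")       // collapse non-alphanumerics to hyphens
    .replace(/(^-|-$)/g, "");          // trim leading/trailing hyphens
}

// Make a slug unique within one creator's namespace by suffixing a counter.
function uniqueSlug(title: string, taken: Set<string>): string {
  const base = slugify(title);
  if (!taken.has(base)) return base;
  let n = 2;
  while (taken.has(`${base}-${n}`)) n++;
  return `${base}-${n}`;
}
```

Once this exists, the rule in `CLAUDE.md` can simply say "use the slug helper" and the AI has a concrete pattern to mirror instead of inventing its own.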
Step 3: Prune what the AI shouldn't see
By default, AI tools index your whole repo. That's noise: lockfiles, build output, generated types, vendor copies. Add a .cursorignore (or rely on .gitignore):
```
.next/
.vercel/
node_modules/
package-lock.json
pnpm-lock.yaml
dist/
build/
coverage/
*.generated.ts
```
For codebases over 50k LOC, you also want to ignore generated migrations once they're stable, vendor folders, and any "old" code paths you've migrated away from. The model can't pick the wrong API if it can't see the wrong API.
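The exact matching semantics are tool-specific, but the intent of the ignore list above can be sketched as a simple prefix/suffix check. This is a rough approximation for illustration only — real `.gitignore`/`.cursorignore` matching (negation, anchoring, full globs) is more involved:

```typescript
// Rough approximation of applying the ignore list above. Only handles
// directory prefixes, exact filenames, and "*.ext"-style suffixes;
// real ignore-file semantics are more involved.

const IGNORE = [
  ".next/", ".vercel/", "node_modules/", "dist/", "build/", "coverage/",
  "package-lock.json", "pnpm-lock.yaml", "*.generated.ts",
];

function isIgnored(path: string, patterns: string[] = IGNORE): boolean {
  return patterns.some((p) => {
    if (p.endsWith("/")) return path.startsWith(p) || path.includes(`/${p}`);
    if (p.startsWith("*")) return path.endsWith(p.slice(1));
    return path === p || path.endsWith(`/${p}`);
  });
}
```

The payoff is the last sentence above in code form: `isIgnored("node_modules/react/index.js")` is true, so that code never reaches the model's context.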
Step 4: Reset the chat when context drifts
Long chats collect baggage. Decisions you reversed. Files that no longer exist. Half-finished refactors. After a feature ships — or after about 30-40 messages — start a new chat. Your rules file auto-loads; pin fresh, and prompt again.
Claude Code users: /clear does the same thing without restarting. Cursor users: New Chat is your friend.
If the AI starts confidently using an API that doesn't exist in your repo, that's almost always context drift, not a knowledge problem. Reset the chat before you spend an hour debugging a hallucination.
Common mistakes
- Pasting whole files in the prompt — Wastes tokens, dilutes attention, and the model already has them indexed. Use `@filename` or path references.
- One giant `.cursorrules` with everything — 200-line rules files get partially ignored. Trim ruthlessly to the 20% that actually changes the AI's output.
- Letting `node_modules` into the index — You'll get suggestions citing internal vendor code. Always ignore it.
- Never resetting the chat — Context drift is real. After a big feature, start fresh.
- Trusting `@Codebase` with a 100k-line repo — Broad codebase search returns relevant chunks but also irrelevant ones. Pin specific files for precision work.
What's next
Context management is half the battle. Pair it with sharp prompt design — the prompting patterns guide covers spec-first, example-driven, and constraint-injection patterns. Then layer in a tight iterative loop so context drift gets caught the same hour it happens, not the next morning.