Vibecoding Glossary: 30 Terms Every AI Builder Should Know

By rik · 7 min read · April 30, 2026

Why this matters

Vibecoding has its own vocabulary. If you've ever nodded along while someone said "just use MCP and pipe it through a sub-agent" — and then quietly Googled everything — this guide is for you.

These aren't buzzwords for their own sake. Each term maps to a real tool, pattern, or decision you'll run into while building. Knowing the vocabulary means you can read docs faster, ask better questions, and actually understand what your AI assistant is doing.

If you're brand new, start with what is vibecoding first. Already coding? Skim the headers and slow down where things get fuzzy.

The setup

Terms are grouped by theme, not alphabetically. Work through them in order if you're learning from scratch, or jump to a section if you just need one definition.


Step 1: Learn the core concepts

These are the foundational ideas that everything else builds on.

Vibecoding

Building software by describing what you want in natural language and letting an AI model generate most of the code. The "vibe" part is real — you stay in a creative, directorial flow instead of getting stuck on syntax. See what is vibecoding for the full picture.

Agent

An AI that doesn't just respond to prompts — it takes actions: runs code, reads files, calls APIs, creates sub-tasks, and loops until a goal is reached. An agent has a sense of "what do I need to do next?" rather than just "what did you ask?"
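The loop is easier to see in code. A minimal sketch with a mocked model and a single hypothetical add tool; a real agent swaps the mock for an LLM API call:

```typescript
// The model (mocked here) either requests a tool call or returns a final answer.
type ModelReply =
  | { type: "tool"; name: string; input: string }
  | { type: "final"; answer: string };

// Hypothetical tools the agent can use.
const tools: Record<string, (input: string) => string> = {
  add: (input) =>
    String(input.split("+").map(Number).reduce((a, b) => a + b, 0)),
};

// Mock model: asks for the add tool once, then answers with its result.
function mockModel(history: string[]): ModelReply {
  const toolResult = history.find((m) => m.startsWith("tool:"));
  if (!toolResult) return { type: "tool", name: "add", input: "2+3" };
  return { type: "final", answer: toolResult.slice("tool:".length) };
}

// The agent loop: call the model, run any requested tool, feed the result back.
function runAgent(goal: string): string {
  const history: string[] = [`user:${goal}`];
  for (let step = 0; step < 10; step++) {
    const reply = mockModel(history);
    if (reply.type === "final") return reply.answer;
    history.push(`tool:${tools[reply.name](reply.input)}`);
  }
  throw new Error("agent did not converge");
}

console.log(runAgent("What is 2+3?")); // → "5"
```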

Prompt

The text you send to a model to get output. Can be a question, an instruction, or a full spec. Prompt quality is the biggest lever you have on output quality.

Hallucination

When a model confidently generates something that isn't true — a fake API endpoint, a method that doesn't exist, a package with the wrong name. Not a bug, not lying: just pattern-matching gone wrong. Always verify generated code against real docs.

Spec

A written description of what you want to build, detailed enough that an AI can execute it without constant hand-holding. A good spec covers inputs, outputs, edge cases, and constraints. Writing a tight spec is one of the highest-leverage skills in vibecoding.
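A tight spec can be short. A hypothetical example for a small feature:

```
// Example: a spec for a "recently viewed" feature
Feature: show the 5 most recently viewed products on the homepage.
- Input: product page visits, stored per user in localStorage.
- Output: a horizontal card row, newest first, hidden if empty.
- Edge cases: duplicates collapse to one entry; cap storage at 20 items.
- Constraints: no new dependencies; reuse the existing ProductCard component.
```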


Step 2: Master models and context

Understanding how models process information helps you write better prompts and avoid common failure modes.

Context window

The total amount of text a model can "see" at once, measured in tokens (roughly 4 characters each). If your conversation plus codebase exceeds the window, older content gets dropped. Modern models have windows of 200k tokens or more, but filling them degrades quality. Context management is a real skill — see context management for AI coding.
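The 4-characters-per-token rule of thumb gives you a quick budget check. A rough sketch; real tokenizers are more accurate:

```typescript
// Rough token estimate using the ~4 chars/token heuristic.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Quick check before stuffing text into a prompt.
function fitsInWindow(text: string, windowTokens = 200_000): boolean {
  return estimateTokens(text) <= windowTokens;
}

console.log(estimateTokens("Hello, world!")); // 13 chars → 4 tokens
```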

System prompt

A hidden instruction block that frames every conversation with a model. Sets the tone, rules, persona, and constraints. In Claude Code, your CLAUDE.md file acts like a project-level system prompt.

// Example: a system prompt that keeps a coding assistant on task
You are a TypeScript engineer working on a Next.js + Supabase app.
Follow the existing file structure. Never use 'any' types.
Always prefer server components over client components.

MCP (Model Context Protocol)

An open protocol that lets AI models connect to external tools and data sources — databases, APIs, file systems, browsers — through a standardized interface. Instead of copy-pasting data into your prompt, an MCP server exposes it as callable tools. Cursor, Claude Code, and other editors support MCP natively.

// mcp.json — a simple MCP server config
{
  "mcpServers": {
    "supabase": {
      "command": "npx",
      "args": ["@supabase/mcp-server", "--project-ref", "your-ref"]
    }
  }
}

MCP is not the same as an API. An API requires you to write integration code. An MCP server exposes tools the model can call directly, without you writing a single fetch.

RAG (Retrieval-Augmented Generation)

A pattern where relevant documents are fetched at query time and stuffed into the prompt before the model answers. Lets you build AI features on top of your own content without fine-tuning. Common in docs search, support bots, and codebase Q&A tools.

Embeddings

Numeric vector representations of text that capture semantic meaning. Similar text gets similar vectors. The foundation of RAG: you embed your docs, embed the user's query, and retrieve the closest matches by vector distance.
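Retrieval by vector distance can be sketched in a few lines. The vectors here are tiny and hand-made; real embeddings have hundreds of dimensions:

```typescript
// Cosine similarity: how close two embedding vectors point.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Retrieve the doc whose embedding is closest to the query's.
function retrieve(query: number[], docs: { id: string; vec: number[] }[]) {
  return docs.reduce((best, d) =>
    cosineSimilarity(query, d.vec) > cosineSimilarity(query, best.vec) ? d : best
  );
}

const docs = [
  { id: "auth-guide", vec: [0.9, 0.1, 0.0] },
  { id: "billing-faq", vec: [0.1, 0.9, 0.2] },
];
console.log(retrieve([0.8, 0.2, 0.1], docs).id); // → "auth-guide"
```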

Tool use

The ability for a model to call predefined functions — search the web, run code, query a database — and incorporate the results into its response. What turns a chatbot into an agent. MCP is one way to expose tools; function calling in the API is another.
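A tool definition typically pairs a description the model reads with a JSON Schema for the inputs. Exact field names vary by provider; this sketch mirrors the common shape, and the weather tool itself is a stub for illustration:

```typescript
// A tool definition: name, a description the model reads,
// and a JSON Schema describing the expected input.
const weatherTool = {
  name: "get_weather",
  description: "Get the current temperature for a city.",
  input_schema: {
    type: "object",
    properties: {
      city: { type: "string", description: "City name, e.g. 'Berlin'" },
    },
    required: ["city"],
  },
};

// Your code executes the call the model requests, then returns the result.
function handleToolCall(name: string, input: { city: string }): string {
  if (name === "get_weather") return `22°C in ${input.city}`; // stubbed result
  throw new Error(`Unknown tool: ${name}`);
}

console.log(handleToolCall("get_weather", { city: "Berlin" }));
```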


Step 3: Know your IDE features

Most of these live in Cursor or Claude Code. Knowing what each does stops you from fighting your tools.

Composer

Cursor's multi-file editing mode. You describe a change, and Composer figures out which files to touch and what to write in each. The closest thing to "pair programming" with an AI that actually reads your whole codebase.

Tab completion

The inline ghost-text that appears as you type, predicting the next line or block. Fast, low-friction, model-powered. Press Tab to accept. It's not just autocomplete — it understands the context of the file.

Rules file

A file in your project (.cursorrules in Cursor, CLAUDE.md in Claude Code) that gives the AI standing instructions: your stack, conventions, what to avoid, how to name things. Write it once, benefit from it forever. Think of it as a persistent system prompt scoped to your project.

Sub-agent

An agent spawned by a parent agent to handle a specific subtask. Parent defines the goal; sub-agent does the work and reports back. Enables parallelism and specialization. Claude Code uses sub-agents when you ask it to tackle a large feature end-to-end.

Slash command

A shorthand command typed in the chat input (e.g. /fix, /explain, /test) that triggers a specific preset behavior. In Claude Code, you can define custom slash commands in your config. Speeds up repetitive workflows.
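In Claude Code, a custom slash command is a Markdown file in .claude/commands/ whose filename becomes the command name. A hypothetical /review command might look like:

```
// .claude/commands/review.md — invoked as /review
Review the staged changes for bugs, missing error handling,
and violations of the conventions in CLAUDE.md.
Report findings as a numbered list, most severe first.
```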

Hook

A lifecycle callback that runs automatically at specific moments — before a commit, after a tool call, when a session starts. In Claude Code, hooks let you inject validation, logging, or side effects without modifying your prompts.
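In Claude Code, hooks are configured in settings. A sketch that runs a formatter after every file edit; the matcher and command here are assumptions for illustration:

```
// .claude/settings.json — run Prettier after Edit/Write tool calls
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [{ "type": "command", "command": "npx prettier --write ." }]
      }
    ]
  }
}
```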

Skill

A saved, reusable prompt workflow. You define a skill once (the instructions, context, output format) and invoke it by name. Claude Code ships with pre-built skills and lets you define your own.

Plugin

An extension that adds new capabilities to an AI tool — new commands, new UI panels, new integrations. Cursor has a plugin ecosystem; Claude Code supports MCP-based plugins. Not the same as a browser extension.


Step 4: Pick up the stack basics

These are the tools that show up constantly in vibecoder stacks. You don't need to be an expert, but you need to know what each one does.

shadcn/ui

A component library you copy directly into your project rather than install as a dependency. Full source ownership, Tailwind-based, built for Next.js. AI tools handle it well because the code lives in your repo. See shadcn/ui for AI builders.

Vercel AI SDK

An open-source TypeScript toolkit for building AI-powered features — streaming responses, tool calling, multi-model support — inside Next.js and other frameworks. Handles the awkward plumbing between your UI and the model API.

Supabase

An open-source Firebase alternative: Postgres database, auth, storage, edge functions, and a REST/realtime API — all in one platform. The default backend for most vibecoder stacks. See Supabase for vibecoders.

RLS (Row Level Security)

A Postgres feature that enforces access rules at the database level. Every query is filtered by policies you define — e.g. "a user can only see rows where user_id = auth.uid()". In Supabase, RLS is the primary security layer. If RLS is disabled, any authenticated user can read every row. Always enable it.
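In Supabase this is plain SQL. A sketch for a hypothetical todos table:

```
-- Enable RLS, then allow users to read only their own rows.
-- With RLS on and no policy, every query returns nothing.
alter table todos enable row level security;

create policy "Users can read own todos"
  on todos for select
  using (auth.uid() = user_id);
```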

Edge function

A serverless function that runs at the network edge — geographically close to the user — with near-zero cold start. Supabase Edge Functions run on the Deno runtime; Vercel Edge Functions use a lightweight V8-based edge runtime. Good for middleware, lightweight APIs, and real-time logic.

MDX

Markdown with embedded JSX components. Write prose like Markdown, drop in React components where needed. The standard format for docs sites and content-heavy Next.js apps. This article format could be MDX.

Webhook

An HTTP request sent automatically by one system to another when something happens — a payment completes, a repo gets a push, a user signs up. Your server listens for the request and reacts. Webhooks are how you connect external services to your app without polling.
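Most providers sign webhook payloads with a shared secret so your server can reject forged requests. A verification sketch in TypeScript; the signature scheme here is generic, not any specific provider's:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Recompute the HMAC of the raw payload and compare it to the
// signature the sender attached. timingSafeEqual avoids timing leaks.
function verifySignature(payload: string, signature: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(payload).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  return a.length === b.length && timingSafeEqual(a, b);
}

// Simulate a provider signing a payload, then verify it.
const secret = "whsec_test";
const body = '{"event":"payment.completed"}';
const sig = createHmac("sha256", secret).update(body).digest("hex");
console.log(verifySignature(body, sig, secret)); // → true
```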


Common mistakes: using the wrong term

A few that trip people up:

"Agent" vs "model" — The model is the AI brain (GPT-4o, Claude Sonnet). The agent is the system built around it that takes actions in the world. You can build a bad agent on a great model.

"Prompt" vs "system prompt" — A prompt is what you send each message. A system prompt is the standing context that shapes every message. Conflating them leads to confusion about why the AI behaves differently in different contexts.

"RAG" vs "fine-tuning" — RAG retrieves external info at runtime. Fine-tuning bakes new knowledge into the model weights permanently. RAG is faster, cheaper, and easier to update. Fine-tuning is for style, tone, and deeply domain-specific behavior.

"MCP" vs "API" — An API is a general contract for programmatic access. MCP is a specific protocol that lets AI models call tools without you writing integration code. Your Supabase instance has an API; an MCP server wraps it so Claude can use it directly.


What's next

You now have the vocabulary. Next step is putting it to work.

The fastest way to internalize a glossary is to use the terms in real conversations. Next time you ask an AI for help, name what you're doing — "I want to set up a RAG pipeline" or "help me write a system prompt for this agent" — and watch the output quality jump.

What are you building?

Claim your handle and publish your app for the world to see.
