The History of Vibecoding: From Copilot to Agents (2021-2026)

By rik · 7 min read · April 30, 2026

Why this matters

Most people treat vibecoding like it appeared overnight. One Karpathy tweet and suddenly everyone's building apps by chatting at a browser. But the shift took five years of compounding bets — model improvements, IDE rewrites, and product launches that each quietly changed what "writing code" meant.

If you're building in this era, you should understand the ground it stands on.

The setup

Five eras, marked by six milestones:

  • 2021 — GitHub Copilot technical preview (June 29). Autocomplete gets smart.
  • 2022 — ChatGPT ships (November 30). Devs discover the chat-first debugging loop.
  • 2023 — Cursor launches (March). First AI-native IDE: a VS Code fork with the LLM built in.
  • 2024 — Claude 3.5 Sonnet + Artifacts. Bolt.new ships. Text-to-app goes mainstream.
  • Feb 2025 — Karpathy coins "vibe coding." Claude Code research preview. The term lands.
  • 2025-2026 — Agent mode matures. Cursor hits a $30B valuation. Claude Code goes GA.

Step 1: Trace the roots — 2021-2022: autocomplete eats the editor

On June 29, 2021, GitHub announced Copilot as a technical preview. It was built on OpenAI Codex, a fine-tuned descendant of GPT-3. The pitch was simple: it completes your code the way Gmail completes your sentences.

The reaction in developer circles was split. Half thought it was a parlor trick. The other half quietly started leaving it on and noticed their Tab key was doing more work.

Copilot became generally available in June 2022. Then OpenAI shipped ChatGPT on November 30, 2022 — the real unlock. Not because ChatGPT wrote better code, but because it gave developers a conversation loop: paste an error, get an explanation; paste a fix request, get a function. No IDE integration. Just a browser tab.

By early 2023, "ask ChatGPT" was a standard debugging step. The autocomplete era was already giving way to something more powerful.

Step 2: Enter the AI-native IDE — 2023: Cursor changes the frame

Cursor launched in March 2023, built by Anysphere. The insight: VS Code was already where developers lived, so fork it and make the AI native instead of bolted on.

Cursor's differentiator was codebase context. Instead of completing the line you were typing, it could reference your entire repo. Highlight a function, say "refactor this to use the new API" — it understood what files were affected.

For the first time, the workflow looked like collaboration rather than autocomplete. You described intent; the tool handled implementation.

Cursor's real innovation wasn't AI quality — it was the context model. Giving the model your whole codebase instead of just the current file changed what questions you could ask. See the Cursor 2026 features guide for how far this has evolved.

Claude 2 shipped in July 2023. Anthropic gained a reputation for longer context windows and more reliable instruction-following. The model race was on.

Step 3: The artifact era — 2024: Claude 3.5 Sonnet and text-to-app

June 2024: Anthropic shipped Claude 3.5 Sonnet with "Artifacts" — a sidebar that rendered live React components and runnable code directly in chat. For the first time, you could go from prompt to working UI without leaving the conversation.

Designers, PMs, and founders started building throwaway prototypes. The prompt-to-interface loop got tight enough to feel real.

Then StackBlitz launched Bolt.new — a browser-based environment where you described an app in plain English and it scaffolded, wired, and deployed it. v0 by Vercel handled component generation. Lovable handled full-stack apps with auth and databases baked in. Text-to-app was a product category now.

Here's a prompt that would have been meaningless in 2021 but worked by late 2024:

Build a full-stack todo app with:
- Next.js + Tailwind CSS frontend
- Supabase backend with auth
- Row-level security on the todos table
- Deploy-ready to Vercel

Don't explain the code, just build it.

In 2021, Copilot would have completed a line. By 2024, tools like Bolt.new and v0 would scaffold the entire thing. See the v0 by Vercel patterns guide for how far this has evolved.
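The "row-level security" line in that prompt is doing real work. In Supabase, it maps to Postgres RLS policies along these lines. This is a sketch of what a scaffolder might generate, not any tool's actual output, and it assumes a `todos` table with a `user_id` column tied to Supabase auth:

```sql
-- Hypothetical policy a text-to-app tool might generate for the prompt above.
-- Assumes a "todos" table with a "user_id" uuid column referencing auth.users.

-- Without this, the policies below are never consulted:
alter table todos enable row level security;

-- Each user can read and write only rows they own.
-- auth.uid() is Supabase's helper returning the current user's id.
create policy "Users manage their own todos"
  on todos
  for all
  using (auth.uid() = user_id)
  with check (auth.uid() = user_id);
```

The point isn't the SQL itself — it's that a one-line requirement in plain English now expands into correct security configuration, the kind of detail that used to be forgotten until a pentest found it.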

Step 4: The term arrives, the agents mature — 2025-2026

On February 2, 2025, Andrej Karpathy posted a tweet that gave the whole movement a name. "Vibe coding" — programming by feel, letting the AI handle implementation while you steer by intention. The term spread instantly because it named something developers had already been doing for months but couldn't articulate.

Later that month, Claude Code launched as a research preview — a terminal-native agentic tool that could navigate repos, run tests, read error output, and iterate without hand-holding. It went GA in May 2025 alongside Claude 4.

The distinction from Cursor was philosophical: Cursor helps you write code faster. Claude Code writes the code and checks its own work. You give it a task; it opens files, runs commands, catches failures, and loops until something passes.

By late 2025, Cursor had shipped proper agent mode and hit a $30 billion valuation. Google shipped Antigravity. Windsurf matured. The question shifted from "should I use an AI coding tool" to "which agent model fits my workflow."

For a deeper look at the current landscape, the vibecoder mindset guide covers how to work with autonomous agents rather than fight them.

Common mistakes when reading this history

Mistake 1: Dating vibecoding to the Karpathy tweet. The tweet named a practice that already existed. Developers were building non-trivially with LLMs throughout 2023-2024. The tweet is a cultural timestamp, not a technical origin.

Mistake 2: Treating each era as a replacement. Autocomplete and agents coexist. Most developers in 2026 use multiple tools — Copilot for flow-state typing, Cursor for refactors, Claude Code for greenfield work.

Mistake 3: Thinking the tools define the skill. The bottleneck in every era has been the same: knowing what to ask for. In 2021 that meant writing better comments so Copilot had context. In 2026 it means writing specs clear enough that an agent can execute without stalling.

Mistake 4: Ignoring model improvements as background noise. Every jump — 2022 ChatGPT, 2024 Claude 3.5 Sonnet, 2025 Claude 4 — was driven by model capability, not just product design. The IDE innovations were mostly about surfacing what the models could already do.

What's next

By mid-2026, the frontier is agents working in parallel — multiple instances tackling different parts of a codebase, with orchestration layers managing handoffs. The human role is shifting from writing code to reviewing diffs, writing specs, and catching semantic errors that pass tests but fail users.

Five years of compression: what took a senior engineer a week in 2021 takes an afternoon in 2026. That curve is not flattening.

If you're just getting started, what is vibecoding covers the fundamentals — and builds directly on this timeline.
