Vibe Coding Will Bite You. Abstractions Won't

Why every developer using AI agents needs to think in abstractions first

Michael Johnson
AI agents optimize for local coherence, not global correctness. Design your abstractions first — then let the AI freestyle inside each one.

You've done it. We've all done it.

You open up your AI coding agent, describe a feature, and say "build the whole thing." The agent goes off for 20 minutes generating a mountain of code. You run it. It almost works. So you keep going — "fix this, add that, handle this edge case." An hour later you're deep in a hole of AI-generated fixes on top of AI-generated fixes.

Two outcomes, both bad:

Outcome 1: You spend hours the next day reprompting the AI to fix bugs it introduced, because nothing actually works end-to-end.

Outcome 2: You come back three days later and understand nothing. The codebase is a black box you can't reason about, because you never wrote any of it yourself.

This is the vibe coding trap.

Why Vibe Coding Fails at Scale

Vibe coding — letting an AI agent freestyle on a large, loosely defined problem — works great for throwaway scripts and quick experiments. It breaks down the moment scope grows.

The core problem: AI agents optimize for local coherence, not global correctness. An agent will write code that looks right, passes the specific example you gave it, and hangs together syntactically — but silently violates assumptions from other parts of your system. And when the prompt is too broad, there was never a coherent plan for the agent to follow in the first place.

The fix isn't to use AI less. It's to design with AI the same way you'd design with any system: through abstractions.

Think Like a System Design Interviewer

When you sit down to build something with an AI agent, pretend you're the interviewer and the agent is the candidate.

You don't ask the candidate to "just build Twitter." You define the problem space: what are the core entities, what operations do we need, what are the inputs and outputs, what are the constraints?

Same discipline applies here. Before you write a single prompt:

  1. Define your abstractions — the distinct components of the system, each with a clear purpose
  2. Define inputs and outputs for each abstraction — what goes in, what comes out
  3. Map the business or user flow — how do these abstractions connect to produce the actual outcome?

Once you've done this, you can let the AI freestyle on each individual abstraction. You don't need to know how the database query is implemented. You don't need to micromanage the parsing logic. You just need to know that given input X, you get output Y — and you have a test to verify it.

The Black Box Rule

Here's the only safe way to treat AI-generated internals: as a black box with a tested interface.

If your abstraction has:

  • Clear input/output contract
  • A test suite covering expected behavior and edge cases

...then it doesn't matter that you didn't write it. You can safely build on top of it.

The danger isn't AI-generated code. The danger is AI-generated code without tests, where you don't know what you agreed to.

Write the test before (or immediately after) you prompt the agent to implement. If the test passes, the black box works. If it doesn't, you have a precise failure point, not a mystery.
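Here's what that looks like with a hypothetical slugify abstraction. The function body stands in for whatever the agent generated; the tests are what actually pin down the behavior you agreed to:

```python
import re

def slugify(title: str) -> str:
    """Contract: any string in, URL-safe lowercase slug out.
    Treat this body as an AI-generated black box; the tests below
    define the interface, not the implementation."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# The contract tests, written before (or right after) prompting the agent.
def test_slugify_contract():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces  ") == "spaces"
    assert slugify("") == ""  # edge case: empty input
    assert slugify("ALL CAPS") == "all-caps"
```

If the agent rewrites the internals later, the tests still define what "works" means.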

Your Main Function Is a Syllabus

This is the part most people miss.

The "main" function — the code that stitches your abstractions together into an actual flow — is not where you let the AI freestyle. That function is a document. It's a syllabus for anyone (including future-you and other AI agents) who needs to understand what this system does.

Write that function like you're explaining it to a competent junior engineer who just joined the team.

Every line should be self-documenting. The function reads like English. You can look at it cold and understand the flow in 30 seconds.
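A sketch of what such a main function might look like for a hypothetical link-saving flow. The names and the tiny stand-in implementations are assumptions, included only so the example runs; the part that matters is the last function:

```python
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    html: str

@dataclass
class Metadata:
    title: str

# Stand-in implementations so the sketch runs; in practice each of
# these is an AI-generated black box behind a tested contract.
def fetch_page(url: str) -> Page:
    return Page(url=url, html="<title>Example</title>")

def extract_metadata(page: Page) -> Metadata:
    return Metadata(title="Example")

def save_link(meta: Metadata) -> int:
    return 1  # pretend row id

# The "syllabus": one abstraction call per line, no inlined logic.
def save_link_flow(url: str) -> int:
    page = fetch_page(url)             # URL in, raw HTML out
    metadata = extract_metadata(page)  # HTML in, title etc. out
    return save_link(metadata)         # persist, return the new row id
```

Notice the main function never reaches inside any abstraction. That's the property to defend.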

Watch the AI closely when it writes this function. Don't let it collapse your clean abstraction calls into a tangle of nested logic. If it starts inlining things, push back.

When You Come Back Clueless

Sometimes you ignored all of this and now you're staring at code that looks alien. There's a prompt for that:

I want to better understand the system architecture at a high level. Please respond as if you were a world-class engineer doing system design. Focus on the backend's main abstractions and explain:
1. The high-level abstraction functions the system currently supports
2. The purpose of each abstraction
3. The inputs and outputs of each function
4. How these abstractions interact with one another
5. The end-to-end user flow for using these abstractions
Please present the final output as a well-structured Markdown document.

Run this against your codebase and read the output. It won't fix the mess — but it'll give you a map. Then design it right this time.

The Framework in Practice
| What | How |
| --- | --- |
| Before coding | Define abstractions: purpose, input, output |
| Each abstraction | Let AI freestyle — but write the tests |
| Main flow | You write it (or co-write it), keep it readable |
| Coming back to old code | Use the system design prompt to re-map the territory |

This isn't anti-AI. It's using AI well. The agents aren't the problem. Handing them an under-specified problem without structure is the problem.

Vibe coding is fun for exploration. But when you're building something that needs to work, design comes first.
