Vibe Coding Is Not a Strategy
March 30, 2026 · 7 min read
Everyone’s talking about “vibe coding” — the practice of handing a prompt to an AI agent and shipping whatever comes out.
I’m not doing that. And I’d argue you shouldn’t be either.
What I’m doing looks similar on the surface — I’m using AI agents to generate code every day, and I’m not typing most of it. But the underlying practice is fundamentally different, and that difference is what separates predictable, maintainable software from a pile of plausible-looking output.
The Problem with Vibe Coding
Vibe coding treats the AI as the engineer. You describe what you want, the AI produces code, you review it enough to feel okay about it, and you move on.
The problem isn’t that AI generates bad code. The problem is that you’ve abdicated the decisions that matter most — architecture, trade-offs, constraints, standards — and pushed them down to a layer that has no context for your system, your team, or your users.
When those decisions are wrong (and without guidance, they often are), you don't find out until you're three features deep and everything is inconsistent. That wasn't a vibe coding problem. That was a leadership vacuum, and the AI filled it with defaults.
What Intentional AI Pair Programming Actually Looks Like
Think about how effective human pair programming works.
One person stays focused on architecture, intent, and constraints — they’re asking “what are we actually building, and does this approach hold up?” The other handles the mechanics of implementation — syntax, patterns, the keyboard. Both continuously review, challenge, and refine the result.
That’s the model I apply when working with AI agents. The human owns intent, architecture, and judgment. The AI handles execution and speed.
In practice, that means I’m responsible for:
- System design and structure before the agent touches a file
- Coding standards and architectural constraints — defined in writing, loaded by the agent at the start of every session
- Trade-off decisions: when to abstract, when to be concrete, when “good enough” is actually good enough
- Refactoring signals — recognizing when the codebase is drifting and steering it back
- Knowing when to stop, reassess, and re-brief before continuing
The agent’s role is execution, pattern recall, and speed.
You think. The AI types. You both review. Repeat.
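The written standards don't have to be elaborate. As a sketch of what a session brief might contain (the filename, stack, and rules here are invented for illustration, not a prescription):

```markdown
<!-- CONVENTIONS.md (hypothetical): loaded by the agent before any work begins -->

## Architecture
- Three layers: HTTP handlers -> services -> repositories. No layer skipping.
- Shared logic moves into /lib only after it appears in two places.

## Standards
- TypeScript strict mode; no `any` without a comment explaining why.
- Every public function gets a test before the task is considered done.

## Working agreement
- Ask clarifying questions BEFORE writing code if the request is ambiguous.
- For any change touching more than two files, propose a short plan and wait
  for approval before executing.
```

The point isn't these specific rules; it's that the constraints exist in writing, so every session starts from the same intent instead of the model's defaults.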
This isn’t prompting and hoping. It’s not outsourcing thinking to a model. And it’s definitely not shipping whatever the AI happens to generate. It’s an intentional collaboration between an experienced engineer and a very fast, very capable implementation partner.
Why This Changes What Agile Means in Practice
One of the underappreciated effects of working this way is what it does to the inspect-and-adapt loop.
Agile was never just about shipping faster. It was about shortening the distance between learning and action. The idea is simple: the sooner you can see what you’ve built and course-correct, the less waste you accumulate.
The problem has always been latency. By the time a design issue surfaces — in a code review, in a sprint retrospective, in a bug report — you’ve already built on top of it. Refactoring becomes a project. “We’ll clean it up later” becomes a phrase that haunts your backlog.
AI-assisted development collapses that latency. I periodically ask the agent to inspect the codebase for refactoring opportunities as part of the same development loop:
- Where logic has started to duplicate
- Where responsibilities want to be extracted into shared libraries
- Where early decisions no longer reflect how the system has actually evolved
Instead of deferring that work to a future sprint — or accepting “we’ll clean it up later” as inevitable — the refactoring happens now. The inspect-and-adapt cycle that Agile describes at the sprint level becomes a continuous, near-real-time practice.
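Concretely, the inspection step is just another scoped request inside the loop. The wording below is illustrative, not a template:

```markdown
Before we start the next feature, review the modules we've touched this week.
List: (1) logic that now exists in more than one place, (2) responsibilities
that could move into a shared library, and (3) early design decisions that no
longer match how the system is actually used. Propose refactorings as a ranked
list with rough effort estimates. Do not change any code yet.
```

Note the last line: the agent reports and proposes, but the decision about which refactorings to take, and when, stays with the human.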
The result is a codebase that’s cleaner than you’d expect at this stage — not because we avoided change, but because we embraced it early and often.
The Productivity Gains Are Real, But They Come From Discipline
Over several weeks of building a production application this way, my development velocity has increased significantly. But the gains didn’t come from any AI trick.
They came from applying the same discipline I expect from high-performing engineering teams:
- Scoping work intentionally instead of issuing broad, vague requests
- Reviewing output critically and providing specific, actionable feedback
- Maintaining clear architecture and coding standards — and refining them as the system evolved
- Requiring the agent to ask for clarification before execution, rather than correcting misunderstandings after the fact
- Supplying concrete examples when outcomes needed to be precise
These are the same guardrails we put in place when onboarding junior engineers. Clear intent. Explicit standards. Tight feedback loops. Design decisions made at the right level.
AI agents are powerful, but they don’t reduce the need for engineering leadership. If anything, they make its absence immediately visible. A vague request to a senior engineer produces a conversation. A vague request to an AI agent produces code — wrong code, confidently delivered.
Craft vs. Luck
There’s a version of AI-assisted development that produces output. You’ll recognize it: things mostly work, code reviews feel uncomfortable, nobody’s quite sure how the pieces fit together, and tech debt accumulates faster than you can measure it.
And there’s a version that produces craft — software that’s consistent, maintainable, and built with intent. The difference isn’t the model you’re using.
The difference is whether an experienced engineer is doing the thinking upstream, before execution, with enough clarity and discipline that the execution can be fast and right.
Vibe coding is improvisation. What I’m describing is intentional collaboration.
Once you frame it that way, the outcomes look a lot more like craftsmanship and a lot less like luck.