Good Code Isn't Taste. It's Measurable.
There’s a common refrain about code quality in AI circles: “taste is very hard for AI to grasp.” The implication is that knowing what good code looks like is some kind of aesthetic faculty. An instinct. Something you develop over years until you can feel it in your bones.
This framing is wrong. And it’s causing real damage.
Good code isn’t taste. It never was. Good code is measurable.
What Good Code Actually Looks Like
Robert C. Martin (Uncle Bob, the author of Clean Code, the person who defined what clean software looks like for an entire generation of engineers) posted something recently that stopped me cold.
He said he doesn’t review code written by agents. He measures it. Test coverage. Dependency structure. Cyclomatic complexity. Module sizes. Mutation testing.
“Much can be inferred about the quality of the code from those metrics,” he wrote. “The code itself I leave to the AI.”
Read that again. The godfather of clean code has stepped back from reviewing code and moved to measuring it. That’s not because he’s given up on quality. It’s because quality was always measurable. We just used to verify it manually, intuitively, through the labor of reading every line.
Cyclomatic complexity is a number. Test coverage is a percentage. Dependency structure is a graph. Module size is a line count. These aren’t aesthetic judgments. They’re outputs of analysis.
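To make that concrete: cyclomatic complexity really is just arithmetic over a parse tree. Here’s a minimal sketch in Python using only the standard library’s `ast` module — a simplified take on McCabe’s metric (real tools like radon count slightly differently), with `cyclomatic_complexity` being our own illustrative name:

```python
import ast

# Node types that add a decision point (a branch) to the control flow.
# This list is a simplification; production tools refine it further.
_DECISION_NODES = (ast.If, ast.For, ast.While,
                   ast.ExceptHandler, ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Return 1 + the number of decision points in the source."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, _DECISION_NODES)
                    for node in ast.walk(tree))
    return 1 + decisions

simple = "def f(x):\n    return x + 1\n"
branchy = (
    "def g(x):\n"
    "    if x > 0:\n"
    "        for i in range(x):\n"
    "            if i % 2:\n"
    "                x += i\n"
    "    return x\n"
)

print(cyclomatic_complexity(simple))   # 1: straight-line code
print(cyclomatic_complexity(branchy))  # 4: three branches, plus one
```

The point isn’t this particular counting scheme; it’s that the judgment “this function is too tangled” reduces to a threshold on an integer.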
SOLID isn’t taste either. Single responsibility is a structural property you can verify. Open/closed, Liskov substitution, interface segregation, and dependency inversion: these are rules, not vibes. DRY is a measurable property of a codebase: does this logic exist in more than one place?
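Even DRY can be checked mechanically. A rough sketch, using only the standard library: fingerprint each function body by its AST structure and flag functions whose bodies are structurally identical. The helper name `duplicate_functions` and the sample code are illustrative, and real duplicate detectors are far more tolerant of renaming and reordering:

```python
import ast
import hashlib

def duplicate_functions(source: str) -> dict[str, list[str]]:
    """Group function names whose bodies have identical AST structure."""
    groups: dict[str, list[str]] = {}
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            # Dump the body's AST and hash it; identical logic hashes alike.
            body_dump = "".join(ast.dump(stmt) for stmt in node.body)
            key = hashlib.sha256(body_dump.encode()).hexdigest()
            groups.setdefault(key, []).append(node.name)
    return {k: names for k, names in groups.items() if len(names) > 1}

code = """
def tax_for_invoice(amount):
    return amount * 0.2

def tax_for_refund(amount):
    return amount * 0.2

def net(amount):
    return amount * 0.8
"""
for names in duplicate_functions(code).values():
    print(names)  # ['tax_for_invoice', 'tax_for_refund']
```

“Does this logic exist in more than one place?” is a query you can run, not a feeling you develop.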
When we call good code “taste,” we obscure the fact that it has always been a set of learnable, teachable, verifiable standards. We make it sound like a gift some people have and others don’t, rather than a discipline anyone can acquire.
What This Means for AI
Once you stop calling code quality “taste,” the argument that “AI can’t have taste” stops being a problem.
AI absolutely can meet measurable standards. Cyclomatic complexity thresholds. Test coverage requirements. SOLID compliance. Module size limits. You put those standards in the instructions file. You run the metrics. You verify the output.
What AI can’t do is meet a standard you haven’t defined. If your quality bar is “it feels right,” you’ll never be able to communicate that to an AI, and you’ll never be able to measure whether it’s been met. That’s not an AI limitation. That’s a you limitation.
The teams shipping AI code with no quality standards aren’t running into a taste problem. They’re running into a standards problem. Define the metrics. Enforce them. The AI will meet them.
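What “define the metrics and enforce them” looks like is nothing exotic: a gate that takes hard numbers in and produces pass/fail out. A minimal sketch — the metric names and thresholds below are illustrative choices, not a standard, and in practice the input would come from your coverage and complexity tooling:

```python
# Illustrative thresholds; a real team would write down its own.
THRESHOLDS = {
    "max_cyclomatic_complexity": 10,
    "min_test_coverage_pct": 85,
    "max_module_lines": 400,
}

def check_quality(metrics: dict[str, float]) -> list[str]:
    """Return a list of violations; an empty list means the gate passes."""
    failures = []
    if metrics["cyclomatic_complexity"] > THRESHOLDS["max_cyclomatic_complexity"]:
        failures.append("cyclomatic complexity too high")
    if metrics["test_coverage_pct"] < THRESHOLDS["min_test_coverage_pct"]:
        failures.append("test coverage too low")
    if metrics["module_lines"] > THRESHOLDS["max_module_lines"]:
        failures.append("module too large")
    return failures

# Metrics for a hypothetical AI-generated module.
report = {"cyclomatic_complexity": 7, "test_coverage_pct": 91, "module_lines": 520}
print(check_quality(report))  # ['module too large']
```

Wire something like this into CI and the quality bar stops being a reviewer’s mood. It’s a function that returns the same answer every time, for human-written and AI-written code alike.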
What This Means for Junior Developers
The junior developer who ships AI-generated code without reviewing it isn’t failing to develop taste. They’re failing to develop something much more specific: the ability to apply quality metrics.
Here’s what that review process is supposed to build. You look at the code. You ask whether the cyclomatic complexity is reasonable. You check whether the single responsibility principle is being respected. You notice when a module is doing too much. You develop the habit of measuring, not just feeling.
That’s what code review was always for. Not aesthetic appreciation. Pattern recognition against a standard.
The developer using AI deliberately (reading the output, measuring it, questioning the patterns) is building that capability faster than any previous generation could. They’re seeing more code, more patterns, more decisions in a day than a developer in 2010 saw in a month.
The developer shipping AI output without reviewing it is building nothing. They’re outsourcing not just the typing but the thinking. And unlike the typing, the thinking is the part you need.
The Education Thread
This connects to something bigger than software development. The US education system (not just CS programs, but the whole system) has largely moved from teaching how to think to teaching what to memorize.
Memorization produces people who know facts. Reasoning produces people who can analyze, measure, and decide.
AI makes memorized facts worthless. You don’t need to memorize syntax. You don’t need to memorize API signatures. What you need is the ability to evaluate output, reason about trade-offs, and apply standards. That’s the thinking side.
The developers who thrive in an AI-first world aren’t the ones who memorized the most patterns. They’re the ones who can measure what the AI produces and make judgment calls about whether it’s right.
The Discipline, Not the Gift
Call it what it is. Code quality is a discipline, not a gift. It’s measurable, teachable, and learnable.
That means junior developers can acquire it deliberately, through critical engagement with AI output rather than years of passive exposure to someone else’s judgment.
That means AI can be held to it, through explicit standards and automated metrics rather than vibes and code reviewer intuition.
That means engineering leaders have a responsibility to define it. Not “write clean code.” Not “use your judgment.” Write it down. Set the thresholds. Define what good looks like in terms a human or an AI can verify.
Uncle Bob didn’t stop caring about code quality. He operationalized it. That’s the move.
Stop calling it taste. Define the standard. Measure the output. Hold everyone to it, human and AI alike.