
Your AI Code Quality Problem Is Actually a Standards Problem

March 30, 2026 · 6 min read

Every week I see posts like these:

“AI wrote 70,000 lines of code and I had to refactor 70% of it.”

“Developers now have to maintain AI-generated slop.”

I have one response: that’s not an AI problem. That’s a you problem.

I have a production application with 360,000 lines of generated code — C#, JavaScript, CSS, CSHTML. I’ve personally touched maybe 1% of it. The architecture is consistent. The patterns are predictable. Code reviews are fast because there’s nothing surprising to find.

The difference isn’t the model I’m using. The difference is the structure I built around it.

The Real Diagnosis

When AI generates inconsistent code, there are two possible explanations:

  1. The AI is broken.
  2. You haven’t told it what good looks like.

It’s almost always the second one.

If your codebase has ten different styles, ten different ways of handling errors, ten different approaches to data access — you don’t have an AI problem. You have an architecture and standards problem. AI didn’t create that mess. It inherited it, amplified it, and handed it back to you.

If you’re refactoring 70% of AI-generated code, that’s a signal that the AI has no clear definition of what “done” looks like in your codebase. You’re leaving critical design decisions to the model’s defaults, then complaining when those defaults don’t match your expectations.

What You Actually Need to Do

Before you touch another AI coding tool, you need three things in place:

1. A solid instructions file.

Every major AI coding tool supports project-level instruction files — claude.md, copilot-instructions.md, .cursorrules. These aren’t optional. They’re how you tell the AI what kind of engineer to be on your project.

Your instructions file should point to your architecture, define your tech stack, and establish non-negotiable behaviors. Think of it as an onboarding document for a very fast, very literal new hire who will do exactly what you say and nothing else.
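As a minimal sketch, an instructions file might look like this — the file paths, stack, and rules below are illustrative placeholders, not a prescription:

```markdown
# Project instructions

## Stack
- ASP.NET Core (C#), Razor views (CSHTML), vanilla JavaScript, CSS

## Architecture
- Read docs/architecture.md before generating any code.
- Follow the relevant standards in docs/standards/ for the area you are touching.

## Non-negotiable behaviors
- Never introduce a new pattern when an existing one already covers the case.
- All data access goes through repository classes; no inline SQL in services.
- If a requirement conflicts with the standards, stop and ask instead of improvising.
```

The exact sections matter less than the fact that they exist and point somewhere concrete.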

2. Architecture and code standards — in writing.

If your standards only exist in people’s heads, they don’t exist at all. Write them down. What does a well-structured service look like? How do you handle exceptions? What does a properly scoped repository pattern look like in your stack?

Your instructions file links to these. The AI loads them before generating anything.
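A written standard doesn’t need to be long to be enforceable. A hypothetical excerpt from an exception-handling standard might read:

```markdown
# Exception handling standard

- Catch exceptions at the controller/handler boundary, not inside services.
- Wrap external-service failures in a domain-specific exception type.
- Log with structured context (operation, entity id); never log and rethrow.
- Never swallow an exception silently; if a failure is expected, return a
  result type instead of throwing.
```

Four bullet points like these settle arguments that would otherwise resurface in every code review.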

3. Clean code / SOLID / DRY — enforced, not assumed.

These aren’t just good engineering principles. They’re constraints that make AI-generated code reviewable and maintainable. Without them, every file the AI touches becomes a negotiation. With them, you know exactly what you’re getting.

Scaling It to a Team

Individual standards help you. Team standards are what prevent your codebase from drifting into chaos when six developers are all using different AI tools with different instructions.

Here’s the pattern that works: every repo gets a standard instructions file that loads your company’s shared coding standards. The standards are modular — the AI pulls in only what’s relevant to the task at hand.

Working on an API endpoint? Load the API standards. Working on data access? Load the ORM conventions. Working on cloud functions? Load the infrastructure standards.

This keeps the instructions focused and the AI from being overloaded with irrelevant context. It also means your standards stay composable — you can update one module without touching everything else.
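One way to make that modularity concrete is a standards directory organized by concern. This layout is an illustration under the assumptions above, not the only way to slice it:

```text
standards/
├── core.md            # naming, error handling, logging — always loaded
├── api.md             # endpoint shape, versioning, response envelopes
├── data-access.md     # ORM conventions, repository pattern, migrations
├── infrastructure.md  # cloud functions, IaC, deployment conventions
└── frontend.md        # JS/CSS structure, component conventions
```

Each repo’s instructions file references core.md plus whichever modules apply to that codebase, so a change to the ORM conventions propagates without anyone editing five instruction files.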

At the team level, the results have been concrete:

  • Consistent architecture across every repository, regardless of who wrote it or which tool they used
  • Faster code reviews because reviewers aren’t re-litigating design decisions that should have been pre-decided
  • Far less drift between services, which means less cognitive overhead when moving between them
  • AI that behaves like a trained team member rather than a contractor who just showed up

The Mental Model That Changes Everything

Stop thinking about AI as a code generator. Start thinking about it as a new engineer who knows how to type faster than anyone you’ve ever hired — but arrives with no knowledge of your codebase, your standards, your architecture decisions, or your team’s values.

You wouldn’t hand that engineer a Jira ticket and walk away. You’d give them context. You’d establish expectations. You’d review their first few PRs carefully and provide specific feedback.

Agentic coding works the same way. You’re not offloading the thinking — you’re doing the thinking upstream, before execution, so the execution can be fast and consistent.

The teams that are struggling with AI-generated slop are the teams that skipped the upstream work. They handed the model a vague request, got a vague result, and blamed the tool.

The teams that are shipping clean, consistent, maintainable AI-assisted code did one thing differently: they taught the AI how they build software.

That’s not a prompt engineering trick. That’s just good engineering leadership — applied one layer upstream.