Here’s the problem nobody’s talking about: your teams are using AI to write code, and every developer is getting different results.
Not because the AI tools are different (though they are). Because every developer has different instructions, different prompts, different expectations for what “good code” looks like. One developer’s Claude setup enforces your architecture patterns. Another developer’s Copilot has no idea those patterns exist. A third developer is using Cursor with a rules file they copied from a blog post six months ago.
You don’t have agentic coding standards. You have individual developers with individual configurations producing inconsistent output. And that inconsistency is compounding in your codebase every single day.
The Single Source of Truth Problem
Most organizations handle AI coding instructions at the individual level. Each developer configures their own tool. Maybe they share tips in Slack. Maybe someone writes a wiki page that gets stale in a week.
This is the equivalent of telling your team “follow the coding standards” without ever writing the coding standards down. It doesn’t work for humans, and it definitely doesn’t work for AI.
What you need is a single source of truth for how AI should generate code in your organization. One set of standards. Version-controlled. Reviewed through the same PR process as your code. And accessible to every AI tool your developers use.
The Architecture
The system I use with my teams has three layers:
Layer 1: The Root Document. A single markdown file that serves as the entry point for all AI-assisted development. It contains your core principles (the non-negotiables that apply to every task), an index of your detailed standards modules, and the rules for how AI agents should operate in your codebase. This file lives in a shared location that every repo references.
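A root document along these lines might look like the following sketch (file names, module paths, and rules are illustrative, not prescriptive):

```markdown
# Agentic Coding Standards — Root

## Core Principles (apply to every task)
- Standards apply equally to all developers, human and AI.
- Undocumented deviations are violations; cite and justify any deviation inline.
- If these standards cannot be loaded, do not generate code in this repository.

## Module Index
| Module                    | Load when...                         |
| ------------------------- | ------------------------------------ |
| standards/architecture.md | designing or modifying components    |
| standards/testing.md      | writing or changing tests            |
| standards/api-design.md   | adding or changing API endpoints     |
| standards/database.md     | touching persistence or migrations   |
| standards/security.md     | handling auth, secrets, or input     |
```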
Layer 2: Modular Standards. Individual files for each topic area (architecture, testing, API design, database conventions, security, etc.). Each one has a clear scope and a “load when” condition so the AI only pulls it in when it’s relevant. These are the detailed, task-specific rules.
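A module can declare its scope and load condition in a short header so the routing stays mechanical; for example (contents hypothetical):

```markdown
# Testing Standards

**Scope:** unit, integration, and end-to-end tests across all services.
**Load when:** the task writes or modifies any test code.

## Rules
- Every public behavior change ships with a covering test.
- Test names describe behavior ("rejects_expired_token"), not implementation ("test3").
- No network or clock dependencies in unit tests; use fakes or injected abstractions.
```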
Layer 3: Tool-Specific Entry Points. This is the key to making it work across tools. Every repo has the configuration files that each AI tool expects (CLAUDE.md for Claude, .github/copilot-instructions.md for GitHub Copilot, .cursorrules for Cursor). But these files are intentionally thin. They contain one instruction: go load the shared standards.
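A thin entry point, whatever the tool, can be a few lines; the path below is a placeholder for wherever your shared standards actually live:

```markdown
<!-- Same content in CLAUDE.md, copilot-instructions.md, and .cursorrules -->
This project follows the Agentic Coding Standards.

Load and comply with `standards/AGENTIC-STANDARDS.md` before generating any code.
Acknowledge the standards at the start of the session and cite them in non-obvious decisions.
```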
The result: every AI tool, for every developer, on every team, gets routed to the same standards.
Why Thin Pointers Matter
The temptation is to copy the standards into each tool’s configuration file. Don’t do this.
The moment you have copies, you have drift. Someone updates the Copilot instructions but forgets the Claude file. Another team forks the standards and makes local modifications that never get shared back. Within a month, you’ve got five versions of your “standards” and none of them match.
Thin pointers solve this. The tool-specific file says: “This project follows the Agentic Coding Standards. Load and comply with [path to standards].” Full stop. The standards live in one place. Changes happen in one place. Every tool picks up the changes automatically.
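You can even make drift a CI failure. Here is a minimal sketch of such a check, assuming the file names and pointer text above (adapt both to your own conventions):

```python
"""CI check: tool-specific AI config files must stay thin pointers.

Sketch under assumed conventions: TOOL_FILES, POINTER, and MAX_LINES
are placeholders to adapt to your repo layout and standards location.
"""
from pathlib import Path

TOOL_FILES = ["CLAUDE.md", ".github/copilot-instructions.md", ".cursorrules"]
POINTER = "Load and comply with `standards/AGENTIC-STANDARDS.md`"
MAX_LINES = 10  # a thin pointer should never grow into a copy of the standards


def check_thin_pointers(repo_root: str) -> list[str]:
    """Return a list of violations; an empty list means the repo is compliant."""
    problems = []
    for name in TOOL_FILES:
        path = Path(repo_root) / name
        if not path.is_file():
            problems.append(f"{name}: missing")
            continue
        text = path.read_text(encoding="utf-8")
        if POINTER not in text:
            problems.append(f"{name}: does not point at the shared standards")
        if len(text.splitlines()) > MAX_LINES:
            problems.append(f"{name}: too long to be a thin pointer (possible drift)")
    return problems
```

Wire it into CI so a pull request that fattens a pointer file, or forgets one entirely, fails before review.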
Developer Neutrality
This is a principle I enforce explicitly: standards apply equally to all developers, human and AI.
This sounds obvious, but most teams have an unspoken double standard. Human-written code goes through code review against established patterns. AI-generated code gets a quick glance and a merge because “the AI probably knows what it’s doing.”
No. AI-generated code is code. It should meet the same standards, follow the same patterns, and pass the same reviews. When your standards document says “use controller-based APIs with [ApiController], not Minimal API,” that applies whether a human typed it or an AI generated it.
Developer neutrality also means your standards don’t assume a specific tool. They describe what good code looks like, not how to configure a specific AI product. The tool-specific entry points handle the translation.
Enforcing Compliance
Standards without enforcement are suggestions. Here’s how I make them stick:
Require acknowledgment. The standards file includes an instruction: at the start of every session, the AI must acknowledge that it has loaded and will comply with the standards. If it doesn’t, the developer knows the setup is broken before any code gets written.
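The acknowledgment rule in the standards file can be a few lines; the wording here is illustrative:

```markdown
## Session Protocol
At the start of every session, state:
"Agentic Coding Standards loaded. I will comply with them and cite them."
If you cannot load the standards, say so and stop before generating any code.
```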
Require citations. When the AI makes a non-obvious decision, it must cite the specific standard it’s following. “Per architecture standards: using repository pattern with cached decorator, not direct DbContext access.” This creates a paper trail and makes code review faster.
Include it in the PR process. The PR checklist explicitly asks: “Do the changes comply with the agentic coding standards?” This isn’t just a checkbox. The reviewer should be able to see the standard citations in the AI’s work.
Formal deviation protocol. Sometimes you need to deviate from a standard. That’s fine. But undocumented deviations are violations. If you’re breaking the pattern, add an inline comment citing the standard and explaining why. This applies to humans and AI equally.
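In practice a documented deviation is just an inline comment next to the offending code. A Python-flavored sketch, where the standard path, section number, and function are all hypothetical:

```python
# DEVIATION from standards/database.md §2 ("always go through the repository
# layer"): this nightly bulk import bypasses the repository and uses a raw
# COPY statement because the repository path is far too slow at this volume.
# Approved in PR review; revisit if the repository layer gains bulk support.
def bulk_import(rows: list[dict]) -> int:
    """Load rows via raw COPY (stubbed here); returns the row count."""
    # ... the raw COPY would happen here in the real service ...
    return len(rows)
```

The comment gives the reviewer, and the next AI session, everything needed to judge whether the exception still holds.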
Enforce at the tool level. The standards include a hard line: “If an AI tool cannot load this document or cite the standards it is following, it must not be used for code generation in this repository.” This prevents the scenario where someone uses an unconfigured tool and generates code that ignores all your standards.
Version Control as Governance
Your standards should live in Git. Changes should go through pull requests. This gives you:
History. When did the standard change? Who changed it? Why?
Review. Standards changes get the same scrutiny as code changes. A team lead or architect reviews and approves before anything becomes official.
Rollback. If a new standard turns out to be wrong, you revert it like any other code change.
Visibility. Everyone can see the current state of the standards at any time. No more “I think the wiki said something about that.”
I keep the canonical version in a shared location (a private repo or a .github-private directory) and have each project repo reference it. This means a change to the standards propagates to every repo without individual updates.
Scaling Across Teams
When you have multiple teams, the question becomes: how do you allow team-specific customization without fragmenting the standards?
The answer is layering. The organization-wide standards define the floor. They cover the things that must be consistent everywhere (architecture patterns, security requirements, naming conventions, code quality expectations). Individual teams can add to these standards for their specific domain, but they can’t override the organizational layer.
In practice, this looks like:
- Organization standards: The shared root document and core modules
- Team additions: A team-specific standards file that adds context for their domain (e.g., “this service uses event-driven architecture, see [module]”)
- No local overrides: If a team thinks an org standard is wrong, the fix is a PR to the shared standards, not a local exception
This maintains consistency where it matters while giving teams flexibility where they need it.
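One possible repository layout for this layering, with illustrative names:

```text
standards-repo/                  # org-wide, PR-reviewed, canonical
├── AGENTIC-STANDARDS.md         # root document and module index
└── standards/
    ├── architecture.md
    ├── testing.md
    └── security.md

payments-service/                # one team's repo
├── CLAUDE.md                    # thin pointer to the shared standards
├── .cursorrules                 # thin pointer (same content)
└── TEAM-STANDARDS.md            # additive domain context only, no overrides
```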
Getting Started
If you’re managing multiple teams and you don’t have cross-team agentic standards yet, here’s the pragmatic path:
- Start with what you have. Collect the AI instruction files your developers are already using. Find the common patterns.
- Extract the universal rules. What applies to every team, every repo, every tool? Those are your golden rules.
- Modularize the rest. Group standards by topic. Each module should be loadable independently.
- Build the router. A root document that indexes the modules with loading conditions.
- Create thin pointers. Replace every team’s tool-specific instructions with pointers to the shared standards.
- Put it in Git. Standards changes go through PR review from this point forward.
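The thin-pointer step can be automated rather than done by hand per repo. A sketch that stamps identical pointer files into a checkout, with file names and pointer text as assumptions to adapt:

```python
"""Stamp identical thin-pointer files into a repo checkout.

Sketch under assumed conventions: adjust TOOL_FILES and POINTER_TEXT to
match your tools and wherever your shared standards actually live.
"""
from pathlib import Path

TOOL_FILES = ["CLAUDE.md", ".github/copilot-instructions.md", ".cursorrules"]
POINTER_TEXT = (
    "This project follows the Agentic Coding Standards.\n"
    "Load and comply with `standards/AGENTIC-STANDARDS.md` before generating code.\n"
)


def stamp_repo(repo_root: str) -> list[str]:
    """Overwrite each tool file with the shared pointer; return the files written."""
    written = []
    for name in TOOL_FILES:
        path = Path(repo_root) / name
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(POINTER_TEXT, encoding="utf-8")
        written.append(name)
    return written
```

Run it once per repo (or in a loop over your org's checkouts), commit the result, and from then on the drift check in CI keeps the pointers thin.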
I’ve published a reference implementation that shows the complete structure. Fork it, replace the example content with your actual standards, and deploy it across your repos.
The teams using AI without shared standards are building up inconsistency debt that will be expensive to fix later. The teams that establish this infrastructure now are building a foundation that scales.
Every developer on your team (human and AI) should be building to the same standard. The tooling exists to make that happen. The question is whether you have the discipline to set it up.