
The Three-Agent Playbook for Legacy Code Modernization

April 20, 2026 · 7 min read

Every engineering team I’ve worked with has the same legacy code conversation.

“We need to modernize the codebase.” Everyone agrees. “It’s going to take forever and we’ll probably break something critical.” Everyone also agrees. So the project gets scoped, added to the roadmap, deprioritized for more urgent work, and pushed to next quarter. And next quarter. And the quarter after that.

The problem isn’t willingness. It isn’t even time, though that’s what everyone blames. The real problem is risk. Specifically, one category of risk that makes legacy modernization genuinely dangerous: the business rules buried in the code.

The Real Cost of Legacy Modernization

Old code is dense with knowledge. Some of it is architectural (how the system is structured). Some of it is technical (what database schema it expects, what APIs it calls). But a significant portion is business logic that was never documented anywhere else. Rounding rules that match a contract from 2009. Edge case handling for a specific client’s workflow. Validation rules that reflect a regulatory requirement nobody remembers. Tax calculation logic that a developer reverse-engineered from a spreadsheet twelve years ago.

When you modernize by rewriting, this is what you’re most likely to lose. Not because your developers are careless. Because the knowledge is scattered across thousands of lines of code, embedded in method names that made sense to someone in 2011, commented out in blocks nobody has touched in years. There’s no document that says “here are all the business rules.” There’s just the code.

The traditional options are both bad. Do a big-bang rewrite and risk losing critical rules in the transition. Do incremental refactoring and watch the project stall because there’s always something more urgent.

There’s a third option. And it requires rethinking who (or what) does the extraction work.

The Three-Agent Pipeline

The approach is a sequential pipeline with three specialized AI agents, each handling a distinct phase of the modernization. The key insight is that the work has to happen in order: document before you design, design before you build.

Agent 1: The Business Analyst

The first agent’s job is to read the existing codebase as a business analyst, not as a developer. It’s not looking for how the code works. It’s looking for what the code knows.

This means scanning for:

  • Business rules: Conditional logic that reflects domain decisions (“if the account type is X and the balance exceeds Y…”), validation rules, calculation logic, edge case handling
  • Domain vocabulary: What entities exist, what they’re called, what relationships the system models
  • Integration contracts: What external systems the code talks to, what data it sends and receives, what assumptions it makes about those interactions
  • Implicit policies: Behaviors that are enforced by the code but never documented anywhere (rate limits, access rules, data retention patterns)
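
As a sketch of what the BA agent's output might look like, here is a hypothetical structured record for one extracted rule. The schema, field names, and the rounding rule itself are illustrative assumptions, not a prescribed format — the point is that each rule gets an ID, a source location, and examples a stakeholder can verify:

```python
from dataclasses import dataclass, field

@dataclass
class BusinessRule:
    """One extracted rule, ready for stakeholder review."""
    rule_id: str          # stable ID so stories can link back to this record
    source_location: str  # where in the legacy code the rule was found
    category: str         # e.g. "validation", "calculation", "policy"
    statement: str        # the rule in plain business language
    examples: list = field(default_factory=list)        # concrete input/output cases
    open_questions: list = field(default_factory=list)  # things only a stakeholder can confirm

# A record the BA agent might emit for a legacy rounding rule (hypothetical):
rule = BusinessRule(
    rule_id="BR-0042",
    source_location="billing/InvoiceCalc.java (lines vary)",
    category="calculation",
    statement="Line-item totals are rounded half-up to 2 decimals "
              "before summing, not after.",
    examples=[({"items": [1.005, 2.005]}, 3.02)],
    open_questions=["Does the 2009 contract still require half-up rounding?"],
)
```

The `open_questions` field is what makes the record reviewable rather than just archived: it flags exactly where human confirmation is needed.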

The output is structured documentation that captures this knowledge in a system of record. For most teams, that’s Confluence. It could be any documentation platform. The point is that the business rules are now outside the code, in human-readable form, reviewable by stakeholders who can verify the accuracy of what was extracted.

This step alone is valuable regardless of what comes next. Teams that have done this have discovered business rules that nobody on the current team knew existed. They’ve found contradictions (two parts of the codebase implementing the same rule differently). They’ve surfaced logic that was silently wrong for years.

Agent 2: The Product Owner

The second agent takes the BA’s documentation and turns it into structured work items.

This is where the pipeline connects to the requirements workflow I’ve written about separately (the feature-writer to story-splitter to story-writer to prefinement chain). The PO agent reads the extracted business documentation and creates features and stories that represent the modernized implementation.

Each story captures:

  • What the modernized system needs to do (derived from the extracted business rule)
  • Acceptance criteria that can be verified against the original behavior
  • The context needed for a developer (or developer agent) to implement it correctly
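
To make that concrete, a story the PO agent produces might serialize to something like this. The tracker key, field names, and wording are all hypothetical — what matters is the link back to the extracted rule and the behavior-derived acceptance criteria:

```python
# A hypothetical work item as the PO agent might create it in the tracker.
story = {
    "key": "MOD-118",              # illustrative tracker key
    "title": "Reimplement invoice line-item rounding",
    "source_rule": "BR-0042",      # traceability link to the BA documentation
    "description": (
        "The modernized billing service must round each line-item total "
        "half-up to 2 decimals before summing, matching legacy behavior."
    ),
    "acceptance_criteria": [
        "Given line items 1.005 and 2.005, the invoice total is 3.02.",
        "Rounding is applied per line item, never on the final sum.",
    ],
}
```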

The work items live in your existing tracking system (Jira, Azure DevOps, or wherever your team works). They’re linked to the source documentation. They follow the same quality standards as any other story your team would produce.

This step transforms raw extracted knowledge into an executable plan. The modernization isn’t a vague directive anymore. It’s a backlog of well-scoped, well-defined stories that any developer on the team can pick up.

Agent 3: The Developer

The third agent reads the work items and implements them according to your current architecture and coding standards.

This agent operates the same way any AI developer agent would in your environment: loading your CLAUDE.md, following your architecture standards, using the patterns established in your modern codebase. The difference is the input. Instead of a vague feature request or a verbal description, it has a precisely written story with acceptance criteria derived directly from the legacy system’s behavior.

The output is modern code that does what the legacy system does, written to your current standards, with no knowledge of how the original was implemented. It’s not a translation. It’s a reimplementation from specification.

Why This Works When Traditional Approaches Don’t

The pipeline solves the core failure mode of legacy modernization: knowledge loss.

Traditional rewrites lose knowledge because there’s no systematic extraction step. Someone reads the old code, builds a mental model, starts writing the new code, and somewhere in that translation, things get dropped. Not maliciously. Because human working memory has limits, because the old code is confusing, because there are always time pressures.

This pipeline externalizes the knowledge before anything gets rewritten. The BA agent’s documentation is the canonical record of what the system does. The PO agent’s stories operationalize that record. The developer agent implements against the stories. At each step, the knowledge is preserved in an explicit artifact.

It also solves the risk problem. Because the stories have acceptance criteria derived from the original system’s behavior, you have a clear definition of success. When the modernized implementation passes the acceptance criteria, it replicates the original behavior. When it doesn’t, you know exactly what’s wrong before anything goes to production.
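
As an illustration of that verification loop, a characterization test can pin the modern implementation to the legacy behavior. Everything here is a hypothetical sketch — the function names and the half-up rounding rule stand in for whatever your BA agent actually extracted:

```python
from decimal import Decimal, ROUND_HALF_UP

def legacy_invoice_total(items):
    # Stand-in for the legacy system's observed behavior:
    # round each line half-up to 2 decimals, then sum.
    return sum(Decimal(str(x)).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
               for x in items)

def modern_invoice_total(items):
    # The modernized implementation under test.
    return sum(Decimal(str(x)).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
               for x in items)

# Acceptance criteria derived from the original system's behavior:
cases = [([1.005, 2.005], Decimal("3.02")),
         ([0.1, 0.2], Decimal("0.30"))]
for items, expected in cases:
    assert modern_invoice_total(items) == expected
    assert modern_invoice_total(items) == legacy_invoice_total(items)
```

A failing assertion here surfaces a behavioral divergence during development, long before anything reaches production — which is what gives the acceptance criteria their teeth.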

Running Modernization Alongside Daily Development

Here’s the operational advantage that makes this practical for teams that can’t stop regular development to run a modernization project: you can run multiple Claude Code instances simultaneously.

One instance handles the sprint work. Another runs the modernization pipeline in parallel. They’re operating independently, in separate contexts, without interfering with each other.

This changes the economics of modernization entirely. It’s no longer a choice between maintaining velocity on current work and paying down the legacy debt. Both happen in parallel. The modernization pipeline runs as a background workstream, generating stories, implementing modules, making progress while the team stays focused on delivery.

For teams with a large legacy codebase, this is significant. The modernization doesn’t have to be a “big project.” It can be a continuous background process that gradually replaces legacy modules with modern implementations, one well-scoped story at a time.

Getting Started

The pipeline uses tools that already exist if your team has adopted an agentic development workflow: an AI coding agent, a documentation platform, and a work tracker.

The sequencing is what matters. BA first. PO second. Developer third. Don’t skip the extraction step, and don’t merge the roles. The BA’s job is to understand the legacy system without judgment. The PO’s job is to translate that understanding into modern requirements. The developer’s job is to implement without ever looking at the old code.

That separation is what keeps the legacy system’s bad patterns from leaking into the new implementation.

The Business Case

For engineering leaders making the case to modernize, the pipeline also provides something traditional approaches can’t: a documented audit trail.

Every business rule that gets extracted is documented. Every story that implements a rule is traceable back to the documentation. When a stakeholder asks “how do we know the new system handles the tax calculation the same way the old one did?”, there’s an answer. There’s a BA document that describes the rule, a story with acceptance criteria that codifies it, and a test that verifies the implementation.

That traceability makes the modernization auditable in a way that “we rewrote it and it seems to work” never is.

Legacy code accumulates because the cost of touching it feels too high. This pipeline changes that cost structure. The risk of knowledge loss is managed by the extraction step. The risk of scope sprawl is managed by the story structure. The risk of blocking daily development is managed by parallel execution.

The codebase that’s been on the modernization roadmap for three years doesn’t have to stay there.
