
From Monolith to Microservices: The Incremental Migration Playbook

Most microservices migrations fail because teams try to do them all at once. The strangler fig pattern, done correctly, lets you migrate incrementally without a big-bang rewrite and without taking your system offline.

Brihat Team

Engineering Team

3 March 2026 · 13 min read

The Rewrite Temptation

Every engineer who has worked on a large monolith has had the thought: if we could just start over with the right architecture, everything would be better. The impulse is understandable. The execution is almost always a mistake.

The big-bang rewrite has a well-documented failure rate. Joel Spolsky wrote about it in 2000. Decades of accumulated industry experience have confirmed the pattern: rewrites take three times longer than estimated, the new system has different bugs than the old system, and the business does not stop generating requirements while the rewrite is in progress. You end up chasing a moving target with a system that the original team understands less well than the one being replaced.

The incremental migration, done correctly, delivers the architectural improvements of a microservices approach without the catastrophic risk of a big-bang rewrite. This is the playbook we use.

The Strangler Fig Pattern

The strangler fig is a tree that grows around a host tree, gradually replacing it. The architectural pattern mirrors this: you grow new microservices around the existing monolith, gradually shifting traffic from monolith to services until the monolith can be retired.

The key enabler is an API gateway or proxy layer that sits in front of both the monolith and the new services. Initially, all traffic goes to the monolith. As new services are built and validated, the proxy routes specific paths to the new services. The monolith handles everything else until it handles nothing.

Phase 1: The Facade Layer

Before extracting anything, add a facade layer in front of your monolith. This is a lightweight proxy (NGINX, AWS API Gateway, or a thin Node.js/Go service) that forwards all requests to the monolith unchanged.

Adding this layer before you extract anything gives you several things: a routing layer you can control without modifying the monolith, a place to add cross-cutting concerns (authentication, rate limiting, logging) independent of the monolith, and a mechanism to do gradual traffic shifting via feature flags or percentage rollouts.

The monolith should not know the facade layer exists. All requests it receives should look identical to what they looked like before the facade was added.
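As a sketch, the facade's routing core fits in a few lines of Go. Everything here is illustrative, not a prescribed implementation: the hostnames (`monolith.internal`, and later the extracted services) and the `resolveBackend`/`newFacade` names are assumptions, and the route table starts empty so that every request falls through to the monolith, exactly as Phase 1 requires.

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

// monolithURL is the default backend. The hostname is hypothetical.
var monolithURL = mustParse("http://monolith.internal:8080")

// routes maps extracted path prefixes to their new services.
// It starts empty: in Phase 1 the facade forwards everything.
var routes = map[string]*url.URL{}

func mustParse(raw string) *url.URL {
	u, err := url.Parse(raw)
	if err != nil {
		panic(err)
	}
	return u
}

// resolveBackend returns the backend for a request path: an extracted
// service if the path matches a migrated prefix, else the monolith.
func resolveBackend(path string) *url.URL {
	for prefix, target := range routes {
		if strings.HasPrefix(path, prefix) {
			return target
		}
	}
	return monolithURL
}

// newFacade builds the proxy. The Director rewrites only scheme and
// host, so the monolith sees requests exactly as it did before the
// facade existed.
func newFacade() http.Handler {
	return &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			target := resolveBackend(req.URL.Path)
			req.URL.Scheme = target.Scheme
			req.URL.Host = target.Host
		},
	}
}
```

As each service is extracted, adding one entry to `routes` shifts its path prefix away from the monolith without touching monolith code; feature flags or percentage rollouts slot into `resolveBackend` the same way.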

Phase 2: Identify Extraction Candidates

Not all monolith components are equal extraction candidates. The best candidates have these properties:

  • Clear domain boundaries. The component has a well-defined responsibility and limited coupling to other parts of the monolith.
  • Stable interface. The API contract between this component and its consumers is stable and well-understood.
  • High change rate. Components that change frequently benefit most from independent deployment.
  • Clear operational requirements. If this component needs to scale independently of the monolith (e.g., a report generation service that is compute-intensive), extraction gives you that capability.

Common first-extraction candidates: authentication/authorisation (every new service will depend on it, so extracting it early keeps services from calling back into the monolith), notification services (email, SMS — low coupling, well-defined interface), file processing (often compute-intensive and independently scalable), and reporting/analytics (read-heavy, can run against a read replica).

Phase 3: The Extraction

For each extraction, follow this sequence:

  1. Define the service contract first. Document the API the new service will expose before writing any code. Review it with the teams that will consume it.
  2. Build the new service. It should be independently deployable and have its own data store if it owns data.
  3. Deploy alongside the monolith. The new service is live but receives no production traffic.
  4. Shadow traffic. Route production requests to both the monolith and the new service in parallel. Compare responses. Fix discrepancies in the new service. This phase runs until the new service's responses match the monolith's responses on real production traffic.
  5. Gradual rollout. Shift 1% of traffic to the new service. Monitor error rates and latency. Increase to 5%, 25%, 50%, 100%. Roll back immediately if metrics degrade.
  6. Remove the monolith code path. Once 100% of traffic goes to the new service and has been stable for a defined period, remove the corresponding code from the monolith.
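Step 4, shadow traffic, can be sketched as a handler that serves the monolith's response to the caller while replaying the same request against the new service in the background. This is a minimal sketch, not a production mirroring setup: the hostnames, the `responsesMatch` byte-for-byte comparison, and the log-only discrepancy handling are all assumptions.

```go
package main

import (
	"bytes"
	"io"
	"log"
	"net/http"
)

type result struct {
	status int
	body   []byte
}

// responsesMatch reports whether the new service agrees with the
// monolith. Discrepancies are exactly what the shadow phase exists
// to surface before any real traffic shifts.
func responsesMatch(monolith, candidate result) bool {
	return monolith.status == candidate.status &&
		bytes.Equal(monolith.body, candidate.body)
}

func fetch(base, path string) (result, error) {
	resp, err := http.Get(base + path)
	if err != nil {
		return result{}, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return result{resp.StatusCode, body}, err
}

// shadowHandler answers from the monolith (still authoritative) and
// replays the request against the new service asynchronously, so
// shadowing never adds user-facing latency.
func shadowHandler(w http.ResponseWriter, r *http.Request) {
	mono, err := fetch("http://monolith.internal:8080", r.URL.Path)
	if err != nil {
		http.Error(w, "upstream error", http.StatusBadGateway)
		return
	}
	go func(path string) {
		cand, err := fetch("http://new-service.internal:9000", path)
		if err != nil || !responsesMatch(mono, cand) {
			log.Printf("shadow mismatch on %s", path)
		}
	}(r.URL.Path)
	w.WriteHeader(mono.status)
	w.Write(mono.body)
}
```

In practice the comparison usually needs to tolerate legitimate differences (timestamps, generated IDs), but the shape is the same: the new service is exercised by real traffic while the monolith remains the source of truth.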

Data Migration: The Hard Part

The hardest part of microservices migration is not the service code — it is the data. A monolith typically uses a single shared database. Microservices best practice says each service should own its data. Getting from the former to the latter while the system is live is genuinely difficult.

The pattern that works: dual-write.

  1. During the migration period, writes go to both the monolith's database and the new service's database.
  2. Reads come from the monolith's database (the authoritative source) until the new service's data is confirmed to be consistent.
  3. Reads then switch to the new service's database.
  4. The dual-write is stopped.
  5. Finally, the data is removed from the monolith's database.

This is more work than the alternatives, but it gives you a clean rollback option at every step and never requires taking the system offline for data migration.
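The dual-write flow can be sketched with in-memory maps standing in for the two databases. The `store` maps, the `readFromService` flag, and the key names are all illustrative; a real implementation also has to handle partial write failures between the two stores.

```go
package main

// store stands in for a real data store in this sketch.
type store map[string]string

var (
	monolithDB = store{}
	serviceDB  = store{}
	// readFromService flips only after the service's data has been
	// verified consistent with the monolith's.
	readFromService = false
)

// write goes to BOTH stores for the whole migration period, so
// either one can become authoritative without data loss.
func write(key, value string) {
	monolithDB[key] = value
	serviceDB[key] = value
}

// read consults whichever store is currently authoritative,
// giving a clean rollback path: flip the flag back and the
// monolith's database is the source of truth again.
func read(key string) (string, bool) {
	if readFromService {
		v, ok := serviceDB[key]
		return v, ok
	}
	v, ok := monolithDB[key]
	return v, ok
}
```

Each step of the migration corresponds to a small, reversible change here: flip `readFromService`, then stop writing to `monolithDB`, then delete its data, in that order.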

When to Stop Extracting

The goal of microservices migration is not to have zero monolith code remaining. The goal is to have a system that is easier to maintain, deploy, and scale than the original. Sometimes that means extracting 5 services and leaving the rest in the monolith. Sometimes it means extracting 30 services. The right stopping point is when the benefits of further extraction — independent deployability, independent scalability, team autonomy — no longer outweigh the operational overhead of managing additional services.

A monolith that is well-structured, has clear module boundaries, and deploys cleanly is better than 30 microservices with unclear boundaries and an operations team that cannot keep up. The architecture serves the team and the business — not the other way around.


The Brihat Infotech engineering team builds enterprise-grade digital systems — platforms, SaaS products, AI integrations, and workflow automations for clients across healthcare, fintech, edtech, and logistics.
