Microservices: A Strategy for Incrementally Upgrading Legacy PHP Frameworks
In the African savanna, when drought strikes, elephant herds rely on their matriarchs’ memory of distant water sources—knowledge accumulated over decades. But what happens when the entire landscape changes? When old waterholes dry up permanently and new ones appear in unfamiliar territories? The herd cannot pack up and move overnight; they must gradually explore, test, and adapt their routes while still surviving each day.
Your legacy PHP application faces a similar challenge. When upgrading from Symfony 3.4 to Symfony 7, or Laravel 5.5 to Laravel 11, the landscape of PHP has changed dramatically. The old paths—deprecated APIs, removed extensions, changed behaviors—no longer lead to water. But a complete rewrite, like abandoning familiar territory entirely, risks the survival of your business.
Fortunately, nature offers another pattern: the strangler fig. This tropical fig (found across Africa, Asia, and Australia) doesn’t attack its host—it gently grows around it, eventually replacing it while providing new structure. In software, the Strangler Fig Pattern lets you incrementally upgrade your legacy PHP frameworks by gradually replacing pieces with new microservices. Over time, the modern system grows while the monolith shrinks, until one day you realize the legacy codebase has disappeared entirely.
That’s the strategy we’ll explore in this guide: using microservices to incrementally upgrade legacy PHP frameworks without the risks of a “big bang” rewrite.
The Monolith Modernization Challenge
A monolithic application is a single, unified codebase where all functionality is tightly coupled. This architecture makes modernization incredibly difficult—a single change can have cascading effects. The idea of upgrading the entire framework at once involves halting all new feature development to focus on a massive, risky migration. This often leads to a state of paralysis, where technical debt accumulates and the business falls behind.
Of course, you have alternatives. You could run both systems in parallel—maintaining the old monolith while building a new one from scratch. Or you could attempt a “big bang” rewrite, shutting down the old system and switching to the new one all at once. Both approaches have significant trade-offs we’ll examine shortly.
The Strangler Fig Pattern: A Path Forward
Coined by Martin Fowler, the Strangler Fig Pattern offers a powerful alternative—the name comes from the strangler fig, a plant that germinates in a host tree’s canopy and grows around it, eventually replacing it entirely. In software, this means building a new, modern system around the edges of the old one.
The strategy is simple: instead of rewriting the entire monolith, you gradually “strangle” it by replacing small pieces of functionality with new microservices. Over time, these new services grow and multiply, while the old monolith shrinks. Eventually, the legacy system is either completely replaced or reduced to a small, manageable core—perhaps a legacy admin interface or a specialized module that’s not worth the migration effort.
An Incremental Upgrade Strategy in 5 Steps
Here is a practical, step-by-step guide we’ve seen work for teams upgrading from legacy frameworks like Symfony 3.4 or Laravel 5.5 to modern versions running on PHP 8.2+.
Step 1: Identify the Seam (A Bounded Context)
The first step is to find a good “seam” in your application—a piece of functionality that is relatively isolated from the rest of the system. This is often called a “bounded context.” Good candidates for an initial microservice include:
- An API endpoint that is consumed by a mobile app or a third-party service—maybe the /api/notifications endpoint that’s already documented with an OpenAPI spec.
- A self-contained feature like a notification service, a PDF generator using Dompdf, or an image processor with Intervention Image.
- A specific business domain that can be logically separated from the core—perhaps user profile management or billing history.
Avoid, though, the most central, heavily-coupled parts of your application: the user authentication system that’s referenced everywhere, the order processing workflow that touches dozens of tables, or the billing subsystem that financial reporting depends on. Those are better left for later, once your team has mastered the pattern.
Choosing a small, low-risk area allows your team to learn the process without jeopardizing the entire application. We typically recommend starting with something that has clear boundaries and good test coverage.
Step 2: Build the New Service
Once you’ve identified the functionality, build it as a new, independent microservice. This is your opportunity to use a modern PHP framework—perhaps Laravel 11 (released March 2024) or Symfony 7 (released November 2023)—along with current best practices you’ve been wanting to adopt.
For example, if you’re extracting a notification service, you might create a simple JSON API endpoint. Here’s what that looks like in a fresh Laravel 11 application:
<?php
// routes/api.php
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;
use App\Http\Controllers\NotificationController;
Route::post('/send-notification', [NotificationController::class, 'send']);
Of course, you’ll need to implement the NotificationController with proper validation, error handling, and possibly queue integration. The key point is that this service is developed, tested, and deployed completely separately from the monolith. You can use PHPUnit or Pest for testing, run Composer with --no-dev for production builds, and deploy to a separate server or container—all without touching the legacy codebase.
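To make that concrete, here is a minimal sketch of the payload validation such a controller might perform, written as framework-agnostic plain PHP rather than Laravel’s validator (the field names mirror the curl examples later in this guide; the function name and limits are illustrative assumptions):

```php
<?php
// Hypothetical validation helper for the notification payload.
// Field names (user_id, message) match the curl examples in this guide;
// the 1000-character limit is an arbitrary illustrative choice.

/**
 * @throws InvalidArgumentException when the payload is malformed
 */
function validateNotificationPayload(array $data): array
{
    if (!isset($data['user_id']) || !is_int($data['user_id']) || $data['user_id'] <= 0) {
        throw new InvalidArgumentException('user_id must be a positive integer');
    }
    if (!isset($data['message']) || !is_string($data['message']) || trim($data['message']) === '') {
        throw new InvalidArgumentException('message must be a non-empty string');
    }
    if (strlen($data['message']) > 1000) {
        throw new InvalidArgumentException('message must be 1000 characters or fewer');
    }

    return ['user_id' => $data['user_id'], 'message' => trim($data['message'])];
}
```

In a real Laravel controller you would reach for Form Requests instead, but the point stands: this logic lives entirely in the new service, tested and deployed on its own.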
Step 3: Deploy the Proxy or API Gateway
This is the most critical part of the infrastructure. A reverse proxy—we commonly see teams use NGINX, Traefik, or an API Gateway like Kong or Apache APISIX—sits in front of all your traffic. Initially, it does nothing but pass every request straight through to your legacy monolith.
Here’s what that initial configuration looks like in NGINX:
# nginx.conf - Initial State
upstream monolith_app {
    server 127.0.0.1:8080;   # legacy monolith (adjust host and port)
}

server {
    listen 80;
    server_name www.yourapp.com;

    location / {
        proxy_pass http://monolith_app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
This setup ensures no disruption to the existing application while you prepare to make the switch. Of course, you’ll want to test this configuration thoroughly—perhaps using a staging environment first—to verify that all routes function correctly with the proxy in place. You might also want to add rate limiting, request logging, and health checks at this stage, since the proxy will be managing traffic for both the monolith and new services.
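For example, basic rate limiting and a proxy-level health check can be layered on without touching the monolith. The zone name, rate, and burst values below are illustrative assumptions, not recommendations:

```nginx
# nginx.conf - optional hardening at the proxy layer (illustrative values)
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

server {
    listen 80;
    server_name www.yourapp.com;

    # Lightweight health check answered by the proxy itself
    location = /healthz {
        return 200 "ok\n";
    }

    location / {
        limit_req zone=api_limit burst=20 nodelay;
        proxy_pass http://monolith_app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

This assumes a monolith_app upstream is defined elsewhere in the config; tune the limits against your real traffic before enforcing them.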
Step 4: Reroute the Traffic (“Strangle”)
With the new microservice deployed and the proxy in place, you can now “strangle” the old functionality. You update the proxy configuration to route traffic for the specific feature to the new service. All other traffic continues to flow to the monolith.
Here’s how that looks in practice:
# nginx.conf - Rerouting Traffic
upstream monolith_app {
    server 127.0.0.1:8080;   # legacy monolith
}

upstream notification_microservice {
    server 127.0.0.1:8081;   # new Laravel 11 service
}

server {
    listen 80;
    server_name www.yourapp.com;

    # Route traffic for the notification API to the new microservice
    location /api/send-notification {
        proxy_pass http://notification_microservice;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # Everything else still goes to the old monolith
    location / {
        proxy_pass http://monolith_app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
This change—often only a few lines in your configuration—is small, easy to test, and instantly reversible if something goes wrong. We recommend testing with a canary deployment first: route a small percentage of traffic (say 5%) to the new service, monitor closely, then gradually increase. You have successfully replaced a piece of the legacy application with a modern service with zero downtime—which is precisely the beauty of this approach.
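NGINX can express that canary split directly with the split_clients directive. A sketch, assuming monolith_app and notification_microservice upstreams are defined elsewhere in the config and that 5% is your starting percentage:

```nginx
# nginx.conf - canary: send roughly 5% of clients to the new service
split_clients "${remote_addr}" $notify_backend {
    5%      notification_microservice;
    *       monolith_app;
}

server {
    listen 80;
    server_name www.yourapp.com;

    location /api/send-notification {
        # $notify_backend resolves to one of the named upstreams above
        proxy_pass http://$notify_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location / {
        proxy_pass http://monolith_app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Because split_clients hashes the client address, a given user consistently lands on the same backend, which makes canary behavior easier to reason about.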
Practical Walkthrough: Verifying the Reroute
Let’s walk through how you might verify that the reroute works correctly. Suppose you’ve deployed your notification microservice on port 8081 and updated the NGINX config. Here’s a simple verification process:
First, test the new service directly (bypassing the proxy) to confirm it’s running:
$ curl -X POST http://localhost:8081/api/send-notification \
-H "Content-Type: application/json" \
-d '{"user_id": 123, "message": "Test notification"}'
{"status":"queued","message_id":"abc123"}
Then, test through the proxy using the same request:
$ curl -X POST http://www.yourapp.com/api/send-notification \
-H "Content-Type: application/json" \
-d '{"user_id": 123, "message": "Test notification"}'
{"status":"queued","message_id":"abc123"}
Of course, you’ll also want to verify that other endpoints still hit the monolith:
$ curl http://www.yourapp.com/dashboard
<!-- Should return your Laravel or Symfony dashboard HTML -->
If anything goes wrong, you can roll back the NGINX config instantly:
$ sudo nginx -s reload # After restoring previous config
We’ve found it helpful to automate these checks with a simple script that runs before and after deployment—this catches routing errors before they affect users.
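Here is a sketch of what such a smoke-check script might look like. The routing table, base URL, and expected status codes are assumptions based on the configuration above; adapt them to your own routes:

```shell
#!/bin/sh
# smoke.sh - sanity-check the proxy routing table, then probe live endpoints.
# Run as "sh smoke.sh run" against a live environment; without arguments it
# only defines the helpers (useful for testing the routing table itself).

BASE_URL="${BASE_URL:-http://www.yourapp.com}"

# Mirror of the NGINX routing table: path -> expected backend.
backend_for() {
    case "$1" in
        /api/send-notification*) echo "notification_microservice" ;;
        *)                       echo "monolith_app" ;;
    esac
}

# Probe a URL and fail loudly on an unexpected HTTP status code.
check_status() {
    url="$1"; expected="$2"
    actual=$(curl -s -o /dev/null -w '%{http_code}' "$url")
    if [ "$actual" != "$expected" ]; then
        echo "FAIL: $url returned $actual (expected $expected)" >&2
        return 1
    fi
    echo "OK: $url -> $actual"
}

if [ "${1:-}" = "run" ]; then
    check_status "$BASE_URL/dashboard" 200 || exit 1
    # The notification endpoint may return 200, 202, or 422 depending on
    # your validation rules; adjust the expectation accordingly.
    check_status "$BASE_URL/api/send-notification" 405 || exit 1
fi
```

Wire it into CI or your deploy pipeline so a routing regression fails the deployment rather than surprising users.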
Step 5: Rinse and Repeat
This is not a one-time event but an iterative process. Once the first microservice is running successfully—you’ve verified functionality, performance meets expectations, and the team is comfortable with the workflow—you can return to Step 1 and identify the next piece of functionality to extract.
At Durable Programming, we’ve seen teams move at different paces: some complete one extraction per sprint, others do one per month. The rhythm depends on your application’s complexity, team capacity, and business priorities. With each iteration, the modern part of your system grows, and the monolith shrinks. You might maintain a public dashboard showing the percentage of traffic routed to modern services—turning the technical work into a visible, motivating metric.
Benefits of This Approach
When we’ve worked with teams implementing this pattern—whether upgrading from Symfony 4 to Symfony 6 or migrating from Laravel 6 to Laravel 10—we consistently see these benefits:
- Reduced Risk: By breaking the migration into small, manageable chunks—each a few days or weeks of work—you avoid the immense risk of a single, large-scale deployment. If one extraction encounters issues, you roll back that specific service without affecting the rest of your application.
- Immediate Value: Your team can start using modern tools and shipping new features—PHP 8.2’s readonly properties, Symfony’s Messenger component, Laravel’s improved validation—without waiting for a multi-year rewrite to finish. Each microservice you deliver provides value immediately.
- Improved Developer Morale: Engineers are happier and more productive when working with modern, well-supported technologies. It’s discouraging to work with deprecated features and unmaintained dependencies; this approach lets your team regain that sense of craft and progress.
- Spreadable Cost: The cost of the upgrade is distributed over time—often across multiple fiscal years—making it much more palatable from a budget perspective. You can show progress and ROI quarterly rather than hoping for a payoff years in the future.
Alternative Approaches: A Comparison
Before we dive deeper into challenges, it’s worth comparing the Strangler Fig approach with the other main strategies teams consider. Each has its place—though in our experience, the incremental approach works best for most business-critical applications.
Big Bang Rewrite
You shut down the old system entirely and switch to the new one all at once.
Pros: Clean break, no long-term complexity of maintaining two systems, potentially faster if everything goes perfectly.
Cons: Extremely high risk—if the new system has undiscovered bugs or performance issues, your business goes down. Takes years of engineering effort with zero visible progress. Feature parity is rarely achieved, forcing teams to rebuild forgotten features after launch. We’ve seen this approach fail more often than succeed.
When it might work: Small applications (< 10k LOC), non-critical systems, or when the legacy codebase is truly irredeemable.
Parallel Run
You maintain both systems running simultaneously, gradually shifting traffic—but this is different from the Strangler Fig. In a true parallel run, you duplicate functionality in both systems and keep them in sync manually or via shared databases.
Pros: Allows thorough testing with live data, provides fallback option.
Cons: Maintaining two complete systems doubles your work. Data synchronization becomes a nightmare—how do you keep user records, orders, or sessions consistent across both? This approach often leads to subtle bugs as features drift apart. Teams burn out from context-switching.
When it might work: Very short transition periods (a few weeks), or when the new system can’t fully replicate legacy functionality initially and you need a fallback.
Strangler Fig Pattern
You gradually replace pieces with new services, as described in this guide.
Pros: Low risk, continuous delivery of value, learn as you go, reversible at each step, business keeps running.
Cons: Some duplicate effort during transition (old and new code both exist), introduces operational complexity of distributed systems, requires careful API design and versioning.
When it works best: Large, business-critical applications where downtime is unacceptable and the business needs to continue shipping features.
In our view, the Strangler Fig Pattern’s advantages far outweigh its drawbacks for most organizations—but understanding the alternatives helps you make an informed decision.
Potential Challenges and Considerations
While this strategy is powerful—we’ve seen it succeed where other approaches failed—it’s important to be aware of potential complexities before you begin. The Strangler Fig Pattern introduces challenges that your team needs to manage deliberately. Let’s examine them honestly:
Distributed System Complexity: Once you have two or more services communicating over the network, you face issues that don’t exist in a monolith. Network latency—though typically measured in just a few milliseconds within a data center—can accumulate. Partial failures become possible: your notification service might time out while the main application continues. Of course, these aren’t insurmountable problems, but they require different design patterns: timeouts, retries with exponential backoff, circuit breakers (PHP libraries such as Ganesha implement this, and Laravel’s HTTP client ships a retry helper), and graceful degradation.
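As one concrete illustration, a retry-with-exponential-backoff helper in plain PHP might look like the following. This is a sketch, not a substitute for a vetted library, and the function name and defaults are illustrative:

```php
<?php
// Sketch: retry a flaky call with exponential backoff.
// The delay doubles on each failed attempt: 100ms, 200ms, 400ms, ...

function retryWithBackoff(callable $fn, int $maxAttempts = 3, int $baseDelayMs = 100)
{
    for ($attempt = 1; ; $attempt++) {
        try {
            return $fn();
        } catch (RuntimeException $e) {
            if ($attempt >= $maxAttempts) {
                throw $e; // out of attempts: surface the last failure
            }
            // Exponential backoff before the next attempt
            usleep($baseDelayMs * (2 ** ($attempt - 1)) * 1000);
        }
    }
}
```

A production version would usually add jitter to the delay and a cap on the maximum wait, so that many clients retrying at once don’t hammer a struggling service in lockstep.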
Data Consistency: In a monolith, multiple transactions touching different tables happen atomically within a single database connection. With microservices—each potentially with its own database—ensuring consistency across service boundaries becomes harder. You might need to adopt eventual consistency patterns, use message queues like RabbitMQ or Apache Kafka to coordinate changes, or implement the Saga pattern for multi-step transactions. This isn’t trivial; it’s a shift in how you think about data integrity. Many teams, though, find that they overestimated their need for strong consistency in the first place. Bounded contexts—like a notification service—often don’t actually need distributed transactions; they can operate off read replicas or event streams with acceptable staleness.
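One widely used building block here is the transactional outbox: the business write and the "event to publish" row commit atomically in one local transaction, and a separate relay process later ships outbox rows to RabbitMQ or Kafka. A sketch in plain PHP, using an in-memory SQLite database purely for illustration (the table and event names are assumptions):

```php
<?php
// Sketch of the transactional outbox pattern. Requires the pdo_sqlite
// extension; in production this would be your service's real database.

$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE notifications (id INTEGER PRIMARY KEY, user_id INT, message TEXT)');
$pdo->exec('CREATE TABLE outbox (id INTEGER PRIMARY KEY, event TEXT, payload TEXT, published INT DEFAULT 0)');

function queueNotification(PDO $pdo, int $userId, string $message): void
{
    $pdo->beginTransaction();
    try {
        $stmt = $pdo->prepare('INSERT INTO notifications (user_id, message) VALUES (?, ?)');
        $stmt->execute([$userId, $message]);

        // The event row commits in the SAME transaction as the business row,
        // so the relay never sees an event for data that was rolled back.
        $stmt = $pdo->prepare('INSERT INTO outbox (event, payload) VALUES (?, ?)');
        $stmt->execute(['notification.queued', json_encode(['user_id' => $userId])]);

        $pdo->commit(); // both rows, or neither
    } catch (Throwable $e) {
        $pdo->rollBack();
        throw $e;
    }
}

queueNotification($pdo, 123, 'Test notification');
```

The relay then polls for unpublished outbox rows, publishes them to the broker, and marks them published, giving you at-least-once delivery without distributed transactions.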
Operational Overhead: Each additional service means another codebase to deploy, monitor, back up, and secure. Your team needs containerization experience (Docker), orchestration knowledge (Kubernetes, AWS ECS, or even simple systemd services), and observability tooling (Prometheus for metrics, Grafana for dashboards, ELK stack or Loki for logs). The learning curve is real. Our advice: start with one or two services, master the deployment workflow, then expand. Don’t try to containerize everything on day one—you can run PHP microservices on traditional VMs with NGINX and Supervisor for process management.
Testing Complexity: Integration tests that once ran against a single database now need to test multiple services. We recommend contract testing using tools like Pact to verify API compatibility, as well as end-to-end tests that spin up all services in Docker Compose. But of course, the goal is to have enough unit and integration tests at the service level that you don’t need massive end-to-end suites. That’s another reason to choose bounded contexts carefully—you want services with clear, narrow responsibilities that can be tested in isolation.
Team Skill Requirements: Your developers need to understand service-oriented architecture principles: API versioning, backward compatibility, fault tolerance, and distributed tracing. They should be comfortable with tools like cURL, Postman, or Insomnia for API testing, and know how to use OpenAPI/Swagger to document interfaces. If your team is fresh to these concepts, budget time for learning—and consider bringing in a consultant for the first few extractions.
Latency and Performance: A monolithic Laravel application might make five database queries to render a page. After splitting into three services, that same page now involves HTTP requests between services—adding overhead. You’ll need to think about service-to-service communication: REST with JSON (simple, widely supported), gRPC (faster binary protocol, better for internal services), or GraphQL (if you need flexible querying). Each choice involves trade-offs. Also, consider co-locating services to minimize network hops, perhaps placing them on the same VPC or even the same host.
Database Duplication: You might find yourself duplicating reference data (like user IDs, product codes) across multiple service databases. This is a known pattern—sometimes called “database per service”—but it introduces synchronization challenges. We’ve seen teams use change data capture (Debezium) or scheduled sync jobs to keep critical data aligned. In other cases, shared databases for truly global data are acceptable, though that blurs the service boundaries. You need to decide what makes sense for your domain.
Security Surface Area: More services mean more endpoints to secure. Each API needs authentication (likely JWT or API keys), authorization checks, rate limiting, and input validation. A compromised service could potentially pivot to others if your network segmentation isn’t solid. We recommend treating each microservice as if it’s publicly exposed, even if it’s internal—use TLS for all inter-service communication, implement mutual TLS if feasible, and apply the principle of least privilege for service-to-service calls.
Versioning and API Lifecycle: How do you evolve APIs without breaking existing consumers? You’ll need a strategy: semantic versioning with clear deprecation timelines (e.g., keep v1 endpoints for six months after releasing v2), feature flags to switch gradually, or header-based routing. The OpenAPI spec becomes essential for documenting contracts. This isn’t just technical—it involves communication with any teams consuming your services, whether internal mobile developers or external partners.
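Path-based version routing can live at the same proxy layer you already run. A sketch of one possible shape—the v1/v2 paths, upstream names, and sunset date are all illustrative assumptions, and the Deprecation header is still an IETF draft at the time of writing:

```nginx
# nginx.conf - route API versions to independent deployments (illustrative)
location /api/v1/send-notification {
    proxy_pass http://notification_v1;
    # Advertise the deprecation so well-behaved clients can migrate
    add_header Deprecation "true";
    add_header Sunset "Wed, 31 Dec 2025 23:59:59 GMT";  # RFC 8594
}

location /api/v2/send-notification {
    proxy_pass http://notification_v2;
}
```

Keeping both versions behind distinct upstreams means you can retire v1 by deleting one location block once traffic drops to zero.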
Cost: More services can mean higher infrastructure costs—multiple servers or containers, additional monitoring tools, more complex CI/CD pipelines. That said, you might also gain efficiency: independent scaling (scale only the notification service during peak times), better resource utilization through container scheduling, and the ability to use cheaper hardware for non-critical services. Do a rough TCO analysis before committing; in our experience, the benefits typically outweigh costs after the first 3-4 services, but YMMV.
Monitoring and Debugging: When a request goes through three services, where did it fail? You need distributed tracing—tools like Jaeger, Zipkin, or OpenTelemetry instrumentation. Logs should include correlation IDs that flow through all services. This infrastructure takes time to set up and maintain. Without it, debugging feels like investigating a crime scene with no evidence.
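Correlation IDs themselves are cheap to generate and propagate. A plain-PHP sketch—the X-Correlation-Id header name is a common convention rather than a standard, and the validation pattern is an illustrative assumption:

```php
<?php
// Sketch: reuse an incoming correlation ID, or mint a fresh one, so every
// log line and downstream request in this trace carries the same identifier.

function correlationId(array $headers): string
{
    $incoming = $headers['X-Correlation-Id'] ?? '';
    // Accept only sane-looking IDs to avoid log injection via headers
    if (is_string($incoming) && preg_match('/^[A-Za-z0-9\-]{8,64}$/', $incoming)) {
        return $incoming; // propagate the caller's ID
    }
    return bin2hex(random_bytes(16)); // fresh 32-char hex ID for a new trace
}
```

Middleware would call this once per request, stash the value in the logger context, and forward it on every outbound HTTP call.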
The Biggest Pitfall: Premature Decomposition
We’ve seen teams get excited and break their monolith into dozens of tiny services before they have the operational maturity to handle it. The result is chaos: deployments taking hours, debugging impossible, and costs ballooning. Your monolith, though complex and aging, at least has the advantage of being a single, coherent system you understand. Don’t trade that for distributed chaos.
Our advice: extract one service at a time, get comfortable with the new complexity, and only proceed when you have reliable deploy and rollback processes. Remember, the goal is to make progress, not to achieve some arbitrary microservices count.
One approach that can mitigate many of these challenges: start with a modular monolith. Refactor your legacy application into well-defined modules within the same codebase, using clear interfaces and perhaps Symfony bundles or Laravel packages. Once you have a clean modular structure, extracting a module into a service becomes much easier. This gives you some of the architectural benefits without the full distributed-systems complexity—and you can always extract later when you’re ready.
Ultimately, these challenges don’t make the Strangler Fig Pattern unworkable—they represent the price you pay for reduced risk and continuous delivery. Most teams we’ve worked with find them manageable, especially when they go in with eyes open and build operational capabilities incrementally.
Conclusion
Upgrading a legacy PHP application—whether it’s Symfony 3.4 reaching end-of-life or Laravel 5.5 missing out on years of improvements—doesn’t have to be a high-stakes gamble. The microservices-based Strangler Fig Pattern provides a proven, low-risk, and iterative path to modernization. By gradually replacing your monolith piece by piece, you can adopt modern PHP frameworks running on PHP 8.2 or later, improve security and performance, and deliver continuous value to your business.
Where should you start? We recommend:
- Audit your application this week to find 3-5 candidate services—look for those with clear boundaries, good test coverage, and stable external APIs.
- Pick the smallest, least critical candidate for your first extraction. This is your learning project—choose something where failure wouldn’t be catastrophic.
- Build it in a modern framework you want to adopt—Laravel 11 if you’re currently on Laravel 5.x, Symfony 7 if you’re on Symfony 3-4. Get comfortable with the new framework’s conventions and tooling.
- Set up your proxy in a staging environment first—get the NGINX configuration working, verify routing, practice canary deployments.
- Execute your first strangle during a low-traffic period, with monitoring, alerting, and a clear rollback plan. Document everything.
- Retrospect and iterate. What went smoothly? What surprised you? What operational gaps did you discover? Address those before the next extraction.
Remember: the goal isn’t to become a microservices purist. It’s to upgrade your application safely and sustainably. Some teams find that after extracting 3-5 services, they continue with the monolith’s remaining core for years—and that’s perfectly fine. Others continue until the monolith disappears entirely. Both outcomes are valid.
If you’d like help applying this pattern to your specific situation—whether you’re running Symfony on Ubuntu 18.04 or Laravel on Alpine containers—we at Durable Programming have guided dozens of teams through this journey. Every situation is different, but the principles remain the same.
Start small. Learn as you go. Keep your business running. That’s how you successfully upgrade legacy PHP frameworks—without betting the company on a big bang rewrite.
Sponsored by Durable Programming
Need help with your PHP application? Durable Programming specializes in maintaining, upgrading, and securing PHP applications.
Hire Durable Programming