Rebuild the Engine Mid-Flight: A Guide to Zero-Downtime WordPress Migrations
SUMMARY: See how we rebuilt a massive legacy site URL-by-URL while it was still live, boosting Core Web Vitals without a single minute of downtime.
There is a classic thought experiment called the "Ship of Theseus": if you replace every single plank of wood in a ship one by one while it is at sea, is it still the same ship?
We recently faced this paradox in a very literal sense. We were tasked with rebuilding a massive, high-traffic WordPress site for Parasoft, who came to us because of our reputation for handling builds of this kind. The goal wasn't a visual rebrand; in fact, the requirement was for the site to look exactly the same to the end user. The objective was purely performance, stability, and paying down years of accumulated technical debt.
The site had been passed between other agencies over the years. It was a "black box" of legacy code, making it difficult to maintain and nearly impossible to optimize for modern Core Web Vitals (CWV). We couldn't just flip a switch and launch a new site; the traffic volume and business requirements demanded zero downtime.
So, we chose an incremental migration strategy. We decided to rebuild the site piece by piece, replacing the infrastructure underneath live traffic without the users ever noticing they had crossed the bridge from the old site to the new one.
Here is how we pulled it off.
The Architecture: "Hybrid" Headless
(Warning: Dev Speak Ahead! If you are here for the strategy, feel free to skip to the deployment section. If you are here for the code, read on.)
When developers talk about "modernizing" WordPress, the conversation often jumps straight to Headless architectures (Next.js, Gatsby, etc.). While powerful, full headless setups often introduce complexity regarding SEO, preview workflows, and initial load times.
For this project, we needed the best of both worlds: the raw speed and SEO stability of server-side PHP, and the rich interactivity and Developer Experience (DX) of React.
We settled on a Hybrid Architecture (sometimes called "Islands Architecture").
1. Above the Fold: Pure PHP
To guarantee excellent Core Web Vitals—primarily First Contentful Paint (FCP) and Largest Contentful Paint (LCP)—we stuck to the basics. The document shell, the global header, and the critical "above the fold" content are rendered using standard server-side WordPress PHP templates.
This ensures that when a crawler or a user hits the page, they get immediate, indexable HTML. There is no "loading spinner" while a JavaScript bundle downloads.
2. Below the Fold: Progressive React Hydration
Once that initial paint is complete, we bring in the modern power. We used React to handle complex components and "below the fold" content. This approach made the frontend significantly easier for our developers to maintain and allowed for snappy, app-like interactions (like filtering and dynamic calculators) without bloating the initial load.
Crucially, we optimized how React loads:
Defer & Delay: The React hydration bundle is loaded with the defer attribute, so it never blocks the main thread during the initial render.
Selective Hydration: We use inline scripts to call a custom hydrateComponent() function. This registers the components, but hydration kicks off only after DOMContentLoaded.
Code Splitting: Each React component lives in its own bundle. We only load the JavaScript required for the components actually present on the current page.
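To make the pattern concrete, here is a minimal sketch of how such a selective-hydration registry could work. The article only names hydrateComponent(); the queue, the stub loader, and all identifiers below are our assumptions, not the production code.

```javascript
// Minimal sketch of a selective-hydration registry (assumed design).
const pending = [];
const hydrated = [];
let domReady = false;

// Stub for the real work: in the browser this would dynamic-import the
// per-component bundle and call ReactDOM.hydrateRoot() on the root element.
function loadAndHydrate(name, rootId) {
  hydrated.push(`${name}@${rootId}`);
}

// Called from tiny inline <script> tags next to each server-rendered root.
// Before DOMContentLoaded it only registers; hydration is deferred.
function hydrateComponent(name, rootId) {
  pending.push({ name, rootId });
  if (domReady) flush();
}

function flush() {
  while (pending.length) {
    const { name, rootId } = pending.shift();
    loadAndHydrate(name, rootId);
  }
}

// In the browser this would be: document.addEventListener('DOMContentLoaded', onDomReady)
function onDomReady() {
  domReady = true;
  flush();
}
```

The key property is that the server-rendered HTML is interactive-on-demand: registration is cheap and inline, while the expensive React work waits until the initial paint is done.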
The result is a site that "feels" like a Single Page Application (SPA) because page-to-page transitions are optimized and snappy—not because we are re-rendering the whole DOM, but because we aren't reloading heavy assets on every click.
The Deployment: A Seamless, Phased Rollout
(Technical Deep Dive: How we ran two sites on one domain)
The biggest constraint of this project was the "live" requirement. We couldn't just build the new site in a staging environment, flip the DNS records on a Friday night, and hope for the best. We needed to launch the site URL by URL, section by section.
To achieve this, we utilized Pantheon’s Advanced Global CDN (AGCDN) to implement domain masking. This allowed us to serve two distinct environments—the legacy site and the new build—under a single, unified domain.
The Routing Strategy
We didn't rely on a simple reverse proxy for the whole site. Instead, we used a granular, tiered approach to routing traffic:
The "Key 20": We started by identifying the 20 most critical pages. We manually configured edge logic rules to route these specific paths to the new Pantheon environment, while everything else "fell through" to the legacy host.
Dictionary Lists: As we scaled up our velocity, manual rules became inefficient. Pantheon opened up access for us to manage a list of paths. This allowed our team to simply add a batch of URLs to the list as soon as they passed QA, instantly routing them to the new site without needing a deployment or support ticket.
Wildcard Rules: The final phase for any post type was the "Wildcard" rule. Once we were confident that, say, all blog posts were successfully migrated, we implemented a path-based wildcard (e.g., /blog/*). This was the only step that required professional services intervention, marking the official retirement of that section on the old site.
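The tiered decision above can be sketched as a simple routing function. The real rules ran in Pantheon's AGCDN edge configuration rather than application code, and the example paths here are hypothetical.

```javascript
// Illustrative sketch of the tiered edge-routing decision (assumed paths).
const KEY_PAGES = new Set(['/', '/products', '/contact']); // the "Key 20"
const DICTIONARY = new Set(['/customers/acme']);           // QA-approved batches
const WILDCARDS = ['/blog/'];                              // fully retired sections

function routeOrigin(path) {
  if (KEY_PAGES.has(path) || DICTIONARY.has(path)) return 'new';
  if (WILDCARDS.some((prefix) => path.startsWith(prefix))) return 'new';
  return 'legacy'; // everything else falls through to the old host
}
```

The order of checks mirrors the rollout phases: explicit pages first, then the growing dictionary list, then wildcards, with the legacy host as the default.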
The Sitemap Challenge
Running two sites on one domain creates a major SEO hazard: sitemaps. If the new site only knew about its own pages, and the old site only knew about the legacy pages, Google would constantly be seeing "incomplete" maps depending on which environment generated the file.
We solved this by dynamically merging them. When a search engine requests sitemap.xml, our new site:
Generates the map for all content it currently owns.
Fetches the auto-generated sitemap from the legacy site.
Merges the two XML streams into a single, cohesive file.
Serves the complete picture to the crawler.
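The merge step can be sketched as follows. This uses simple string handling for illustration; a production version would fetch the legacy sitemap over HTTP and use a real XML parser rather than regexes.

```javascript
// Sketch of merging two sitemap.xml payloads into one (simplified).
function mergeSitemaps(ownXml, legacyXml) {
  const extract = (xml) => xml.match(/<url>[\s\S]*?<\/url>/g) || [];
  const seen = new Set();
  // New-site entries come first, so if a URL appears in both maps
  // the migrated version wins.
  const unique = [...extract(ownXml), ...extract(legacyXml)].filter((entry) => {
    const loc = (entry.match(/<loc>(.*?)<\/loc>/) || [])[1];
    if (seen.has(loc)) return false;
    seen.add(loc);
    return true;
  });
  return (
    '<?xml version="1.0" encoding="UTF-8"?>\n' +
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n' +
    unique.join('\n') +
    '\n</urlset>'
  );
}
```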
This ensured that our SEO standing remained rock-solid throughout the transition, with zero 404s for discovery.
The Unified Experience: Algolia as the Bridge
Visual consistency is one thing, but functional consistency is another. How do you allow a user to search the site when half the content lives in Database A and the other half in Database B?
If we had relied on standard WordPress search, we would have had a "split brain" problem—users would only find results relevant to the specific environment they were currently browsing.
Algolia became our bridge.
We treated Algolia as the external "source of truth" for all content indexation. We implemented a flag system in the backend of the new site to track migration status:
Legacy Content: Indexed from the old site.
Migrated Content: Indexed from the new site.
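A sketch of the record shape behind that flag system: the field names are our assumption, and in production these objects would be pushed to a single Algolia index (e.g., via the official client's saveObjects()).

```javascript
// Assumed shape of a search record carrying the migration-status flag.
function toSearchRecord(post, migrated) {
  return {
    objectID: String(post.id), // stable key: re-indexing after migration
                               // overwrites the legacy record in place
    title: post.title,
    url: post.url,
    source: migrated ? 'new' : 'legacy', // the migration-status flag
  };
}
```

Because both environments write into the same index under the same objectID, flipping the flag when a page migrates is a plain re-index, and the React search components never need to know which database actually serves the page.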
By decoupling search from the WordPress database, the frontend search components (powered by React) didn't care where the data lived. They simply queried the Algolia index, which returned a seamless mix of old and new content. This made the physical location of the data irrelevant to the user experience.
Content Migration: Repeatable & Low-Stress
Migrating data for a massive site often leads to the dreaded "Content Freeze": a period of weeks where the marketing team is told to put pencils down. We couldn't afford that.
We treated the migration not as a one-time event, but as a repeatable pipeline.
The Snapshot: We ran a "bulk" export/import to seed the new environment.
The QA Phase: While our team tested the new templates, the client continued editing on the old site.
The Delta: Right before going live with a section, we re-ran the migration script to catch only the "diffs"—new posts or edits made since the initial snapshot.
Because the process was scriptable and repeatable, the actual "freeze" window for any given section was reduced to hours rather than weeks.
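The "delta" pass boils down to a timestamp comparison. Field names here are assumptions; the real pipeline compared exported WordPress post data against the initial snapshot.

```javascript
// Sketch of the delta selection: only posts created or edited after the
// snapshot need to be re-migrated before a section goes live.
function findDeltas(posts, snapshotTime) {
  return posts.filter((post) => new Date(post.modified) > snapshotTime);
}
```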
The Invisible Redesign
By the end of the project, we had completely replaced the underlying infrastructure of the site. We improved Core Web Vitals across the board, moving strictly into the "Green" zone for LCP and CLS. We dramatically improved the editor experience for the client team. And we did it all without the users ever realizing the site had changed.
Sometimes, the most impressive engineering isn't the flashy new design that everyone talks about—it's the invisible work that makes the experience faster, stronger, and more reliable, all while the ship is still sailing.
©2026 300FeetOut All Rights Reserved