How to minimize downtime in a cloud migration


Pat Wright details how to plan cloud migration with minimal downtime. Learn key cutover strategies, testing methods, and critical questions to avoid data loss and system failure.

Whenever I start a migration, one of the first questions I ask is: “how much downtime can be tolerated during the migration?” The answer is always “none.” That expectation rarely holds up. The applications you are trying to move to the cloud are usually not modern systems, and they are not built to handle distributed processing.

The goal in this step should be to begin discussing what it will take to move the system. Research the methods available for moving the data and the application, such as blue/green deployments, A/B testing, and other ways to shift traffic between your on-premises system and the cloud.
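The core of a phased (canary-style) switch is simply routing a configurable share of traffic to the new environment and ramping it up as confidence grows. Here is a minimal sketch of that idea in Python; all names (`make_router`, the `"cloud"`/`"on-prem"` labels) are hypothetical, and a real setup would do this at the load balancer or DNS layer rather than in application code.

```python
import random

def make_router(cloud_fraction, seed=None):
    """Return a function that picks a target environment per request.

    cloud_fraction: share of traffic (0.0-1.0) sent to the new cloud
    environment; the rest stays on-premises. Raising this value in
    stages is the essence of a phased cutover.
    """
    rng = random.Random(seed)

    def route(request_id):
        # Each request is independently assigned to an environment.
        return "cloud" if rng.random() < cloud_fraction else "on-prem"

    return route

# Start with 10% of traffic in the cloud, then ramp up as testing passes.
route = make_router(cloud_fraction=0.10, seed=42)
targets = [route(i) for i in range(1000)]
print(targets.count("cloud"))  # roughly 100 of the 1000 requests
```

The advantage of this pattern over a hard cutover is that a bad deployment only affects the small slice of traffic you chose to risk, and rolling back is just setting the fraction back to zero.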

Testing is your best way to start deciding what will and won’t work. It will also give you estimates for how long it actually takes to move the data, and how much downtime to expect. Listed below are the key questions I always ask during this process.

The key questions you should always ask before moving data

Firstly, task an individual with testing various scenarios and simply moving the data around. How long does it take to move the data from point A to point B?
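A practical way to answer the point-A-to-point-B question is to time a representative sample copy and extrapolate. This sketch shows the arithmetic only; the function name and figures are illustrative, and a real test should repeat the sample at different times of day, since network throughput varies.

```python
def estimate_transfer_window(sample_bytes, sample_seconds, total_bytes):
    """Extrapolate a full-migration transfer time from a timed sample copy."""
    throughput = sample_bytes / sample_seconds  # bytes per second
    return total_bytes / throughput            # seconds for the full data set

# Example: a 2 GB sample copied in 80 seconds, 500 GB total to move.
seconds = estimate_transfer_window(
    sample_bytes=2 * 1024**3,
    sample_seconds=80,
    total_bytes=500 * 1024**3,
)
print(f"estimated window: {seconds / 3600:.1f} hours")  # ~5.6 hours
```

Even a rough number like this turns the downtime conversation from guesswork into a concrete planning constraint.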

I then advise gathering the leaders/owners responsible for querying the applications’ databases. Start asking the questions listed below to find out what it means to the application when the data is moved. This is a TEAM effort – not just a DB effort. 

The exact questions to ask the team – and why they’re important

“What data is static (and so can be moved just once), or changes infrequently enough that it doesn’t need to move again?”

“If we switch over to the new location, what does rolling back look like? Can it be rolled back?” This is critical for determining whether the migration is a one-way process. A one-way cutover is the most common scenario, and it demands extensive testing.

“Can we take downtime and shut off everything for a clean cutover of the data?”
This is the simplest, cleanest option. You sacrifice speed, but it ensures no data is written to other locations and you don’t end up with bad data.

“Are we losing any functionality in this database migration?” It may be that not every feature you had on-prem is available in the cloud. Make sure to know if something is functionally changing about the system.

“Who validates that the cutover is complete?” This is typically handled by quality assurance (QA) and testing teams, but it’s important to have a list of items to test and provide a ‘done’ stamp.
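One concrete item for that ‘done’ checklist is an automated comparison of source and target data. The sketch below is a simplified illustration, not a production tool: the helpers are hypothetical, and a real migration would stream rows from both databases and compare per-table (many engines also offer built-in checksum features).

```python
import hashlib

def table_checksum(rows):
    """Order-independent checksum of a table's rows.

    Hash each row individually and XOR the digests, so the result does
    not depend on row order -- source and target may return rows in a
    different order.
    """
    acc = 0
    for row in rows:
        digest = hashlib.sha256(repr(row).encode()).digest()
        acc ^= int.from_bytes(digest, "big")
    return acc

def validate_cutover(source_rows, target_rows):
    """Return a small report QA can use to stamp the cutover 'done'."""
    return {
        "row_count_match": len(source_rows) == len(target_rows),
        "checksum_match": table_checksum(source_rows) == table_checksum(target_rows),
    }

source = [(1, "alice"), (2, "bob")]
target = [(2, "bob"), (1, "alice")]   # same data, different order
print(validate_cutover(source, target))  # both checks pass
```

Having the comparison scripted means the QA team can re-run it after every rehearsal, not just on cutover day.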

The downtime and cutover discussion is another way to involve everyone who works on the applications. It’s important to understand the applications and how they are used to make this project successful. I hope this advice helps with that process.


FAQs: How to minimize downtime in a cloud migration

1. Can cloud migrations be done with zero downtime?

In most cases, no. Legacy systems often require at least minimal downtime due to dependencies and architecture limitations.

2. What are the best strategies to reduce downtime during migration?

Common approaches include blue/green deployments, A/B testing, and phased traffic switching between environments.

3. Why is testing important before a cloud migration?

Testing reveals data transfer times and potential failures, and helps estimate downtime and rollback options.

4. What is a cutover in cloud migration?

A cutover is the point when traffic and operations switch from the old system to the new cloud environment.

5. How do you ensure data integrity during migration?

By identifying static vs. dynamic data, controlling writes during migration, and validating results through QA testing.

6. Who should be involved in migration planning?

It should be a team effort involving engineers, database owners, QA teams, and business users to ensure success.


About the author

Pat Wright


Pat Wright is an Advocate with Redgate Software. He has been a database professional for 25 years, specializing in PostgreSQL for the past 10 years, after a long career with SQL Server. He has worked across large-scale SaaS platforms, early-stage startups, and a wide range of consulting engagements over the past decade. Pat currently serves as the Sponsor Coordinator for PGUS and as President of Utah Geek Events, and is a frequent speaker in both the PostgreSQL and SQL Server communities. His sessions draw on deep real-world experience with performance, automation, and operational best practices. Outside of tech, he enjoys photography, classic cars, and cycling.
