Why database observability is key to successful cloud data platform adoption

Data is the lifeblood of businesses all over the world, from the smallest startup to the largest enterprise. Making sure that it’s available when you need it, secured for authorized use, and recoverable from faults is vital to operating data platforms, no matter where your business is on its cloud journey. That can only be achieved by putting the right data into the hands of the right people, in a timely way, so they can make the right decisions about how to manage the platform effectively.

With its support for availability, security, and recoverability, Redgate Monitor is a key tool for operational, development, and transformation teams working with data.

Three steps to a cloud data platform

The journey to a cloud-first data platform typically comprises three steps for most businesses – migrate, optimize, and modernize. This approach helps businesses minimize risk and get the most value from their cloud deployments.

Migrate – it’s not just technology

The migration of data and databases from on-premises systems to the cloud is relatively straightforward, especially when doing a ‘lift and shift’ to virtual machines in AWS, Azure, or GCP. However, it’s also very easy to get it wrong because of the subtle differences in the way the cloud works. Performing due diligence through discovery and analysis of the existing environment to understand workload requirements, seasonality, and dependencies allows technical teams to select the right migration pathway and remove blockers.

A very common challenge with ‘lift and shift’ is the cost implication for organizations as they move their on-premises configuration to the cloud. While infrastructure as a service (IaaS) is largely the same operational model as on-premises, the cost model is often radically different. Whereas on-premises we would size infrastructure for a three-to-five-year lifecycle and build in capacity to grow, in the cloud we pay for every resource we allocate. It’s important to adjust the organization’s mindset to account for the flexibility and on-demand nature of the resources the cloud provides. Being able to identify consistent resource use, and when and where usage spikes, becomes very important when sizing the cloud infrastructure that supports migrated workloads.
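
As a rough illustration of that sizing exercise, the sketch below takes a set of hourly CPU utilization samples and derives a steady-state and a peak percentile to size against, rather than carrying over whatever capacity happened to be provisioned on-premises. The figures, the percentile choices, and the headroom factor are all assumptions for illustration, not output from Redgate Monitor or a recommendation from any cloud provider.

```python
from statistics import quantiles

# Hourly CPU utilization (%) over a representative period that includes the busiest day.
# These values are invented for illustration.
cpu_percent = [22, 25, 31, 28, 64, 71, 38, 27, 24, 85, 90, 33, 29, 26, 41, 58]

# quantiles(..., n=100) returns 99 cut points: index 49 is roughly p50, index 94 roughly p95.
cuts = quantiles(cpu_percent, n=100)
p50, p95 = cuts[49], cuts[94]

# Size for the peaks the workload actually has to absorb, plus modest headroom,
# rather than the three-to-five-year growth buffer an on-premises refresh would build in.
headroom = 1.2
sizing_target = p95 * headroom

print(f"Steady state (p50): {p50:.0f}% of current capacity")
print(f"Peak (p95):         {p95:.0f}% of current capacity")
print(f"Sizing target:      {sizing_target:.0f}% of current capacity (p95 x {headroom})")
```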

Understanding workload seasonality during migration helps minimize excess resource allocation for infrastructure and, as a result, optimize spend. It also gives organizations the ability to leverage offers from cloud providers (such as compute or instance savings plans for resources they know will be used) and take advantage of deeper discounts for multi-year commitments.
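
To make that concrete, here is a minimal, hypothetical comparison of paying on-demand rates for everything versus committing only the consistently used baseline to a savings plan and leaving the seasonal spike on demand. The hourly rate, discount, and usage figures are assumptions for illustration; actual provider pricing and plan terms will differ.

```python
# All figures below are illustrative assumptions, not published cloud-provider pricing.
on_demand_rate = 1.00       # $ per instance-hour for the chosen size (assumed)
commitment_discount = 0.35  # indicative discount for a multi-year commitment (assumed)
hours_per_month = 730

baseline_instances = 4      # consistently busy all month, known from observed usage
peak_instances = 6          # only needed during seasonal spikes
peak_hours = 120            # hours per month the extra capacity is actually required

spike_hours = (peak_instances - baseline_instances) * peak_hours
all_on_demand = (baseline_instances * hours_per_month + spike_hours) * on_demand_rate
commit_baseline = (baseline_instances * hours_per_month * on_demand_rate * (1 - commitment_discount)
                   + spike_hours * on_demand_rate)

print(f"All on-demand:          ${all_on_demand:,.0f}/month")
print(f"Baseline committed:     ${commit_baseline:,.0f}/month")
print(f"Saving from commitment: ${all_on_demand - commit_baseline:,.0f}/month")
```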

Optimize – price-performance is your KPI

Database workload optimization in the cloud becomes a key part of the way the database team operates. Switching from a reactive to a proactive stance is key to making a successful transition to the cloud, not only technically but also from a cost control perspective.

In the case of a ‘lift and shift’ to IaaS, there are benefits to moving to a platform as a service (PaaS) option. Moving from SQL Server on EC2 virtual machines to RDS for SQL Server, for example, can greatly simplify the work of operational teams without the need to re-engineer databases. This allows the operations team to focus on managing the database rather than the complexities of the infrastructure.

Irrespective of whether you’re running IaaS or PaaS, there needs to be a focus on workload performance efficiency. This isn’t just a case of making user interactions with the platform smoother; it has a tangible impact on operating costs. This FinOps approach, where operational and development teams take increased ownership of cost and cost control, means that running costs are optimized as part of daily activities.

Taking this a step further and shifting database observability left into the development cycle also makes it possible to set cost as a release metric for development work. Gauging whether new or revised code increases resource usage allows teams to identify and prevent unexpected cost increases before they reach production. Being able to spot an 8% increase in resource consumption, and extrapolate the impact it will have on cost and resource usage in production, is vital for avoiding an unexpected bill at the end of the month.
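
As a worked example of that extrapolation, the sketch below projects the monthly impact of a resource increase observed against a release candidate in a pre-production environment. The baseline spend, the 8% increase, and the 5% release gate are assumed figures, not values prescribed by any particular tool or process.

```python
# Illustrative cost gate for a release candidate; all figures are assumptions.
baseline_monthly_cost = 12_000  # current production compute spend in $ (assumed)
observed_increase = 0.08        # e.g. 8% more resource consumption for the new code
cost_gate = 0.05                # increases above this threshold trigger a review (assumed policy)

projected_cost = baseline_monthly_cost * (1 + observed_increase)
projected_delta = projected_cost - baseline_monthly_cost

print(f"Projected production spend: ${projected_cost:,.0f}/month (+${projected_delta:,.0f})")
if observed_increase > cost_gate:
    print("Release gate: investigate the resource regression before promoting to production")
```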

Modernize – agility, flexibility, responsiveness

The final stage in the journey to cloud-native is the modernization of the data platform. In the context of database systems, this means adopting vendor-managed database solutions such as Azure SQL Database, Amazon Aurora for PostgreSQL, and Google Cloud SQL.

Many organizations are looking at ways to optimize their budgets, and an increasing number are looking at migrating from a proprietary platform such as Oracle or SQL Server to PostgreSQL. Having a single database observability solution in place helps the technical teams performing this modernization deliver a successful outcome more quickly, by giving them the ability to see side-by-side resource consumption for the same workload. This also builds confidence with key stakeholders that the new platform will meet their needs, by presenting them with facts and the data supporting them.

Another facet of modernizing to a managed cloud database platform such as Amazon Aurora for PostgreSQL is the ability to leverage built-in scalability features. Modern serverless capabilities in these platforms allow them to flex dynamically to optimize resource use, meaning the platform can scale with the workloads it runs. This isn’t always the case with older proprietary solutions running in a traditional IaaS model, which require far more management, planning, and disruption to flex with the needs of the organization and workload.
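
As a simplified illustration of why capacity that tracks demand can cost less than provisioning for peak, the sketch below compares a day of fixed capacity sized for the busiest hour against capacity that follows an assumed demand curve. The per-unit price and demand profile are invented, and real serverless platforms add details such as minimum capacity floors, scaling increments, and regional pricing that this ignores.

```python
# Illustrative only: the unit price and demand profile below are assumptions.
unit_price = 0.12  # $ per capacity-unit-hour (assumed; varies by provider and region)

# Capacity units demanded per hour over one day: quiet overnight, busy business hours.
hourly_demand = [2] * 8 + [16] * 10 + [4] * 6

peak = max(hourly_demand)
fixed_cost = peak * len(hourly_demand) * unit_price  # provisioned for peak, 24 hours a day
flexing_cost = sum(hourly_demand) * unit_price       # capacity scales with actual demand

print(f"Provisioned for peak: ${fixed_cost:,.2f}/day")
print(f"Scaling with demand:  ${flexing_cost:,.2f}/day")
```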

Database observability – optimizing cloud operations

Key to operating workloads in the cloud is putting the right information into the hands of the right people, at the right time, to allow them to make the right decisions. Having the right database observability solution in place to facilitate this is vital. A lack of visibility into the data platform can result in performance and availability issues taking longer to recover from, as well as businesses incurring unexpected costs.

Redgate Monitor provides broad coverage of the most frequently used database engines, including SQL Server, Oracle, and Postgres, whether you’re running a hybrid, single, or multi-cloud infrastructure. It gives teams across the business visibility into key metrics, helping them proactively optimize workload performance and control costs before they become a problem. All of this is delivered in a single pane of glass that can be used by operations staff, development teams, the helpdesk, or even product owners within the business.

As well as helping key stakeholders self-serve operational insight, broadening visibility to those outside the traditional IT function helps build trust in the platform, whether that’s empowering business product owners, power users, and advocates, or giving senior leaders on-demand visibility into service status without needing to ask someone for a report. This expansive use of database observability builds a culture of collaborative operational excellence, which fosters trust and improves data platform outcomes.

If you want to see just how Redgate Monitor can help meet your database observability needs in the cloud, you can find more in-depth information at the links below: