Is your DevOps strategy missing a vital link?
DevOps is now the norm for virtually every IT team in every sector. By integrating and automating software development processes, it removes laborious, manual tasks and enables teams to deliver value to end users faster and more efficiently. While often associated with application development, the same is true for database development, with Redgate’s 2024 State of the Database Landscape survey showing that 73% of organizations have already adopted Database DevOps or are planning to do so in the next two years.
In many ways, that’s to be expected. The size and complexity of data is growing, the range of database types now in use is increasing, and Database DevOps provides a common blueprint teams can use to develop, manage and monitor their databases, wherever they are, whether on-premises or in the cloud.
All of which is good news. If you’ve introduced Database DevOps, you’re releasing application and database changes faster and more reliably. You might even be one of the top-performing 18% of organizations in the Database Landscape survey who are far ahead of the competition and can deploy changes to production in less than one business day. If you haven’t yet introduced Database DevOps, or you’re not one of the 18%, it’s a good benchmark to aim for and measure your performance against.
How can you ensure you’re in the 18%?
Including the database in DevOps introduces the same practices and processes seen in application development, where code is version controlled, tested and validated all the way through the pipeline so breaking changes never reach production. It’s a neat, well-proven approach and it works:
Development is streamlined, testing is automated with practices like continuous integration, and the way developers and teams work is standardized. As a result, errors are reduced, the quality of code is increased, and deployments become faster and more reliable.
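To make that concrete, here’s a minimal sketch of what one automated check might look like at the continuous integration stage: a proposed schema change is applied to a throwaway database and verified before it can move any further down the pipeline. The table, column and migration names are illustrative, and SQLite stands in for whichever database platform a team actually uses.

```python
# Minimal sketch: a CI-style test that applies a schema change to a throwaway
# database and verifies it before the change progresses down the pipeline.
# SQLite keeps the example self-contained; in practice the same pattern runs
# against a provisioned copy of SQL Server, PostgreSQL, Oracle or MySQL.
import sqlite3

# Hypothetical migration under review
MIGRATION = "ALTER TABLE customers ADD COLUMN email TEXT;"

def test_add_email_column():
    conn = sqlite3.connect(":memory:")   # throwaway test database
    conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executescript(MIGRATION)        # apply the proposed change
    columns = [row[1] for row in conn.execute("PRAGMA table_info(customers)")]
    assert "email" in columns            # a breaking change fails here, not in production
```

Run with a test runner such as pytest on every commit, a check like this catches a faulty change minutes after it is written rather than days later in Staging.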
What holds many teams back from being one of the 18%, however, is the quality of the data they use in their development and testing environments. This starts right at the beginning of the process at the ‘Provision’ stage when developers are given copies of the production database to test their changes against. It has an impact all the way through the rest of the process and, quite simply, teams can lose the race before it even starts.
The importance of test data
Using an actual copy of the production database shouldn’t be an option here. Instead, the Personally Identifiable Information (PII) and sensitive data in the database needs to be masked, de-identified or replaced with synthetic data. The advantages of having this truly representative copy then flow all the way through the pipeline.
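As a rough illustration of what that de-identification step can look like, the sketch below swaps real names and email addresses for synthetic values generated by the open-source Faker library. The table and column names are assumptions; the point is that the shape and realism of the data survive while the personal details do not.

```python
# Minimal sketch of replacing PII with synthetic data before a database copy is
# handed to developers. Table and column names are hypothetical; Faker
# (pip install faker) generates realistic but fake values so the copy stays
# representative without exposing anyone's personal data.
import sqlite3
from faker import Faker

fake = Faker()

def mask_customers(conn: sqlite3.Connection) -> None:
    for (customer_id,) in conn.execute("SELECT id FROM customers").fetchall():
        conn.execute(
            "UPDATE customers SET name = ?, email = ? WHERE id = ?",
            (fake.name(), fake.email(), customer_id),
        )
    conn.commit()
```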
When tests fail at the continuous integration stage, for example, code can be immediately rewritten to correct any errors far earlier in the process, when it’s cheaper and easier to fix them. With version control in place, everyone in the team has access to the latest fully working version of the code. As code progresses through Testing, QA and Staging environments, teams can be confident they can deploy the changes without problems.
That’s the goal and the practice of what is often referred to as Test Data Management (TDM). With a good TDM process in place, the right test data is provisioned to the right people at the right time without putting personal data at risk. By default, the quality of the code being written goes up, the time it takes to develop code goes down, and changes can be released faster and more reliably. Welcome to the 18%.
The signs you need a TDM approach
The need for a copy of the database in development and testing environments is already well understood by IT teams. The practices for providing one, however, vary widely and are often held back by infrastructure constraints, time pressures and system complexities. There are three signs to look for that indicate you need a TDM approach.
Shared development environments are the first and are often the biggest hurdle developers face when writing quality code. Rather than each developer having their own copy of the database to develop and test their changes against, they all connect to the same copy. As a result, developers can – and often do – overwrite each other’s changes, causing conflicts and further errors. Experimentation is also discouraged because it might introduce breaking changes. Worst of all, it slows down development because only one feature can be worked on at a time.
Shared development environments are often seen in organizations with large databases which are difficult to copy and provision to developers because of space limitations on machines or networks, or where there are many developers in different locations, making the provisioning process complicated and difficult to maintain.
Slow development environments are the second challenge and are common in larger teams. While developers have a dedicated development environment with their own copy of the database, getting access to the latest copy takes time … a lot of time. They’re left having to test their changes against an old copy that no longer represents the production database as it stands, or wait hours or even days for a refreshed copy.
This issue is often caused by the time and space issues DBAs face when creating database copies, keeping track of which developer has which version of the copies, and provisioning copies to many developers at the same time.
Stale development environments are the third and perhaps the most damaging sign. Whether developers are working with a shared or dedicated development environment, the database copy is more than a month old and there is no process in place for refreshing it on a regular basis. Developers can test their changes against it, but they can no longer be confident in the results of those tests.
Here, the problem is often the production database itself, which might be large, complicated and difficult to copy, or the sensitive data inside it, which may take hours or days to discover, mask or de-identify.
The rewards of a TDM approach
A good TDM approach changes the game by providing a structure for the way data is managed and provisioned in test and development environments. At its best, it introduces a streamlined, automated process that enables DBAs to provision test and development environments and lets developers self-serve those environments in seconds.
With established data virtualization technology and containers, for example, the time and space issues that prevent teams from provisioning multiple database copies to many developers disappear. Instead, small, lightweight copies a fraction of the size of the original can be provisioned, drastically reducing space requirements and encouraging rapid testing and development. Developers are free to experiment, knowing they can self-serve a refreshed database copy on demand.
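As a simplified illustration of the self-service idea, the sketch below uses the Docker SDK for Python to spin up a disposable database container on demand. A plain container is not the same as data virtualization, which shares one data image between many lightweight clones, but it shows the pattern: each developer provisions an isolated copy in moments and discards it when finished. The image name and credentials are placeholders.

```python
# Minimal sketch of self-serving a throwaway database environment with the
# Docker SDK for Python (pip install docker). This illustrates the self-service
# pattern only; it does not reproduce data virtualization or cloning.
import docker

def provision_dev_database(developer: str):
    client = docker.from_env()
    return client.containers.run(
        "postgres:16",                                    # placeholder image
        name=f"devdb-{developer}",
        environment={"POSTGRES_PASSWORD": "dev-only-password"},
        ports={"5432/tcp": None},                         # let Docker pick a free host port
        detach=True,
    )

container = provision_dev_database("alice")
print(container.name, container.status)
```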
Similarly, the often difficult task of protecting PII can be replaced with an automatic classification process that finds and categorizes sensitive data and either masks it or substitutes it with realistic synthetic data.
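A very simplified version of that classification step might look like the sketch below, which samples each column and flags values matching common PII patterns so they can be queued for masking. Real classification engines use far richer rules and catalogues; the patterns, thresholds and table handling here are assumptions for illustration only.

```python
# Minimal sketch of automatic data classification: sample each column and flag
# likely PII with simple pattern matching. Patterns, thresholds and the SQLite
# schema introspection are illustrative assumptions, not a real tool's logic.
import re
import sqlite3

PATTERNS = {
    "email": re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def classify_columns(conn: sqlite3.Connection, table: str, sample_size: int = 100) -> dict:
    findings = {}
    columns = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    for col in columns:
        sample = conn.execute(f"SELECT {col} FROM {table} LIMIT {sample_size}").fetchall()
        values = [str(v[0]) for v in sample if v[0] is not None]
        for label, pattern in PATTERNS.items():
            if values and sum(bool(pattern.search(v)) for v in values) / len(values) > 0.5:
                findings[col] = label    # column flagged for masking or substitution
    return findings
```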
As a result, DBAs and data teams are freed from the laborious, manual effort of copying large, unwieldy production databases, identifying difficult-to-find sensitive data, and masking it or replacing it with anonymous data. Rather than hours, it takes minutes, and database copies can be provisioned to multiple developers in seconds. It gives IT teams a route to join the 18%.
As Ryan Burg, DevOps Manager of Surgical Information Systems which introduced Redgate TDM technology to standardize database provisioning to its development and testing teams, comments: “We want to get new features in our customers’ hands. Now we can do twice the amount of testing in the same amount of time, we have better testing, and we’ve lowered that cost. We are also lowering the time to market.”
Discover the advantages of Redgate Test Data Manager
To address the provisioning challenges DBAs and developers face every day across SQL Server, PostgreSQL, Oracle and MySQL databases, Redgate Test Data Manager was built from the ground up to optimize every aspect of TDM. By automating data classification, masking and test data provisioning, it streamlines database development and delivers truly representative yet sanitized database copies to developers the moment they need them. As a result, the quality of code goes up, the frequency of failed deployments goes down, and teams can release features and updates to customers faster.
Find out how Surgical Information Systems (SIS) achieved time savings of 12 hours a day using Redgate’s test data management solution. Read the case study.
Learn why the leading independent analyst and research house, Bloor, thinks enterprise-level organizations are increasingly and acutely aware of the benefits of a TDM solution. Download Bloor’s Test Data Management 2024 Market Update.
Read more about how enabling DevOps Test Data Management can improve your release quality, reduce your risk, and deliver value to customers sooner. Visit the resource page.