Don’t just think database DevOps. Think compliant database DevOps.

The 2018 Accelerate State of DevOps Report from DORA specifically calls out database development as a key technical practice which can drive high performance in DevOps. It’s an interesting shift in our understanding of what should be included in DevOps workflows because it moves the database from being a bottleneck in the process to being a participant.

At the same time, however, new data protection laws are coming into play and consumers are more aware than ever before of how their privacy can be compromised. So how can the promise of releasing changes to the database faster and easier be balanced with the need to keep data safe and remain compliant with legislation?

The answer lies in going one step further than database DevOps and thinking about compliant database DevOps. It follows the same broad workflow, but protecting data is baked into the development process from the beginning by combining the agility of DevOps, the desire to include the database in DevOps, and the necessity of securing data throughout development.

There are four key areas where this compliant approach can help organizations deliver value faster while keeping data safe.

Introduce standardized team-based development

The way databases are developed has changed over the last few years. Once the sole responsibility of Database Administrators and database developers working in siloed teams, database code is now increasingly written by application developers alongside the application itself.

Database code, however, is written in a language like T-SQL, whose formatting conventions are not as strict as those of application languages like C#. Consequently, a number of different styles have emerged.

Some developers, for example, prefer plain black type to syntax highlighting in different colors, others dislike indentation, and the argument about whether commas belong at the beginning or the end of a line rumbles on.

All of this can leave the code behind a database muddled and difficult to understand, particularly when different developers have worked on the same code base over time. Where teams of developers update a database repeatedly, they can collaborate much more easily if all of the code in the database is standardized in the same style.
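As a simple illustration, here is the same hypothetical query (against an imagined dbo.Customer table) written in two of the styles that commonly spark debate. Neither is wrong, which is exactly why a team benefits from picking one and formatting everything the same way:

```sql
-- Style A: leading commas, each column on its own line
SELECT
      c.CustomerId
    , c.FirstName
    , c.LastName
FROM dbo.Customer AS c
WHERE c.Country = 'US';

-- Style B: trailing commas, a more compact layout
SELECT c.CustomerId,
       c.FirstName,
       c.LastName
FROM   dbo.Customer AS c
WHERE  c.Country = 'US';
```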

Similarly, like application code, T-SQL has its own share of ‘code smells’ – patterns in code that can hurt performance, as well as those that could introduce breaking changes. Teams should therefore also adopt tools that use a static code analyzer to flag potential errors and pitfalls in code as it is typed. This can be particularly useful when first starting to code in T-SQL, or if there are specific rules that everyone on the team has to follow. It also provides a quality control gate at the point at which code is written, so that any potential issues are minimized before the code is even committed.
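As an illustration of the kind of issue a static analyzer will flag, consider the hypothetical query below: wrapping a column in a function makes the predicate non-SARGable, so an index on OrderDate cannot be used for a seek, and a simple rewrite avoids the problem. (The dbo.Orders table is an assumption for the example.)

```sql
-- Flagged: the function on the column makes the predicate non-SARGable
SELECT OrderId, OrderDate, TotalDue
FROM dbo.Orders
WHERE YEAR(OrderDate) = 2018;

-- Preferred: an open-ended date range keeps the predicate SARGable
SELECT OrderId, OrderDate, TotalDue
FROM dbo.Orders
WHERE OrderDate >= '20180101'
  AND OrderDate <  '20190101';
```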

Probably the most useful and relevant team-based process to adopt is version control, which is standard in application development and involves developers checking their changes into a common repository frequently during the development process. As a direct result, everyone has access to the latest version of the code and it’s always clear what was changed, when it was changed, and who changed it.

This is just as true for the database code, which should also be version controlled, preferably by integrating with and plugging into the same version control system used for applications. By capturing the changes in version control, one source of truth can be maintained, reducing the chance of problems arising later in the development process, and also providing an audit trail of every change that has been made.
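What gets committed is ordinary script. As a minimal sketch, assuming a hypothetical dbo.Customer table, a guarded migration like the one below can live in the same repository as the application code, so the who, what, and when of the schema change is captured alongside it:

```sql
-- Migration: add a LoyaltyPoints column to dbo.Customer (hypothetical table)
-- Guarded so the script is safe to re-run if it has already been applied
IF NOT EXISTS (
    SELECT 1
    FROM sys.columns
    WHERE object_id = OBJECT_ID(N'dbo.Customer')
      AND name = N'LoyaltyPoints'
)
BEGIN
    ALTER TABLE dbo.Customer
        ADD LoyaltyPoints INT NOT NULL
            CONSTRAINT DF_Customer_LoyaltyPoints DEFAULT (0);
END;
```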

Automate deployments

Introducing version control to database development brings many advantages. Ad hoc changes and hot fixes are minimized, for example, reducing the chance the database will drift from its expected state. Every developer also works from the same code base, so there are fewer errors when different development branches are merged, and the audit trail of changes can be useful in demonstrating compliance.

A bigger reward awaits, too, because version control opens the door to the automation that DevOps encourages, while making the development process more secure.

Once version control is in place, continuous integration (CI) can be implemented to trigger an automated build as soon as code changes are checked in, which tests the changes and flags up any errors in the code. If a build fails, it can be fixed efficiently and re-tested, so a stable current build is always available.

A typical CI server in application development executes a series of commands that build the application. These commands clean directories, compile the source code, and execute unit tests. Where applications rely on a database back-end, the CI server can also perform the additional tasks of testing and updating the database.
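As a sketch of what testing the database can look like within that build, here is a unit test written with the open-source tSQLt framework; the dbo.GetOrderTotal function and dbo.OrderLine table are assumptions for the example, and the CI server would simply run every such test against the freshly built database:

```sql
EXEC tSQLt.NewTestClass 'OrdersTests';
GO
CREATE PROCEDURE OrdersTests.[test GetOrderTotal sums the line totals for an order]
AS
BEGIN
    -- Isolate the test from real data by faking the table
    EXEC tSQLt.FakeTable 'dbo.OrderLine';
    INSERT INTO dbo.OrderLine (OrderId, LineTotal)
    VALUES (1, 10.00), (1, 15.00), (2, 99.00);

    DECLARE @expected DECIMAL(10, 2) = 25.00;
    DECLARE @actual   DECIMAL(10, 2) = dbo.GetOrderTotal(1);

    EXEC tSQLt.AssertEquals @Expected = @expected, @Actual = @actual;
END;
GO
```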

This will generate an artifact which includes the deployment script required to update the database schema and any version-controlled static data, as well as additional information to make sure the release is safe. The artifact can then be tested by deploying it against a staging database and reviewing it to confirm it is production-ready.
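The exact shape of that deployment script depends on the tooling, but a hypothetical fragment might look like the snippet below: a transactional schema change, plus a MERGE that brings version-controlled static data in line with the source of truth:

```sql
-- Hypothetical fragment of a generated deployment script
SET XACT_ABORT ON;
BEGIN TRANSACTION;

-- Schema change captured in version control
ALTER TABLE dbo.Customer
    ADD CountryCode CHAR(2) NULL;

-- Synchronize version-controlled static data
MERGE dbo.Country AS target
USING (VALUES ('US', 'United States'), ('GB', 'United Kingdom')) AS source (Code, Name)
    ON target.Code = source.Code
WHEN MATCHED THEN
    UPDATE SET Name = source.Name
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Code, Name) VALUES (source.Code, source.Name);

COMMIT TRANSACTION;
```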

Finally, it can be added to the same release automation tool used for the application, like Octopus Deploy or Azure DevOps Services, which provides another audit trail of the changes made and who approved them. This gives organizations the option to include the release of database changes in the workflow already in place for the application, rather than having to introduce a new, unfamiliar process.

Monitor performance and availability

It is normal practice to monitor databases to keep an eye on factors like memory utilization, I/O bottlenecks and growth trends. The increase in the size and complexity of SQL Server estates has already prompted many organizations to introduce third-party tools to give them a wider and deeper picture. Adding DevOps to the equation makes monitoring even more important.

While the automation which DevOps introduces to many parts of database development minimizes errors and gives much better visibility across the whole process, there is a flipside. Instead of releasing changes to the database once or twice a quarter, changes can now be released at any time.

Even though they will probably be the small, iterative ones which DevOps encourages, there is still a chance they will cause problems when they are deployed, particularly if databases are under heavy load or there are unexpected differences between environments.

Given the importance the database has to many business operations, organizations should be able to spot queries having an unusual impact, deadlocks and blocking processes – and be able to drill down in seconds to the cause.
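A monitoring tool packages that drill-down up, but the underlying information comes from SQL Server's dynamic management views. A query along the lines of the sketch below, for example, lists requests that are currently blocked, which session is blocking them, and the statement being run:

```sql
-- Currently blocked requests, who is blocking them, and what they are running
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time AS wait_time_ms,
       t.text      AS blocked_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```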

New data protection regulations also now require organizations to monitor and manage access to personal data, ensure data is identifiable, and report when any breaches occur. This makes an advanced monitoring solution a necessity in most cases, in order to monitor the availability of servers and databases containing personal data.

Given the added complexity this brings to monitoring, organizations should look for a solution that offers the extra capability but makes taking advantage of it easier: for example, by allowing all SQL Server instances, availability groups, clusters, and virtual machines to be viewed in one central web-based interface, and by providing customizable alerts that can be configured to suit SQL Server estates of any size and complexity.

Protect and preserve data

Including the database in DevOps enables the full advantages of DevOps to be realized without the database acting as a counterweight. The move to compliant database DevOps, however, where data must be protected all the way through the development process, adds another factor.

Redgate’s 2018 State of Database DevOps Survey showed that 67% of developers want a copy of the production database in their development, test, or QA environments to ensure changes will work once deployed. This helps find problems sooner before they get to production, yet those same production databases invariably contain the sensitive data that needs to be protected.

As a result, data masking tools which pseudonymize and anonymize data are now being adopted to provide database copies that are truly representative of the original and retain the referential integrity and distribution characteristics. Indeed, Gartner’s 2018 Market Guide for Data Masking predicts that the percentage of companies using data masking or practices like it will increase from 15% in 2017 to 40% in 2021.
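At its simplest, masking is a set of update rules applied to a restored copy before it leaves the secure environment. The sketch below, against a hypothetical dbo.Customer table, conveys the idea; dedicated masking tools go much further, preserving format, distribution, and referential integrity across related tables:

```sql
-- Replace directly identifying values with deterministic, obviously fake ones
UPDATE dbo.Customer
SET FirstName   = CONCAT('First', CustomerId),
    LastName    = CONCAT('Last', CustomerId),
    Email       = CONCAT('customer', CustomerId, '@example.invalid'),
    PhoneNumber = NULL;
```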

While such masking can help when provisioning copies of production databases for use in development and testing, it can also lead to resource issues. It is not unusual for databases to be 1TB in size or more, and provisioning copies to multiple developers can take up a lot of time as well as space.

This is where the tried and tested virtualization technology built into the Windows operating system comes into play. SQL Clone from Redgate, for example, uses it to create copies, or clones, of databases in seconds which, while only around 40MB in size for a 1TB database, work just like normal databases and can be connected to and edited using any program. It can also be integrated with data masking to provision masked database copies with a process that is simple, fast, repeatable, transparent, and auditable.

Summary

These are interesting – and challenging – times for database development.

DevOps has entered the conversation and provided a route to removing the database as the blocker in delivering value. By encouraging collaboration and integration, and the frequent release of small changes, it turns database deployments from worrying, infrequent problems into a standard part of the development and release workflows.

Data privacy and protection concerns have also interrupted the conversation with the GDPR now being joined by the upcoming Stop Hacks and Improve Electronic Data Security (SHIELD) Act in New York, the Consumer Privacy Act in California, and India’s Personal Data Protection Bill, among many others.

DevOps and data privacy do not need to oppose each other, however. Rather, they can complement one another. The automation and audit trails that DevOps processes introduce to database development can ease compliance with data protection regulations and enable organizations to balance the need to deliver software faster with the requirement to protect and preserve personal data.

To find out more about Redgate’s compliant database DevOps approach, visit the solutions pages.