What unique technical value does Redgate’s Database DevOps solution provide?

Why a GitHub account or an Azure DevOps subscription isn't enough to implement database DevOps.

A colleague recently relayed a great question from the community: why isn’t a hosted development, collaboration, and automation environment such as Azure DevOps enough by itself to implement Database DevOps?

In short, generalized DevOps development and automation tooling does an excellent job at hosting version control repositories and at enabling automation.

But when it comes to CI/CD for databases, there is a significant need for additional functionality. To apply version control and automation to a database, you need to:

  1. Organize database code in version control
  2. Validate database code based on this organization in a CI process
  3. Deploy changes to database code in a reliable, repeatable manner
  4. Provision databases on demand, giving database environments the same flexibility that application environments already enjoy

These four areas are where Redgate solutions provide unique technical value in the DevOps marketplace.

We aren’t the only ones who’ve recognized these gaps in the marketplace, but we work hard to provide the best solution in these areas. Because our customers and the wider DevOps community are growing and changing rapidly, we must constantly innovate and evolve our solution.

There are many popular hosting options for version control and orchestration. Our customers have told us that they prefer to select the combination of vendor tools which best fits their unique needs. For this reason, Redgate designs our database solutions to plug into popular DevOps development and automation tooling. Additionally, we provide command line functionality to ensure our solution works with the broadest possible mix of vendors.

With that quick summary in place, let’s dig into each of these four big-picture areas of value.

1. Organizing database code in version control builds the foundation for automation

From a very high level, it seems that if you’ve got a Git repository and a database, getting the database code into the repository can’t be terribly complex.

In a way, this is true: many vendors provide an IDE where you can script out the contents of a database in one way or another, and you can commit that script to a version control repository.

However, that script won’t be organized in a way where you can work with it over time.

To work with code in version control, you need a thoughtful and consistent way of organizing the code.

There are multiple ways to implement version control for database code:

  1. “State” approach based on object definitions: This approach stores “CREATE” scripts representing the definition of each object in version control. Deployment scripts are automatically generated by comparing the object definitions in a target database with the object definitions stored in version control.
  2. “Migrations” approach based on alteration scripts: This approach captures incremental change scripts in version control. These scripts might be stored in a series of numerically ordered scripts, for example. In a deployment, these migration scripts are applied in sequence to update a target database.
  3. A “hybrid” or combined approach which leverages both: While the state approach excels in its simplicity and approachability for developers, the migrations approach excels at giving developers a high degree of control over deployment code – which is very desirable for valuable production databases. A combined approach brings the best of both worlds.
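As a toy illustration of the “migrations” approach, the sketch below applies numerically ordered change scripts in sequence and records each one in a history table, so re-running a deployment only applies what is new. The SQLite database, script names, and schema are hypothetical, invented purely for this example; this is not how Redgate’s tooling works internally.

```python
import sqlite3

# Hypothetical migration scripts; in practice these would live as numbered
# files in version control (001_create_customers.sql, 002_add_email.sql, ...).
MIGRATIONS = {
    "001_create_customers": "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);",
    "002_add_email": "ALTER TABLE customers ADD COLUMN email TEXT;",
}

def apply_migrations(conn: sqlite3.Connection) -> list[str]:
    """Apply pending migrations in numeric order, recording each one applied."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_history (version TEXT PRIMARY KEY)")
    already_applied = {row[0] for row in conn.execute("SELECT version FROM schema_history")}
    applied_now = []
    for version in sorted(MIGRATIONS):  # the numeric prefix defines the order
        if version not in already_applied:
            conn.executescript(MIGRATIONS[version])
            conn.execute("INSERT INTO schema_history VALUES (?)", (version,))
            applied_now.append(version)
    conn.commit()
    return applied_now

conn = sqlite3.connect(":memory:")
print(apply_migrations(conn))  # first run applies both scripts
print(apply_migrations(conn))  # second run finds nothing pending: []
```

The history table is what makes the process repeatable: the same pipeline can run against development, test, and production targets, and each target only receives the migrations it has not yet seen.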

Q: Can you solve this problem without vendor tools?

A: It’s absolutely possible to develop your own system for versioning database code. For example, I worked at an organization where we did this very thing in the early 2000s, when vendors didn’t yet offer solutions for this problem. However, we found it time-consuming to train new developers on the process and expensive to build our own tooling to ensure that code was being versioned properly.

2. Validating database code early reduces failure rates later

Versioning database code is valuable for a few reasons – it provides an audit trail of changes and gives teams a reliable way to review and evaluate those changes.

But versioning is particularly valuable because it enables code to be validated regularly. This is critical for shifting quality controls left into a CI/CD process.

General orchestrators and automated deployment solutions alone can’t validate database code. They lack an understanding of the logic or organizing principle of how the code is stored.

Redgate’s solutions have this understanding and functionality built in.

Redgate adds value to the DevOps process by providing methods for developers to validate their code locally at the earliest stages of development. We also provide the ability to automate validation and tests in database builds, which may be automatically triggered in popular branching and merging workflows.

Without the ability to validate code quickly and early in the check-in process, it is incredibly easy for even seasoned developers to commit and promote flawed database code with problematic syntax. This code may cause deployment failures or increase the lead time for code to be deployed to production.
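To make the idea of early validation concrete, here is a minimal sketch of a CI-style check: it attempts to build each versioned script against a throwaway database and collects any that fail to parse. The script names, schema, and deliberate typo are all invented for illustration, and SQLite stands in for whatever platform a real pipeline would target.

```python
import sqlite3

# Hypothetical versioned object scripts; in practice these would be read
# from the repository checked out by the CI job.
OBJECT_SCRIPTS = {
    "customers.sql": "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);",
    "orders.sql": "CREATE TABL orders (id INTEGER PRIMARY KEY);",  # deliberate typo
}

def validate(scripts: dict[str, str]) -> list[str]:
    """Build each script against a throwaway database; return the failures."""
    failures = []
    for name, sql in scripts.items():
        conn = sqlite3.connect(":memory:")  # fresh, disposable database
        try:
            conn.executescript(sql)
        except sqlite3.Error as exc:
            failures.append(f"{name}: {exc}")
        finally:
            conn.close()
    return failures

failures = validate(OBJECT_SCRIPTS)
for failure in failures:
    print("FAIL", failure)
# A real CI step would exit non-zero here, blocking the flawed commit
# from being promoted toward production.
```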

3. Deploying changes reliably and consistently reduces cognitive load

The third technical area where Redgate adds value is in providing convenient ways to deploy code reliably and consistently.

As with code validation, doing this requires an understanding of how the code is versioned. In some methodologies, it also requires an understanding of the way that dependencies work in the specific database platform you are using.

For example, both the “state” and “hybrid” methods of versioning code described above require deployment technology which has the knowledge to:

  • Generate object scripts for a target database on the fly
  • Compare those scripts with the object scripts stored in version control
  • Create a deployment script based upon the difference, and order this deployment script in a way that functions properly given the requirements of the database platform

For example, database platforms enforce rules about how an object such as a table may be changed when other objects depend on it, such as views, indexes, triggers, and stored procedures.
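One way to picture this ordering problem is as a topological sort over object dependencies. The sketch below uses a hypothetical, hand-written dependency map; a real comparison engine would derive these dependencies from the database itself, but the ordering principle is the same: each object is deployed only after everything it depends on.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical dependency map: each object lists the objects it depends on.
DEPENDS_ON = {
    "dbo.Customers": [],
    "dbo.Orders": ["dbo.Customers"],                         # foreign key to Customers
    "dbo.vw_OrderSummary": ["dbo.Orders", "dbo.Customers"],  # view over both tables
    "dbo.usp_GetOrders": ["dbo.vw_OrderSummary"],            # proc selecting from the view
}

def deployment_order(depends_on: dict[str, list[str]]) -> list[str]:
    """Return an order in which every object follows all of its dependencies."""
    return list(TopologicalSorter(depends_on).static_order())

order = deployment_order(DEPENDS_ON)
print(order)  # tables first, then the view, then the procedure
```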

While deployment methodologies are more straightforward with a “migrations” approach based on alteration scripts, Redgate’s solution adds additional value with features to logically separate different types of changes. For instance, the solution first deploys changes to objects such as tables, followed by “programmable objects” like stored procedures, functions, and views, which are likely to be dependent upon changes to tables.

Redgate’s “hybrid” approach uses state-based comparisons to detect and report on drift before deploying to an environment, and also provides full reports on exactly which migrations will be deployed to a target environment, helping teams comply with change management requirements.

Tooling that provides this functionality automatically for a team significantly reduces the complexity and specialist knowledge developers need to work with databases, which increases productivity.

4. Quick provisioning brings databases into modern DevOps patterns

The fourth area where Redgate uniquely adds value is in giving developers the freedom to create database environments on the fly – even if the database in question may take up a few terabytes.

Back in 2011, Troy Hunt (famously the creator of Have I Been Pwned) wrote about “The unnecessary evil of the shared development database.”

Nearly 10 years later, most developers still lack the ability to create a fresh, realistic development database on the fly – both for local validation, and in their automation pipelines.

We at Redgate really want to fix this situation. Developing databases requires data, and the higher the quality of that data in development, the easier it is for developers to write high-performing, secure code.

The ability to quickly provision realistic, lightweight databases in automation pipelines also enables teams to understand how long deployments will take and the impact deployments will have on a production environment.

While general DevOps and automation tooling is fantastic for running pipelines, this tooling lacks the understanding of data – for example, which data should be masked/redacted and how to instantiate large datasets quickly. As a vendor who focuses on databases, this is the fourth area where we offer unique technical value.
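As a rough illustration of the masking half of this problem, the sketch below clones a table into a development copy while replacing a sensitive column with a deterministic, non-reversible token. The schema, column choice, and masking rule are invented for the example; real provisioning tools apply configurable masking policies and instantiate whole databases, not single tables.

```python
import hashlib
import sqlite3

def mask(value: str) -> str:
    """Replace a sensitive value with a deterministic, non-reversible token."""
    return hashlib.sha256(value.encode()).hexdigest()[:12]

def provision_dev_copy(prod: sqlite3.Connection, dev: sqlite3.Connection) -> None:
    """Clone the customers table, masking the email column on the way."""
    dev.execute("CREATE TABLE customers (id INTEGER, name TEXT, email TEXT)")
    for id_, name, email in prod.execute("SELECT id, name, email FROM customers"):
        dev.execute("INSERT INTO customers VALUES (?, ?, ?)", (id_, name, mask(email)))
    dev.commit()

# A stand-in "production" database with one row of sensitive data.
prod = sqlite3.connect(":memory:")
prod.execute("CREATE TABLE customers (id INTEGER, name TEXT, email TEXT)")
prod.execute("INSERT INTO customers VALUES (1, 'Ada', 'ada@example.com')")

dev = sqlite3.connect(":memory:")
provision_dev_copy(prod, dev)
print(dev.execute("SELECT name, email FROM customers").fetchone())
```

Because the token is deterministic, masked values still join and group consistently across tables, which keeps the development data realistic without exposing the original values.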

Further learning

If you’d like to read more on this topic, we’ve got loads of good options for you. Here are a few places to start:
