16 February 2017

DevOps – a DBA’s perspective

Guest post

This is a guest post from DevOpsGroup. DevOpsGroup deliver IT transformation at the speed of disruption, by building DevOps capabilities within our clients, enabling them to meet the relentlessly increasing demand of delivering great digital customer experiences.

As a next-generation digital business with hands-on experience at enterprise scale, our services enable clients to stay ahead of the technology curve, empower transformation leaders and unlock business agility.

Within the DevOps space, we’re widely regarded as thought leaders and have been quoted in research by Gartner, Forrester, and Microsoft. Our broad client portfolio includes work with multi-national companies such as Admiral Insurance, ASOS, BAE Systems, Skyscanner, and Vodafone.

We hold top-tier accreditations with all the major vendors of DevOps automation software, including Atlassian, Ansible, Red Hat, AWS, Octopus, Redgate, AppDynamics, and Microsoft. We’re also skilled in open-source tooling such as Git, Chef, Puppet, Jenkins, ELK, and Docker.

Underpinned by agile and lean methodologies, our digital transformation initiatives can boost customer satisfaction and revenue, as well as enable innovation in order to drive competitive advantage.

Earlier this year, we ran a DevOps 101 webinar in conjunction with Redgate for a predominantly DBA audience. In this blog post our own DBA, Paul Ibison, gives his views on what DevOps means from a DBA perspective. Note that Paul is a SQL Server DBA, so some of the comments are specific to SQL Server, although the overall approach should work for any database platform.

In a DevOps pipeline setup, one common approach is to treat the database as an isolated ‘black box’, so that changes to it are deployed in the same manner as normal packaged code. To achieve this, the process may look something like this:

Step 1: We compare the current state of the database to a baseline state using a tool to create a comparison file.

a. The tool will look at the database schema and create a comparison file (DACPAC, NuGet package, etc). A comparison-based approach may cause issues that result in data loss, such as splitting a single column into two new columns. However, some tools, like those from Redgate, allow you to mitigate this without resorting to manual deployments.

b. The tool may or may not also compare reference data and generate change scripts for it; it all depends on the requirements. Comparing the data held in user tables is sometimes a little tricky because the key values might differ across environments, but it is certainly possible.

c. We create scripts using a unit test framework such as tSQLt to test the code elements of the database (stored procedures, functions, and so on); a minimal example is sketched after these steps.

Step 2: The database comparison files are published with the other elements of the software release. These files should be checked into an artefact repository at this stage to ensure we always have a record of what was changed with that release. Your base schema and other database code should be in source control too.

Step 3: The database package is then deployed automatically and updates the database, ideally using the same or similar deployment methods to the code.
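To make step 1c concrete, here is a minimal tSQLt sketch. The table dbo.Orders and the scalar function dbo.GetOrderTotal are hypothetical objects invented purely for illustration; the tSQLt procedures (NewTestClass, FakeTable, AssertEquals, Run) are the framework's standard ones.

    -- Create a schema (test class) to hold the tests
    EXEC tSQLt.NewTestClass 'DeploymentTests';
    GO

    -- dbo.Orders and dbo.GetOrderTotal are placeholders for the real objects under test
    CREATE PROCEDURE DeploymentTests.[test GetOrderTotal sums all orders for a customer]
    AS
    BEGIN
        -- Swap dbo.Orders for an empty fake so the test controls its own data
        EXEC tSQLt.FakeTable 'dbo.Orders';

        INSERT INTO dbo.Orders (CustomerId, Amount)
        VALUES (42, 100.00), (42, 50.00), (99, 75.00);

        DECLARE @expected money = 150.00;  -- 100.00 + 50.00 for customer 42 only
        DECLARE @actual money = dbo.GetOrderTotal(42);

        EXEC tSQLt.AssertEquals @expected, @actual;
    END;
    GO

    -- Run every test in the class as a step in the pipeline
    EXEC tSQLt.Run 'DeploymentTests';

Tests like this run against the updated database during the deployment, so a failing assertion stops the release before it reaches the next environment.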

In the ideal deployment pipeline scenario, it really is this straightforward with databases.

I don’t want to trivialize the role of the DBA (especially as I’m one), but in this instance the DBA isn’t really involved in the deployment process in a significant way. This of course leaves them free to concentrate on the (very important) core operational tasks – backups, re-indexing, high-availability, security, optimization, troubleshooting, and so on.

But what if the deployment pipeline scenario isn’t ideal? Problems can arise when the database is dependent on other bits of what we can call ‘peripheral architecture’. These extra ‘bits’ could exist in the current database, in another user database, in one of the system databases, or even further afield. To avoid getting overwhelmed by all these variations, I like to separate this peripheral architecture into two classes:

1. Basic configurations
Many of these come from the system databases. From the master database we might include logins, server triggers, linked servers, configurations and encryption (certificates and keys). For the msdb database we’ll have the jobs, operators, alerts, messages, etc. All these are basic configurations in the sense that they are usually a single row in a system table and quite easy to script.
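Most of these basic configurations can be inventoried straight from the system catalog views, which is a useful first step before deciding what to script. A rough sketch:

    -- Logins held in master (SQL logins, Windows logins and Windows groups)
    SELECT name, type_desc FROM sys.server_principals WHERE type IN ('S', 'U', 'G');

    -- Linked servers, server-level triggers and sp_configure settings
    SELECT name, data_source FROM sys.servers WHERE is_linked = 1;
    SELECT name FROM sys.server_triggers;
    SELECT name, value_in_use FROM sys.configurations;

    -- Jobs, operators and alerts held in msdb
    SELECT name, enabled FROM msdb.dbo.sysjobs;
    SELECT name FROM msdb.dbo.sysoperators;
    SELECT name FROM msdb.dbo.sysalerts;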

2. Complex configurations
Here, we consider the setup of replication, log shipping, SSIS packages, availability groups, clustering, mirroring, service broker, database mail, CLR DLLs, change tracking, extended events, and so on. These technologies involve entries in many different system tables across the system databases, and scripting them might involve a fair bit of work.

What the basic and complex configurations have in common is that bringing them into the scope of the deployment pipeline means we no longer treat the system as a simple database-level black box. Instead, the black box becomes the SQL instance itself. That is the outer boundary of our dependencies; remember that the deployable artefacts themselves are far more granular than this.

We could of course push things even further and widen the boundary to include the server (ever heard of the GPO setting Perform volume maintenance tasks?), and then why not the DNS and AD servers too? But in the interests of being practical we have to stop somewhere, and the SQL instance seems like the best point of demarcation.

The other point is that when we add the basic and complex configurations together, we get a list that contains some definite dependencies for our build, while many of the others are only potential dependencies.

For an example of a definite dependency, we can be sure that a build script for a new user requires the basic configuration of a login. This is solved by having a script that checks for the login during the deployment and creates it if it doesn’t exist, as in the sketch below.
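As a sketch of that pattern (the login name and password are placeholders, not part of any real build):

    -- Create the login only if it is missing, so the script is safe to re-run
    IF NOT EXISTS (SELECT 1 FROM sys.server_principals WHERE name = N'app_user')
    BEGIN
        CREATE LOGIN [app_user] WITH PASSWORD = N'ChangeMe!123', CHECK_POLICY = ON;
    END;

    -- The database user that depends on the login follows the same pattern
    IF NOT EXISTS (SELECT 1 FROM sys.database_principals WHERE name = N'app_user')
    BEGIN
        CREATE USER [app_user] FOR LOGIN [app_user];
    END;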

Other configurations (basic or complex) could be relevant to our build, but we don’t initially know for sure. An SSIS package, a SQL Agent job, or a set of replication articles might be relevant to parts of our build, for example, but right now they’re only potential dependencies.

At this stage, let me say that totally ignoring the list of potential dependencies often works. The build runs and deployments happen and users are happy. That is, until it doesn’t work …

Then questions are asked. Why wasn’t the job that failed also being updated and deployed? Why didn’t the SSIS package get updated during the build? Why didn’t you test replication before deploying?

What we need is an automated build process that takes the finger-crossing out of the equation as much as possible. So how do we deal with all these configurations and get a build that always works, or at least gets us nearer to that goal? There are some guidelines we can follow:


Make a list of the configurations that are considered to be in scope
In particular, examine the list of potential dependencies and clarify which are truly relevant. Are there jobs which refer to user tables? Are the SSIS packages importing into our tables? Are the replicated tables the same as our deployment tables? Are we using SMTP? Are these linked servers being used?
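Much of this clarification can be done with a few catalog queries. For example, assuming a table called dbo.Orders is part of the deployment (the name is purely illustrative):

    -- Which SQL Agent job steps mention the table?
    SELECT j.name AS job_name, s.step_name
    FROM msdb.dbo.sysjobs AS j
    JOIN msdb.dbo.sysjobsteps AS s ON s.job_id = j.job_id
    WHERE s.command LIKE N'%Orders%';

    -- Is the table published for replication? (run in the user database)
    SELECT name, is_published, is_merge_published, is_replicated
    FROM sys.tables
    WHERE name = N'Orders';

Similar checks against the SSIS catalog and sys.servers help turn the remaining ‘potential’ dependencies into a definite yes or no.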

Decide to what extent each environment should aim to have the same configurations
If there is replication in Prod then why not have it in Dev also? Ideally, all environments should be identical so we can thoroughly test the build before deploying to Production, but there may be financial constraints which make this impossible. Clustering is an obvious case – often the cluster exists in Preprod and Prod but not Dev, Test, and UAT due to the expense.

Make a recreate/update decision for each configuration
Do we always drop and recreate, or do we create the configuration only if it doesn’t exist and update it if it does? Dropping the peripheral architecture and recreating it each time we deploy is the easiest option: the scripting is simpler and we ensure the configuration is identical.

For a configuration like the database mail setup this works quite nicely, but for some configurations it can be bad practice. We won’t want to lose the job history each time we recreate the jobs, for example. In other cases it is simply impractical: I doubt anyone would agree to reinitializing all those replication subscribers or setting up the log shipping destinations as part of each deployment.
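As a sketch of the two patterns side by side (the operator and job names are placeholders):

    -- Pattern 1: drop and recreate on every deployment (fine for a simple operator)
    IF EXISTS (SELECT 1 FROM msdb.dbo.sysoperators WHERE name = N'DBA Team')
        EXEC msdb.dbo.sp_delete_operator @name = N'DBA Team';
    EXEC msdb.dbo.sp_add_operator @name = N'DBA Team', @email_address = N'dba-team@example.com';

    -- Pattern 2: create only if missing, otherwise update in place,
    -- so the history already recorded in msdb for the job is preserved
    IF NOT EXISTS (SELECT 1 FROM msdb.dbo.sysjobs WHERE name = N'Nightly ETL')
        EXEC msdb.dbo.sp_add_job @job_name = N'Nightly ETL', @enabled = 1;
    ELSE
        EXEC msdb.dbo.sp_update_job @job_name = N'Nightly ETL', @enabled = 1;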

Create deployment scripts for the configurations
These scripts will deploy all the peripheral architecture and will most likely be environment-specific. For example, linked servers in Dev are unlikely to be the same as those in Prod, logins are often set up differently across environments by design for security purposes, the SMTP profiles in Dev shouldn’t send emails to Prod support, and so on. Third-party tools might help us create some of these scripts as a starting point.
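One simple way to keep a single script while varying the values per environment is a SQLCMD variable. In this sketch the linked server is called Reporting and points at a different reporting instance in each environment; the server names and provider are illustrative only:

    -- Run in SQLCMD mode, e.g.
    --   sqlcmd -S dev-sql01  -i linked_servers.sql -v ReportingServer="dev-reporting01"
    --   sqlcmd -S prod-sql01 -i linked_servers.sql -v ReportingServer="prod-reporting01"

    IF EXISTS (SELECT 1 FROM sys.servers WHERE name = N'Reporting' AND is_linked = 1)
        EXEC sp_dropserver @server = N'Reporting', @droplogins = 'droplogins';

    EXEC sp_addlinkedserver
        @server     = N'Reporting',
        @srvproduct = N'',
        @provider   = N'SQLNCLI',          -- or whichever OLE DB provider the instance has
        @datasrc    = N'$(ReportingServer)';

Code that references [Reporting] then behaves the same everywhere, while the underlying target changes with the environment.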

Create test scripts for some of the complex configurations
One of the deployment steps checks that replication still works, that the SSIS packages work, that jobs still run, that emails still get sent, and so on. This is a validation – a functional smoke test / integration test – that nothing is broken.
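A sketch of what such a post-deployment smoke test might query (it assumes SQL Agent jobs exist, database mail is configured, and the database is enabled for publication):

    -- Did the most recent run of each job succeed? (run_status 1 = succeeded)
    SELECT j.name, h.run_status, h.run_date
    FROM msdb.dbo.sysjobs AS j
    JOIN msdb.dbo.sysjobhistory AS h
        ON h.job_id = j.job_id AND h.step_id = 0
    WHERE h.instance_id = (SELECT MAX(instance_id)
                           FROM msdb.dbo.sysjobhistory
                           WHERE job_id = j.job_id AND step_id = 0);

    -- Any database mail items unsent or failed?
    SELECT sent_status, COUNT(*) AS items
    FROM msdb.dbo.sysmail_allitems
    GROUP BY sent_status;

    -- List publications and their status (run in the published database)
    EXEC sp_helppublication;

In practice these checks would be wrapped in assertions (or tSQLt tests) so that a failure fails the deployment rather than just returning rows.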


To summarize, we are saying that some elements of a database can be thought of as code and other elements can be thought of as infrastructure. When we work with these elements, the complexity comes from preserving data and from the need for a more holistic view of the database. While it isn’t simple, there are already well-established ways of managing all this; we just need to put the process in place.

From a wider DevOps perspective, much of what Paul discusses relates to the First Way of DevOps, Systems Thinking: always looking at the holistic ‘Big Picture’. In this case, that means looking beyond simple database-level solutions and providing a holistic solution that covers all areas of the database platform (replication, log shipping, SSIS packages, availability groups, clustering, mirroring, service broker, database mail, CLR DLLs, change tracking, and so on).

To find out more about DevOpsGroup, and how we transform and accelerate the way organizations deliver software, visit us online, or read a selection of DevOps-related stories on our blog.
