DevOps 101: Unlocking the value of frequent deployments

Database deployments

In this DevOps 101 series, I introduce the concept of DevOps and talk about how you can include the database as a natural partner. In the previous post in the series, I discussed how automation enables faster and more frequent deployments as a key benefit.

We’re now going to take a deep dive into the value you can unlock through frequent deployments with database DevOps, along with how to get started.

The benefits of frequent database deployments

Improved code quality

By deploying more frequently you’re going to see a higher quality of code. I can say that without equivocation because, in order to get code out the door faster and successfully, you need better code. You can’t do one without the other. You invest time implementing the mechanisms that validate that code, and putting the processes in place that ensure the quality of the code going out the door, precisely so that you can deploy more frequently. It’s almost a chicken-and-egg situation: frequent deployments demand better code, and the work you do to get better code is what makes frequent deployments possible.

Better error detection

Frequent deployments mean that you’re going to see better error detection. You’re deploying in smaller chunks, so it’s easier to validate the code through testing. By deploying smaller scripts and sets of changes, you’ve got better code going out the door, and better error trapping. The beauty of this is that it feeds itself; if you’re deploying quicker, you’re deploying with better error detection.

Faster deployments

By faster deployments, I literally mean physically faster deployments. In order to release faster and more frequently, you’re going to be deploying smaller sets of changes. You may still have a change going out to one table that’s ten terabytes in size, and that one change is going to be somewhat slow, but because you’re not bundling 1,000 different kinds of changes to large sets of data into a single release, your deployments are inherently faster. This won’t necessarily increase your frequency of delivery on its own, but the physical delivery time is going to be shorter.

Quicker implementation of change

Delivering value to your end users is going to happen faster because you can implement change at the same speed as the business. Let’s face it, the business can change its mind a lot faster than you can change technology; that’s just the nature of the beast. So you need mechanisms in place that allow you to move as fast as possible to keep up with business needs, and that’s the whole idea of frequent deployments. Deploying quickly ensures that you’re able to change quickly.

More protection for production

Everything we’ve just covered adds up to more protection for your production environment. A deployment is less likely to break things or lose data because you’re running smaller deployments that are easier to manage, with higher code quality and more error trapping. Yes, you’re moving faster, and should focus on moving faster, but by doing this you’re also adding protection to your production environment. A lot of people balk at this concept. They think you can’t deploy this quickly because it’s dangerous, but by making it quicker, you make it safer. That’s the goal here: get those protections in place so that we can move quickly and safely.

How do you get started with database deployments?

There are four key stages to improving your development practices and we’re going to look at each of these in detail.

1. Source control

Our next DevOps 101 session on September 14 is going to focus solely on the benefits of source control, so I’m not going to dig too deeply into it here. However, as it forms the basis for the rest of our changes, I do need to cover the basics.

First up, source control creates a single source of truth. Now I know some data people are going to disagree and say that production is the single source of truth, but the problem you have with that is code. Code is in flight, and you need a way to measure and deal with the in-flight nature of code. That mechanism is source control. It gives you a whole bunch of additional functionality, including labeling, checkpoints and branches, that lets you better define what truth is. It allows you to see what you’ve got in terms of code in flight, versus code in production, versus code that’s in QA.

That single source of truth is source control. It takes a bit of work, but it’s doable and it has some additional benefits. Once you have code in source control, you get auditing visibility of things like who made which change and when. You also get simple change tracking, so you can see changes as they occur, who made them, when they happened and, better still, where they are.

Finally, the big thing for source control is that it’s your automation source, and you’ll always go back to it to pull the correct set of code. So that’s your first step: get your data code into source control to have a single source of truth, and a source for automation.
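To make that concrete, here’s a rough sketch in Python of one way to get schema code into a Git repository so it can act as both your source of truth and your automation source. It assumes PostgreSQL’s pg_dump and the git command line are available; the repository path, file name and connection string are placeholders, and in practice you’d more likely use a dedicated database source control tool than a hand-rolled script.

```python
"""A minimal sketch of capturing a database schema into source control.
Assumes pg_dump and git are installed; paths and the connection string
are placeholders, not a recommendation for a specific layout."""
import subprocess
from pathlib import Path

REPO_DIR = Path("db-source")          # hypothetical local clone of your repo
SCHEMA_FILE = REPO_DIR / "schema.sql"

def export_schema(dsn: str) -> None:
    """Dump schema-only DDL so every table, view and procedure is versioned."""
    ddl = subprocess.run(
        ["pg_dump", "--schema-only", "--no-owner", dsn],
        check=True, capture_output=True, text=True,
    ).stdout
    SCHEMA_FILE.write_text(ddl)

def commit_schema(message: str) -> None:
    """Record who changed what and when - the audit trail described above."""
    subprocess.run(["git", "-C", str(REPO_DIR), "add", SCHEMA_FILE.name], check=True)
    subprocess.run(["git", "-C", str(REPO_DIR), "commit", "-m", message], check=True)

if __name__ == "__main__":
    export_schema("postgresql://localhost/mydb")   # placeholder connection string
    commit_schema("Capture current schema")
```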

2. Continuous integration

Once you make a code change, you can test that code using continuous integration. You don’t just test that code individually as a unit test though; it’s integrated with everything else you’re doing. Continuous integration can be done any number of ways, and there is no one perfect mechanism. You can set it up so that it runs every time you commit a change to source control, but you may want to run it slightly less frequently to account for the fact that you’ve got data.

If you’re testing without data, you can go a lot faster, but your tests are less thorough. You’ve got to strike the right balance. Do you exclude data, include a little data, or maybe a lot of data? You’ll make that determination based on your situation, but in summary: you make changes, they get tested, and it’s fully automated.
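As an illustration of what that automated testing might look like, here’s a minimal sketch that applies a folder of migration scripts to a throwaway SQLite database and runs a basic check, with the amount of seed data as the knob you tune. The migrations folder and the customer table are assumptions made up for the example.

```python
"""A minimal sketch of a continuous integration check for database changes.
Applies every migration script to a throwaway SQLite database and runs an
assertion; the folder and table names are illustrative only."""
import sqlite3
from pathlib import Path

MIGRATIONS_DIR = Path("migrations")   # hypothetical folder of numbered .sql scripts

def build_database(seed_rows: int = 0) -> sqlite3.Connection:
    """Apply all migrations in order, optionally loading a small test data set."""
    conn = sqlite3.connect(":memory:")
    for script in sorted(MIGRATIONS_DIR.glob("*.sql")):
        conn.executescript(script.read_text())
    # No data, a little data, or a lot: the seed size is the trade-off you choose.
    conn.executemany(
        "INSERT INTO customer (name) VALUES (?)",   # assumes a customer table exists
        [(f"test-{i}",) for i in range(seed_rows)],
    )
    return conn

def test_migrations_apply_cleanly():
    conn = build_database(seed_rows=10)
    count = conn.execute("SELECT COUNT(*) FROM customer").fetchone()[0]
    assert count == 10
```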

Continuous integration should be introduced early in the process, as you want to fail fast. The whole idea is that you immediately find out that a piece of code won’t work with others, or that a piece of code causes the application to break. You want to know this the moment you’ve written a piece of code because it makes troubleshooting easier. Testing early means that if it fails, you immediately get notified and, because it’s still in your head, you know what the code is and what changes you made. As a result, you’re going to be more efficient at getting those changes fixed and that’s a big part of it.

Also, you have to remember that continuous integration is an integration point that can cross teams and projects. Let’s say you’ve got three teams working on the same database for different applications or different streams of changes. You can integrate the testing into one location. You’ve got multiple projects making multiple changes, but you’ve got a single integration point where you can run these tests and validations.

At this point I want to advocate strongly that if you’re just getting started, and this is your first foray into the concept of database DevOps, you should stop at continuous integration. If you’re introducing source control and continuous integration, you’ll have successfully added automation. Your dev or database teams can get code into and out of source control correctly, and they’re able to do a continuous integration build. Once you’ve got all that done, you can move on to the next two steps.

3. Iterative development

Iterative development means doing lots of small changes. It’s not something that is necessarily intuitive, and it takes practice. You may be used to sitting down and starting work on something and, if it takes three weeks, well it takes three weeks. Iterative development looks for a way to break this down. Can you deliver part of it in week one, part in week two? Can you deliver part of it in three days? Getting smaller change sets and building in smaller pieces is a discipline and, like any discipline, it takes time and you have to practice.

It’s often referred to as the Agile approach, but I shied away from saying Agile development because that term can carry a lot of baggage. I’m a fan of Agile development, but I don’t like the idea that the Principles of the Agile Manifesto must be followed to the letter, as if violating them makes you a bad human being.

What you need is iterative development along the Agile lines to make small sets of changes frequently. You also have to think ahead to ensure that you know the changes you’re introducing are going to function over time.
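To show what breaking work into smaller pieces can look like for a database, here’s a hypothetical sketch of a single change, renaming a column, split into three small, backward-compatible steps that ship in separate releases instead of one big script. The table and column names are invented for illustration.

```python
"""A hypothetical example of one change delivered as three small releases,
each safe to deploy on its own while older application code still runs."""
RELEASES = {
    "release_1": [
        # Step 1: add the new column; existing readers and writers are untouched.
        "ALTER TABLE customer ADD COLUMN display_name TEXT",
    ],
    "release_2": [
        # Step 2: backfill once the application is writing to both columns.
        "UPDATE customer SET display_name = fullname WHERE display_name IS NULL",
    ],
    "release_3": [
        # Step 3: only after every reader uses the new column, drop the old one.
        "ALTER TABLE customer DROP COLUMN fullname",
    ],
}

def apply(conn, release: str) -> None:
    """Run one small change set; each release is its own deployment."""
    for statement in RELEASES[release]:
        conn.execute(statement)
```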

Best of all, iterative development doesn’t require compromising on safety. In fact, it’s going to enhance it. I know I talk about safety a lot, but if you’ve been a DBA and you’ve been woken up at 3am on a Saturday, after you’ve had a couple of beers, to solve a problem, safety becomes really important. You want to make sure that your production systems are protected, online and available at all times. There’s nothing wrong with talking about safety as part of iterative development, but the big key is that you need to practice it.

4. Continuous delivery

Continuous delivery is, strictly, automation between environments. The basic concept is that it’s your ability to take the stuff that you’ve done in source control, in continuous integration and in iterative development, and continuously deliver those changes across multiple environments. To go from your continuous integration environment into testing, from testing into pre-production or staging, and then from there into your production servers. That’s the process of continuous delivery.
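Here’s a minimal sketch of what that promotion step might look like: the same ordered scripts are applied to each environment in turn, with a tracking table recording what has already run so the deployment is repeatable. The environment list, folder name and tracking table are assumptions for the example, and a commercial or open source migration tool would normally handle this for you.

```python
"""A minimal sketch of promoting database changes across environments.
Each environment gets the same scripts in the same order, and a tracking
table records what has already been applied. Names are placeholders."""
import sqlite3
from pathlib import Path

MIGRATIONS_DIR = Path("migrations")   # hypothetical folder of numbered .sql scripts

def deploy(db_path: str) -> None:
    """Apply any migration scripts this environment has not yet seen."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_migrations (name TEXT PRIMARY KEY)"
    )
    applied = {row[0] for row in conn.execute("SELECT name FROM schema_migrations")}
    for script in sorted(MIGRATIONS_DIR.glob("*.sql")):
        if script.name not in applied:
            conn.executescript(script.read_text())
            conn.execute(
                "INSERT INTO schema_migrations (name) VALUES (?)", (script.name,)
            )
    conn.commit()

if __name__ == "__main__":
    # The same code promotes the same scripts through every environment.
    for environment in ["ci.db", "test.db", "staging.db", "production.db"]:
        deploy(environment)
```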

As you’re doing faster and more frequent deployments, you can do faster testing, and more testing. That testing also gets easier because a deployment now takes only minutes or hours as opposed to days. Think of automation as your friend. You’re going to be able to do faster and more frequent deployments because you’ve automated them through your continuous delivery process.

You can also build audits, checks and validations into your process as a review stage, but you’ll need to consider where this fits so as not to cause a bottleneck. Your process can still be fully automated up to this point, and the review process itself can be automated in terms of what you are reviewing. There’s nothing wrong with doing these validations and checks. However, if you say every time there’s a deployment, you need three days to look at the code, now you’re a bottleneck.
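Those checks can themselves be automated. As one hedged example, here’s a small sketch of a review gate that scans pending migration scripts for risky statements and pauses the pipeline for a human decision only when it finds one, rather than putting a person in the path of every deployment. The patterns and folder name are placeholders, not a complete policy.

```python
"""A minimal sketch of an automated review gate: scan pending migration
scripts for statements that should trigger a human look before the pipeline
continues. Patterns and folder are placeholders, not an exhaustive policy."""
import re
import sys
from pathlib import Path

RISKY_PATTERNS = [
    r"\bDROP\s+TABLE\b",
    r"\bTRUNCATE\b",
    r"\bDELETE\s+FROM\b(?!.*\bWHERE\b)",   # DELETE without a WHERE on the same line
]

def review(migrations_dir: str = "migrations") -> int:
    """Return the number of scripts that need a manual review."""
    failures = 0
    for script in sorted(Path(migrations_dir).glob("*.sql")):
        text = script.read_text().upper()
        for pattern in RISKY_PATTERNS:
            if re.search(pattern, text):
                print(f"Needs review: {script.name} matches {pattern}")
                failures += 1
    return failures

if __name__ == "__main__":
    # A non-zero exit code pauses the pipeline for a human decision.
    sys.exit(1 if review() else 0)
```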

The whole idea is that by introducing automation up to and into your production environments, you enhance the protection of those environments. You’re ensuring that scripts don’t run for the first time in production; they’re tested elsewhere first. That’s an enhanced set of production protections, which is good news for your production systems.

Summary

You can’t move quickly and have bad code. If you’ve got bad code it will fail, and then you’re not moving quickly because you’re constantly fixing things. To resolve that, we’ve talked about the patterns that lead to more frequent deployments, starting with source control and continuous integration, then moving into iterative development and continuous delivery.

The benefits of frequent database deployments are very clear. If you’re moving quicker, you’re going to deliver more functionality, which the business is going to see as the primary benefit. But for me it’s safety, and the fact is, you can do both. You can move quicker in a safer fashion.

If you haven’t read them yet, you can catch up on the first five posts in this DevOps 101 series: