Extending DevOps practices to SQL Server databases – DevOps webinar Q&A

Even with the right tooling, it can be tricky figuring out how to get an automated database deployment pipeline up and running. That’s why each month we host an hour-long DevOps webinar with an end-to-end demo of our database automation tools, showing how to check a database change into version control, automate the build and testing of that change, and then deploy the new version of the database into different environments.

It gives attendees a practical insight into the processes involved and provides the opportunity to ask questions as we go along.

Here are the questions we were asked in March’s webinar, along with the answers given by our hosts, Microsoft Data Platform MVP, Steve Jones, and Redgate Pre-Sales Engineer, Arneh Eskandari. If you prefer watching rather than reading, you can also view a recording of the webinar.

How do you keep large (200MB+) configuration data in version control, and not just the schema?

SQL Source Control includes the ability to capture data as part of the development process, so configuration, reference, and lookup data can be part of your deployment. During the webinar we also cover static data, that is, data that’s the same in the Dev, QA, and Production environments.

Will this work with SQL Azure databases?

Yes, although there are a few limitations to be aware of in terms of the object types that Azure SQL databases support. See our documentation for more information.

Does this work for Oracle?


We have a separate deployment suite for Oracle.

You mention TFS – how do you collaborate with VSTS? Its advanced CI and automation features are far better than traditional TFS, so do you have a partnering strategy?

If you have anything earlier than TFS 2015 Update 2, then I’d argue TFS isn’t great. From Update 2 onwards, VSTS and TFS are essentially the same, and VSTS right now is TFS 2017, which I agree is fantastic. We have our DLM tasks in the marketplace, and we integrate with those, or with straight PowerShell that you write as a task inside VSTS. We don’t work with the hosted agents, since those don’t have SQL Server installed, but if you have local agents, then this works fine.
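To make that concrete, here’s a rough sketch of what a straight PowerShell build step on a local agent might look like. The cmdlet names come from the DLM Automation module, but the folder layout, package name, and use of the agent’s environment variables are illustrative assumptions, so check the current documentation for the exact parameters.

```powershell
# Sketch of a PowerShell build step on a local VSTS/TFS agent (not the hosted agents).
# Assumes the DLM Automation PowerShell module is installed on the agent;
# verify cmdlet and parameter names against the DLM Automation documentation.

# Path to the SQL Source Control scripts folder checked out by the build
$scriptsFolder = Join-Path $env:BUILD_SOURCESDIRECTORY "Database\State"

# Check that the scripts folder builds into a valid database
$validatedProject = Invoke-DlmDatabaseSchemaValidation $scriptsFolder

# Package the validated schema as a versioned build artifact
$buildArtifact = New-DlmDatabaseBuildArtifact $validatedProject `
    -PackageId "MyDatabase" -PackageVersion "1.0.$env:BUILD_BUILDID"

# Drop the package where the release tooling (VSTS release or Octopus) can pick it up
Export-DlmDatabaseBuildArtifact $buildArtifact -Path $env:BUILD_ARTIFACTSTAGINGDIRECTORY
```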

If you needed to roll back that change how would that work?


Pre-commit, we have rollbacks in the Source Control product to undo changes. If you have code (procs, views, functions), you can recommit the older version and deploy that. We don’t auto-build rollbacks because the challenge is knowing whether a rollback would fundamentally break the logic. For tables, rollbacks need to be custom; we don’t have a good way to do this automatically, since you might need to save data or do something else first. Also, table structures can be huge, and what works for a 1MB table won’t work for a 100GB table.

I know about code tests, but I’m not familiar with database tests.


The concept is similar, and there are frameworks that handle database-specific items. tSQLt (tsqlt.org) is a good place to start, and there’s some content on SQLServerCentral.
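If you’re using DLM Automation, the tSQLt tests you commit alongside the schema can be run as part of the CI build. Here’s a minimal sketch; the paths are invented, and the cmdlet and parameter names should be double-checked against the DLM Automation documentation.

```powershell
# Sketch: run the tSQLt tests held in the scripts folder as part of the CI build.
# Paths are placeholders; confirm cmdlet names and parameters in the DLM Automation docs.

# Validate the SQL Source Control scripts folder
$validatedProject = Invoke-DlmDatabaseSchemaValidation "C:\Repos\MyDatabase\State"

# Build a temporary copy of the database and run every tSQLt test committed with it
$testResults = Invoke-DlmDatabaseTests $validatedProject

# Write the results to a file the CI server can pick up and report on
Export-DlmDatabaseTestResults $testResults -OutputFile "C:\Builds\MyDatabase.TestResults.xml"
```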

Is it a requirement to have Octopus Deploy to work with the Redgate CI/RM tools?


No. We integrate with any CI or release management tool, such as Hudson, Jenkins, TeamCity, TFS/VSTS, CruiseControl, Bamboo, and so on.

Does SQL Source Control work with all SQL versions and all SQL editions?


SQL Source Control works with SQL Server 2008 onwards. You can find full details on our documentation pages.

We have a web application with a database backend. We have VSTS with Octopus Deploy. Build servers have cumulative code changes for the web application with each build/package that runs in a test environment. However, the database changes are always just the single change in each build/package. For the test environment, this is great because each database change is deployed. However, when we deploy to production, we don’t have a single package with all database changes. How can we deploy a cumulative database upgrade script in a single package using DLM Automation?

All the changes get accumulated here. The CI build is incremental, covering just the new change, but since we maintain the full state of the database, the first step you see in Octopus actually builds the package of changes by comparing the snapshot of the CI output (the cumulative database state) against the target database (Production, QA, etc). In this way, we end up with a valid change script. We also run checks against the target pre-deployment: if you build a package and it sits in review for a day while production changes overnight, the deployment will generate a warning or failure because the target has changed and the script might no longer be valid.
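In DLM Automation terms, that first step boils down to something like the sketch below. The package path, server name, and database name are placeholders, and the exact cmdlet parameters are best confirmed against the documentation.

```powershell
# Sketch of the release step: compare the cumulative CI build artifact against the
# target database and generate a single upgrade script for review and deployment.
# Package path, server name, and database name are placeholders; check the DLM docs.

# The build artifact from CI contains the full desired state of the schema
$buildArtifact = Import-DlmDatabaseBuildArtifact "C:\Packages\MyDatabase.1.0.123.nupkg"

# Connection to the target environment (Production, QA, etc)
$target = New-DlmDatabaseConnection -ServerInstance "prodsql01" -Database "MyDatabase"

# The release object holds the change script, warnings, and a diff report for review
$release = New-DlmDatabaseRelease -Source $buildArtifact -Target $target

# Applying the release re-checks the target, so drift since the release was created
# surfaces as a warning or failure rather than a bad deployment
Use-DlmDatabaseRelease $release -DeployTo $target
```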

Is the manual check a diff view or is it a case of looking through all the scripts?


We have both. The ‘Script’ option at the top shows the full script, while the ‘Changes’ option shows more of a diff view.

How can you get back to a previous version of a database?


We support roll forward. You’d recommit the previous version of changes and roll them through. Keep in mind there’s no magic. How do you roll back now? There is no such thing as a table rollback. We maintain state, so you always roll forward.

What SQL Server permissions are needed to run the deployment on the production/destination server that is being deployed to?

The same permissions you’d need to run manually. These are normal SQL executions, so we only require what’s necessary for the particular DDL/DML changes you’re deploying.

Can we start using these tools with existing databases or do we need to start from scratch?


Either. You can link an existing database to version control just as easily as you can start from scratch.

Can deployment failure messages be pushed to a distribution list, or does the dev have to monitor this?

It depends on your release software. Bamboo, TeamCity, Jenkins, VSTS, and TFS allow notifications in different ways via email, pager, logging, etc, and you can configure either group or individual notifications.

Will this work if we do not currently have a QA box? Being a small shop, we have development and production only.

Yes, it works but we’d recommend at least three environments (Dev, QA, and Prod). You make changes in Dev, but you want to test making those changes. In development, you make one, undo it, make another, etc. We want a place to test the actual movement of the changes. I actually want at least four. Development (for each developer), Integration (all developers combined), QA, and Production. Since these are databases, Dev, Integration, and QA can be on the same SQL instance. Production is separate.

How can this work if you deploy changes out of order? For example, feature A may go from Testing to UAT to Production while feature B may go directly from Testing to Production. And feature C may go from Testing to UAT, but then die and never go to Production.

You can skip environments. The timing of changes and deployments depends on how you’ve handled things. If you want to separate features out, branching can help here. The idea should be that you develop stuff in branches and use pull requests to merge back to the main branch that mimics Production. If you decide a change is wrong, you’d commit the ‘undo’ back in development for C (undo C) and push that forward.

Can this be used to deploy SSIS packages or SSRS RDL files?


Not really. The ideas are the same, but we don’t currently have tooling that works well here.

Is SQLite supported on Android / iOS Platforms?


No, just SQL Server.

Is there any way to review the database upgrade change files before the manual intervention step? DBAs need to see that script in advance of the deployment.

The VCS is really the place for this. We have customers that use branches for developers, and DBAs review the merge to be aware of changes. You can also use testing to catch potentially risky changes and notify the DBA. You could implement a review step anywhere in the CI/CD/RM process. We just show it at staging, but this could be done in any environment; just configure the step in Octopus.

No MySQL or PostgreSQL?


Not currently for PostgreSQL. We do have a MySQL comparison bundle, and you can also submit further product suggestions.

If we have multiple projects that are changing the same database in the same environment, how can we manage this? With branches?

Branching and communication. Ultimately, if you have multiple people changing the same thing, you need to coordinate.

We often restore our production database to lower environments after we deploy changes, basically to reset any garbage data that was in development. Would this cause any problems for those lower environments tracking changes going forward since you’ve basically reset everything?

Actually, I recommend this. You really want a known set of data for developers: they should be able to reset at any time, reapply the changes from the VCS, and then move forward. There are no real issues for the tools here; just follow a flow that makes sense and make sure everyone understands the changes being reapplied.
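As a rough sketch of that reset flow, assuming the SqlServer PowerShell module for the restore and DLM Automation for reapplying the committed schema (server names, paths, and the use of a scripts folder as the source are illustrative assumptions):

```powershell
# Illustrative reset of a development database: restore a recent production backup,
# then reapply whatever is committed in version control on top of it.
# Server names and paths are placeholders; requires the SqlServer and DLM Automation modules.

# Overwrite the dev copy with the latest production backup
Restore-SqlDatabase -ServerInstance "devsql01" -Database "MyDatabase" `
    -BackupFile "\\backups\MyDatabase\prod_latest.bak" -ReplaceDatabase

# Bring the restored copy back in line with the scripts folder committed in the repo
$dev = New-DlmDatabaseConnection -ServerInstance "devsql01" -Database "MyDatabase"
$release = New-DlmDatabaseRelease -Source "C:\Repos\MyDatabase\State" -Target $dev
Use-DlmDatabaseRelease $release -DeployTo $dev
```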

Can you take a particular copy of several databases from one server (i.e. Dev) and deploy all of these to a new Dev or Test server?

We work at the database level. All the work in the DLM Automation tools is performed with PowerShell, so you can string together multiple calls to manage multiple databases. We don’t sync changes across databases, but you can handle that by storing each database in a subfolder of one repo and committing all the changes together, then performing a build and release from the repo to each database separately.
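For example, a simple loop over subfolders might look like this. The folder layout, server name, and database naming convention are assumptions, and the cmdlet details should be checked against the DLM Automation documentation.

```powershell
# Sketch: build and deploy several databases kept as subfolders of one repository.
# Assumes each subfolder is a SQL Source Control scripts folder named after its database;
# the server name is a placeholder, and cmdlet details should be verified in the docs.

$repoRoot = "C:\Repos\OurSystem"
$targetServer = "testsql01"

foreach ($folder in Get-ChildItem $repoRoot -Directory) {
    # Target database on the new Dev/Test server, named after its subfolder
    $target = New-DlmDatabaseConnection -ServerInstance $targetServer -Database $folder.Name

    # Generate and apply an upgrade script from this subfolder's scripts to its target
    $release = New-DlmDatabaseRelease -Source $folder.FullName -Target $target
    Use-DlmDatabaseRelease $release -DeployTo $target
}
```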

Using Octopus Deploy, can DBAs edit the change script during review at the manual intervention step?

Ultimately the idea is that you don’t make changes in the pipeline because that introduces manual error. If you find an issue, say in QA, you should correct this in a development area, ensuring the change is then committed to the VCS and moves through the pipeline.

If there are errors in the pipeline, we don’t want to fix them there, because we might lose the change for the next deployment, or it would need to be made again in a subsequent deployment to another downstream environment. We don’t expect every change to be correct; some will cause errors and need to be abandoned, with corrections and fixes performed in development.

How is this continuous delivery, e.g. not causing application downtime? Where is the part about scaffolding the old schema such that pending transactions can flow across, and when the new application is flipped on, all of the old (and pending) data is ready to go in the new schema?

Continuous delivery doesn’t mean the application won’t have downtime. It means we don’t schedule long periods of downtime for a deployment. In a relational database, there will be times when the schema is unavailable; that’s the nature of making certain changes to a database. There are techniques to minimize downtime in relational work, but there is no way to eliminate the blocking that can occur when certain schema changes are made.

Unlike applications, the database maintains state. Some companies that avoid most downtime, like Amazon, use messaging so that the delays in database changes are masked by queueing in a different system. Some people use copies of tables: they create the new schema alongside the old one, copy the old data across, and then perform the work needed to merge the data together.

There are other techniques, but these are complex changes that the application and database teams have to work out together. For most applications, there isn’t enough load to justify re-architecting the application to avoid the minimal blocking caused by changes such as adding NOT NULL columns or splitting tables.

To join the next in our series of monthly demo DevOps webinars, and have your own questions answered, register online. You can also download the SQL Toolbelt to try out SQL Source Control and DLM Automation for yourself on a free 14-day trial.