13 May 2019

Database Delivery with Docker and SQL Change Automation

Phil Factor demonstrates the basics of how to automate database builds into a Linux SQL Server container running on Windows, and then backup the containerized database and restore it into dedicated containerized development copies for each developer and tester.

Guest post

This is a guest post from Phil Factor. Phil Factor (real name withheld to protect the guilty), aka Database Mole, has 30 years of experience with database-intensive applications.

Despite having once been shouted at by a furious Bill Gates at an exhibition in the early 1980s, he has remained resolutely anonymous throughout his career.

He is a regular contributor to Simple Talk and SQLServerCentral.

An obvious use for Docker images of SQL Server is to run up a working database from a backup, quickly, maybe to test it, or possibly to mask the data. We’ll start by doing that in this article. We’ll then use SQL Change Automation (SCA) to synchronize an empty copy of a development database in a Docker container with the latest build in source control, and fill it with data ready for testing. Finally, we’ll take a backup of the containerized database, so that we can restore it into each developer’s local container. These techniques, combined with ‘glue scripts’, can be used to support continuous delivery of databases.

As in most of my previous Product Learning articles, I’ll take an automated approach, because it is much faster and eliminates simple errors. It is also a ‘DevOps’ approach that allows faster, more frequent releases and eliminates much of the tedium of database deployment.

Docker containers offer a very different way of propagating databases from that of SQL Clone. The latter is a database-level ‘push’ service, delivering database copies to every user’s machine, and is ideal if you need to distribute multiple copies of a large source database. The former is a server-level ‘pull’ service: the user initiates pulling a version of SQL Server and then loads a version of a database into it.

With Docker containers, tests are rather more complicated because you must use SQL Server security, with a login ID and password. The SQL Server instance name has a port number appended, to differentiate between containerized instances. If you need anything in the way of features or configuration beyond what comes with the standard SQL Server image, you must create a new image from the image of the SQL Server version you choose, but we won’t go into that in this article.
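For example, connecting from Windows-based tools is just a matter of specifying the server as host,port along with SQL Server credentials. A minimal sketch (the port mapping and password here are illustrative assumptions, not values from any particular setup):

```powershell
# Connect to a containerized instance from the Windows host.
# 'localhost,14330' assumes the container's port 1433 was mapped
# to 14330 with 'docker run -p'; the password is also an assumption.
sqlcmd -S 'localhost,14330' -U sa -P 'S3cur3Pa$$w0rd' `
       -Q "SELECT @@VERSION"
```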

Creating a SQL Server container, stocked with any databases you need

In Running Linux SQL Server as a Container, I explained enough of the principles behind the SQL Server Docker image to get you started. We’ll continue this approach, because this is a technology you need to play with and build confidence slowly. This is a conventional Linux container, and the database will work in the same way as other Linux-based relational databases. The Docker for Windows application manages the Virtual Machine in which the containers run. The SQL Server instance communicates with the outside world in at least four possible ways:

  1. By a port address that you specify (for ODBC, SMO, SSMS and other Windows-based SQL Server tools)
  2. By running the core Linux SQL Server tools of sqlcmd and bcp (lowercase, please) via Docker Exec.
  3. By opening a bash shell and running the Linux SQL Server tools
  4. By running mssql-cli, adding it to the SQL Server container to create a new one, and then using Docker Exec into the new container

We’ll just use the second method because it cuts down on dependencies and keeps things neat. We’ll assume that you will want to restore one or more backups into your new instance. If you don’t specify any, then that’s fine; there is less work for the script to do!
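As a sketch of that second method, you run the in-container Linux sqlcmd via Docker Exec (the container name and password below are assumptions):

```powershell
# Run the Linux sqlcmd tool inside the container via Docker Exec.
# 'MySQLServer' and the sa password are assumed names for illustration.
docker exec -i MySQLServer /opt/mssql-tools/bin/sqlcmd `
    -S localhost -U sa -P 'S3cur3Pa$$w0rd' `
    -Q "SELECT name FROM sys.databases"
```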

Here is the script in its stripped-down form, to allow you to refine it, check out the parts and walk through it. Once you have it working the way you want, it would be best turned into a function. I have introduced one or two checks that show how they can be done but aren’t strictly necessary. The ContainerID, for example, is fetched from the container because some Docker routines still require it.

In the script, I scan the output to spot errors, but some of the informational messages are also worth checking; it always pays to read the logs.

Listing 1: Run a SQL Server instance in a container, copy and restore backup files
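A stripped-down sketch of the kind of script involved is below; the image tag, container name, password and file paths are all illustrative assumptions, as are the logical file names in the RESTORE, and error handling is left out:

```powershell
# Pull the image and run a SQL Server container (assumed names throughout)
$Password = 'S3cur3Pa$$w0rd'
docker pull mcr.microsoft.com/mssql/server:2017-latest
docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=$Password" `
    -p 14330:1433 --name MySQLServer `
    -d mcr.microsoft.com/mssql/server:2017-latest

# Fetch the ContainerID; some Docker routines still require it
$ContainerId = docker ps -q --filter "name=MySQLServer"

# Copy a backup file into the container's filesystem ...
docker exec -i MySQLServer mkdir -p /var/opt/mssql/backup
docker cp 'C:\Backups\AdventureWorks.bak' "${ContainerId}:/var/opt/mssql/backup"

# ... and restore it, using the in-container sqlcmd
docker exec -i MySQLServer /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $Password `
    -Q "RESTORE DATABASE AdventureWorks
        FROM DISK = '/var/opt/mssql/backup/AdventureWorks.bak'
        WITH MOVE 'AdventureWorks' TO '/var/opt/mssql/data/AdventureWorks.mdf',
             MOVE 'AdventureWorks_log' TO '/var/opt/mssql/data/AdventureWorks_log.ldf'"
```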

If successful, you’ll see output like the following, and you’ll have a running, containerized SQL Server instance whose databases you can access through SSMS:

Your container will be running a standard instance of SQL Server, from the chosen image. If you want features that aren’t in this image, you would need to create a new image, with the features you need, using the docker image build command. An example of a docker file to do this is here. We’ll tackle this in more detail in a later article! It is also possible to add features by running a bash session on an existing container.
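As a minimal sketch of that approach, a Dockerfile can start FROM the standard image and layer extra packages on top; here it is written out and built from PowerShell (the derived image name and the added package are purely illustrative assumptions):

```powershell
# Write out a minimal Dockerfile that derives from the standard image
# and adds a package (curl is just an example of an extra feature).
@'
FROM mcr.microsoft.com/mssql/server:2017-latest
RUN apt-get update && apt-get install -y curl
'@ | Set-Content -Path 'Dockerfile' -Encoding Ascii

# Build the custom image from the Dockerfile in the current directory
docker image build --tag my-custom-mssql:1.0 .
```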

Building a database into a container from a script directory

Imagine that you have the latest build and you want to test it. You can, of course, use SQL Server 2017 LocalDB to do this, but then you will be limited in what you can test on it. Instead, we’ll create a Docker container hosting whatever version of SQL Server we want and stock it with the latest database build, from source control.

To get a running container, just use Listing 1 without specifying any backups to restore. If you already have a container with the same name, you’d have to stop and remove the previous reincarnation first. Then, in Listing 2 below, we create an empty database in the container, using SMO, and then use SCA’s Sync-DatabaseSchema cmdlet to synchronize the empty database with the latest development version of the database, from the Git source code directory. If you are scripting, it is best to use SMO to create the database, because this takes care of generating SQL code for the CREATE DATABASE command that is appropriate for the version of SQL Server.

Having built the database, the script will stock the tables with your test datasets, assuming you specify a data directory. There are two quick ways of adding the data: using native BCP from outside the container, or alternatively native Linux BCP from within the container. The first method, from outside the container, isn’t as fast as the ‘internal’ way, but it is good enough for our purposes.

Listing 2: Latest database build from source control, in a container
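Stripped down, the core of the listing might look like the following sketch; the server address, database name, directories and credentials are all assumptions, and error handling is omitted:

```powershell
# Load SMO and connect to the containerized instance (assumed port and credentials)
Import-Module SqlServer
$Connection = New-Object Microsoft.SqlServer.Management.Common.ServerConnection(
    'localhost,14330', 'sa', 'S3cur3Pa$$w0rd')
$Server = New-Object Microsoft.SqlServer.Management.Smo.Server($Connection)

# Create the empty database via SMO, which generates version-appropriate DDL
$Database = New-Object Microsoft.SqlServer.Management.Smo.Database($Server, 'MyDatabase')
$Database.Create()

# Synchronize it with the latest build in the Git source directory, using SCA
$TargetConnString = 'Server=localhost,14330;Database=MyDatabase;User Id=sa;Password=S3cur3Pa$$w0rd'
Sync-DatabaseSchema -Source 'C:\Git\MyDatabase' -Target $TargetConnString

# Stock the tables from a data directory, with native-format BCP from outside the container
Get-ChildItem 'C:\Data\MyDatabase\*.bcp' | ForEach-Object {
    bcp "MyDatabase.dbo.$($_.BaseName)" in $_.FullName -n `
        -S 'localhost,14330' -U sa -P 'S3cur3Pa$$w0rd'
}
```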

Backing up a containerized database

Building a database and then backing it up allows other team members to pull a SQL Server image and then restore the database into their own container. It also makes plenty of test operations possible, particularly those that make changes to the databases.

Backing up a database is an interesting activity because it means writing to the file system within the container. If you make a backup from a container, you will then need to copy it out, whether you are performing this backup by using docker or via SSMS.

The PowerShell script in Listing 3 copies the resulting file into the host directory you specify.

Listing 3: Backup a database in a container
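Stripped down, it might look like this sketch (the container name, database name, password and host directory are assumptions):

```powershell
# Back up the database to the container's own filesystem (assumed names throughout)
$Password = 'S3cur3Pa$$w0rd'
docker exec -i MySQLServer mkdir -p /var/opt/mssql/backup
docker exec -i MySQLServer /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $Password `
    -Q "BACKUP DATABASE MyDatabase
        TO DISK = '/var/opt/mssql/backup/MyDatabase.bak'
        WITH INIT, COMPRESSION"

# Copy the backup file out of the container into the host directory you specify
docker cp 'MySQLServer:/var/opt/mssql/backup/MyDatabase.bak' 'C:\Backups\'
```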

Conclusions

In this article, I’ve shown the basics of how one can use containers to automate a few common database development tasks. I haven’t given you an entire worked script, as these tend to be rather intimidating, but it should be very easy to combine these components to create a scripted build and test process. I would keep a shared backup directory that I’d fill with the latest version of the databases required for the application and build all the test or development containers on the destination machines, taking the backups from the shared directory.

Docker containers are very useful, but they are most useful for simple database applications. I don’t mind having to use SQL Server Authentication, and I can live with the complications of a foreign filesystem in a VM and having to specify a container instance by its port address. It takes more head-scratching, though, to obtain an image with any features out of the ordinary, and I haven’t dared go into the complexities of creating a custom image in this article.

Containers certainly take more server resources than SQL Clone does, because containers duplicate an entire server rather than referencing a shared VHD, as SQL Clone does. However, I reckon that both technologies have their place in speeding up the delivery process.

