Database Delivery with Docker and SQL Change Automation
Phil Factor demonstrates how to integrate SQL Change Automation into containerized workflows, such as are typical of a microservices architecture. He shows how to automate database builds into a Linux SQL Server container running on Windows, and then back up the containerized database and restore it into dedicated containerized development copies for each developer and tester.
Docker containers can, like SQL Clone, provide a useful way of propagating databases for development and testing work. The two approaches are very different, though. Containers represent a ‘pull’ service. The users can pick the container running the right version of SQL Server, and load databases into them. Containers each have their own copy of the database, so if ten developers each work on a containerized database, we have ten copies of the data. SQL Clone, by contrast, distributes database copies (clones) from a central console, and each clone shares the actual database ‘image’, so we still only need one full copy of the data.
SQL Clone is more generally useful for maintaining individual development databases and test cells and keeping all copies of the database at the same level, and properly managed. However, Docker’s canteen-style delivery of a SQL Server instance, pre-configured exactly as required, is often convenient for ad-hoc database development tasks where you need to build, test and teardown a database. An obvious use for Docker images of SQL Server is to run up a working database of a previous or experimental version from a backup, quickly, maybe to test it, and then dispose of it. We’ll be doing that in this article.
I’ll then show how to use SQL Change Automation (SCA) to synchronize an empty copy of a development database, in a Docker container, with the latest build in source control, and fill it with data ready for testing. Finally, we’ll do a backup of the containerized database, so that we can restore it into each developer’s local container. These techniques, combined with ‘glue scripts’, can be used for supporting continuous delivery of databases.
As in most of my previous Product Learning articles, I’ll take an automated approach, because it is much faster and eliminates simple errors. It is also a ‘DevOps’ approach that allows faster, more frequent releases and eliminates much of the tedium of database deployment.
Creating a SQL Server container, stocked with any databases you need
In Running Linux SQL Server as a Container, I explained enough of the principles behind the SQL Server Docker image to get you started. We’ll continue this approach, because this is a technology you need to play with and build confidence slowly. This is a conventional Linux container, and the database will work in the same way as other Linux-based relational databases. The Docker for Windows application manages the Virtual Machine in which the containers run. The SQL Server instance communicates with the outside world in at least four possible ways:
- By a port address that you specify (for ODBC, SMO, SSMS and other Windows-based SQL Server tools)
- By running the core Linux SQL Server tools, sqlcmd and bcp (lowercase, please), via Docker Exec
- By opening a bash shell and running the Linux SQL Server tools
- By running mssql-cli, adding it to the SQL Server container to create a new one, and then using Docker Exec into the new container
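By way of illustration, the second route looks like this from the Windows command line. This is just a sketch: the instance name and password are the placeholder values used in this article's listings, and the database and table names are hypothetical.

```shell
# Run a query with the Linux sqlcmd inside the container.
# 'MyInstance' and 'MyPassword' are the placeholder values used in this article.
docker exec -it MyInstance /opt/mssql-tools/bin/sqlcmd \
  -S localhost -U SA -P 'MyPassword' \
  -Q "SELECT @@SERVERNAME"

# The same pattern works for the containerized bcp (note the lowercase tool names).
# The table and output file are hypothetical examples.
docker exec -it MyInstance /opt/mssql-tools/bin/bcp \
  MyDatabase.dbo.MyTable out /var/opt/mssql/data/MyTable.bcp \
  -S localhost -U SA -P 'MyPassword' -n
```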
We’ll just use the second method because it cuts down on dependencies and keeps things neat. I present the script in its stripped-down form to allow you to refine it, check out the parts and walk through it. Once you have it working the way you want, it would be best turned into a function. I have introduced one or two checks that are there to show how they can be done but aren’t entirely necessary. The ContainerID, for example, is fetched from the container because some Docker routines still require it. In the script, I scan the log to spot errors, but some informational messages are worth checking too; it always pays to check the logs.
With Docker containers, tests are rather more complicated because you must use SQL Server security, with ID and password. The SQL Server instance name has a port number applied, to differentiate different containerized instances. If you need anything in the way of features or configuration other than what comes with the standard SQL Server image, you must create a new image from the image of the SQL Server version you choose, but we won’t go into that in this article.
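To see how the port mapping differentiates instances, here is a minimal sketch, using the image and password conventions of Listing 1; the container names and host ports are illustrative.

```shell
# Two containerized instances of the same image, each mapping the container's
# internal port 1433 to a different host port.
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=MyPassword' \
  -p 1436:1433 --name FirstInstance -d mcr.microsoft.com/mssql/server
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=MyPassword' \
  -p 1437:1433 --name SecondInstance -d mcr.microsoft.com/mssql/server

# From SSMS, sqlcmd or ODBC on the Windows host, you then connect with
# SQL Server security, addressing each instance by its mapped port:
#   Server name: localhost,1436  (or localhost,1437)
#   Login:       sa / MyPassword
```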
In the final part of the script, I restore one or more backups to the new containerized instance. If you don’t specify any, then that’s fine; there is less work for the script to do!
```powershell
<#---- these parameters need to be changed for your requirements ----#>
$VerbosePreference = 'Continue' # for debugging!
## Create an instance via docker
$InstanceName = 'MyInstance' #the name of the sql server instance you will create
$password = 'MyPassword' #the SA password which must conform to policy
# The 'sa' password has a minimum complexity requirement (8 characters, uppercase,
# lowercase, alphanumerical and/or non-alphanumerical)
# https://docs.microsoft.com/en-us/sql/relational-databases/security/password-policy?view=sql-server-2017
# otherwise it doesn't install completely.
$Timezone = 'Europe/London' #The Timezone for the database
$PortToUse = '1436' #The port that the container's port is mapped to
$BackupNames = @(
  @{ BackupName = 'WideWorldImporters-Full.bak'; DatabaseName = 'WideWorldImporters' },
  @{ BackupName = 'AdventureWorks2016.bak'; DatabaseName = 'AdventureWorks' }
) #The database backup names
$BackupPath = "C:\Backup\" #The full path to the backup file
$ImageName = 'mcr.microsoft.com/mssql/server'
#$ImageName = 'mcr.microsoft.com/mssql/server:2019-CTP2.2-ubuntu'
<# -------the processing starts here --------#>
$LogErrors = @() # initialize variable for any install logfile errors
<# Create the shared directory for the container if it doesn't exist
   (this is only necessary if you opt for path mapping rather than creating a volume) #>
if (-not (Test-Path -PathType Container "C:\config\$InstanceName"))
{ # we create the directory if it doesn't already exist
  New-Item -ItemType Directory -Force -Path "C:\config\$InstanceName";
}
# now run the docker image as a container
docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=$password" `
       -e "TZ=$Timezone" -p "$($PortToUse):1433" --name $InstanceName `
       -v "$($InstanceName):/var/opt/mssql" `
       --restart unless-stopped `
       -h $InstanceName -d "$ImageName"
If ((docker ps --filter "name=$InstanceName" --format "{{.Names}}") -ne $InstanceName)
{ Throw "Failed to create SQL Server instance $InstanceName" }
<# On a slow or cpu-throttled laptop you may need a brief sleep to ensure
   the container is 'alive' before making any calls to it #>
Start-Sleep -Seconds 2
<# it is still useful to have this container ID as some docker commands require it #>
$ContainerId = (docker ps --filter "name=$InstanceName" --format "{{.ID}}")
<# check the install log for errors. e.g. Was that password OK? #>
$LogErrors += docker logs "$InstanceName" | where { $_ -like '*Error *' }
If ($LogErrors.Count -gt 0)
{ Throw "Errors were returned by SQL Server $($InstanceName): $LogErrors" }
<# Simple check to make sure the container is running #>
$ContainerState = (docker inspect -f "{{.State.Running}}" $ContainerId)
if ($ContainerState -ne 'true') <# returns a string 'true' if it is running #>
{ Write-Error "Docker container $InstanceName 'State.Running' is $ContainerState" }
$BackupNames | <# Now copy the backup files to the container address space #>
  foreach {
    Write-Verbose "copying $($BackupPath)$($_.BackupName) to $($InstanceName)"
    docker cp "$($BackupPath)$($_.BackupName)" "$($InstanceName):/var/opt/mssql/data"
  }
$Iterations = 3
While ($Iterations -gt 0)
{
  $result = docker exec -it "$InstanceName" /opt/mssql-tools/bin/sqlcmd `
       -S localhost -U SA -P "$password" `
       -Q "SELECT @@SERVERNAME"
  if ($result -like '*Error:*') #no exception is triggered
  { # so if you got an error it means that it won't allow logins yet
    $Iterations-- # count down the remaining retries
    if ($Iterations -gt 0)
    {
      Write-Verbose "the Container $InstanceName is still initializing"
      Start-Sleep -Seconds 10
    }
  }
  else { $Iterations = -1 } #flag that it was successful
}
if ($Iterations -eq -1) { Write-Verbose "$InstanceName responds to queries" }
if ($result -ne $InstanceName) #make sure that it reported its instance name
{ throw "container gave '$result' for ServerName query, not '$InstanceName'" }
<# Specify where the database files go. First find out what there are #>
$BackupNames | foreach {
  $BackupName = $_.BackupName
  $DatabaseName = $_.DatabaseName
  Write-Verbose "Restoring $BackupName ..."
  $FileList = docker exec -it "$InstanceName" /opt/mssql-tools/bin/sqlcmd `
       -S 127.0.0.1 -U SA -P "$password" `
       -Q "RESTORE FILELISTONLY FROM DISK='/var/opt/mssql/data/$BackupName'" -s "," -W
  <# Now build the SQL for the restore, specifying each file #>
  $SQLCommand = "RESTORE DATABASE $DatabaseName FROM DISK='/var/opt/mssql/data/$BackupName' WITH RECOVERY"
  #now we must specify where every file should go
  $SQLCommand += $FileList | Select-Object -Skip 2 |
    ConvertFrom-String -Delimiter ',' | Select P1, P2 |
    foreach { ", MOVE '$($_.P1)' TO '/var/opt/mssql/data/$(Split-Path -Leaf $_.P2)'" }
  <# Now we have the SQL for this backup, we can execute it #>
  docker exec -it "$InstanceName" /opt/mssql-tools/bin/sqlcmd `
       -S 127.0.0.1 -U SA -P "$password" `
       -Q $SQLCommand
}
```
Listing 1: Run a SQL Server instance in a container, copy and restore backup files
If successful, you’ll see output like the following, and you’ll have a running, containerized SQL Server instance whose databases you can access through SSMS:
Your container will be running a standard instance of SQL Server, from the chosen image. If you want features that aren’t in this image, you need to create a new image, with the features you require, using docker image build. An example of a docker file to do this is here. We’ll tackle this in more detail in a later article! It is also possible to add features by running a bash session in an existing container.
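To give a flavour of what such a docker file involves, here is a minimal sketch that derives a new image from the standard one and installs an extra feature package. The mssql-server-fts package (full-text search) exists for the Ubuntu-based SQL Server images, but treat the base tag, package and user details here as assumptions to check against the image and version you choose.

```dockerfile
# A sketch only: derive a new image from the standard SQL Server image
# and add the full-text search feature.
FROM mcr.microsoft.com/mssql/server:2019-latest

# Later images run as the 'mssql' user, so switch to root to install packages
USER root
RUN apt-get update && \
    apt-get install -y mssql-server-fts && \
    apt-get clean
USER mssql
```

You would then build it with something like `docker image build -t my-mssql-fts .` and use the resulting tag as the `$ImageName` in Listing 1.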
Building a database into a container from a script directory
Imagine that you have the latest build and you want to test it. You can, of course, use SQL Server 2017 localdb to do this, but then you will be limited in what you can test on it. Instead, we’ll create a Docker container hosting whatever version of SQL Server we want and stock it with the latest database build, from source control.
To get a running container, just use Listing 1 without specifying any backups to restore. If you already have a container with the same name, you’d have to stop and remove the previous reincarnation first. Then, in Listing 2 below, we create an empty database in the container, using SMO, and then use SCA’s Sync-DatabaseSchema cmdlet to synchronize the empty database with the latest development version of the database, from the Git source code directory. If you are scripting, it is best to use SMO to create the database, because this takes care of generating SQL code for the CREATE DATABASE command that is appropriate for the version of SQL Server.
Having built the database, the script will stock the tables with your test datasets, assuming you specify a data directory. There are two quick ways of adding the data: either use native BCP from outside the container, or use the native Linux bcp from within it. The external method isn’t as fast as the ‘internal’ way, but it is good enough for our purposes.
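As a sketch, the two routes look like this for a single table, assuming the port mapping and credentials used elsewhere in this article; the database, table and file names are hypothetical.

```shell
# External: the host's bcp utility, addressing the container by its mapped port.
# Native mode (-n) disables constraints on the table for the import.
bcp MyDatabase.dbo.MyTable in C:/Data/dbo_MyTable.bcp \
  -n -S localhost,1436 -U sa -P MyPassword

# Internal: copy the data file into the container, then run its own Linux bcp,
# which is faster because the data doesn't cross the container boundary row by row.
docker cp C:/Data/dbo_MyTable.bcp MyInstance:/var/opt/mssql/data/
docker exec -it MyInstance /opt/mssql-tools/bin/bcp \
  MyDatabase.dbo.MyTable in /var/opt/mssql/data/dbo_MyTable.bcp \
  -n -S localhost -U sa -P MyPassword
```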
```powershell
<#---- these parameters need to be changed for your requirements ----#>
$ServerInstance = 'Server,1234'; #ensure the port number is right
$DatabaseName = 'MyDatabase'
$Uid = 'sa'
$password = 'MyPassword'
# a path for a scripts folder, created by SQL Compare or from SQL Source Control database repository
$scriptDirectory = 'PathToMyGitDirectory\MyDatabase'
# a path to a data folder, created as a native BCP output of all the tables
$DataDirectory = 'PathToDataDirectory'
# a database connection string
$MyConnection = "Server=$ServerInstance;Database=$DatabaseName;User Id=$Uid;Password=$password;"
<# -------the processing starts here --------#>
$Errors = @()
$MS = 'Microsoft.SQLServer'
$My = "$MS.Management.Smo"
$Mc = "$MS.Management.Common"
<# ------- import any modules you need --------#>
Import-Module SqlChangeAutomation `
   -ErrorAction silentlycontinue -ErrorVariable +Errors
Import-Module sqlserver `
   -DisableNameChecking -ErrorAction silentlycontinue -ErrorVariable +Errors
<# ------- Connect to the container --------#>
$sqlConnection = new-object System.Data.SqlClient.SqlConnection($MyConnection)
$conn = new-object ("$Mc.ServerConnection") ($sqlConnection)
$InstanceObject = new-object ("$My.Server") $conn
if ($InstanceObject.Version -eq $null)
{ $Errors += "Could not connect to the server $ServerInstance with the credentials" }
else
{
  <# -------Kill any existing version of the database --------#>
  If (@($InstanceObject.Databases | % { $_.Name }) -contains $DatabaseName)
  { $InstanceObject.KillDatabase($DatabaseName) }
  <# -------Create a new version of the database --------#>
  $DatabaseObject = New-Object ("$My.Database") ($InstanceObject, $DatabaseName)
  $DatabaseObject.Create()
  <# -------Add any database configuration stuff here --------#>
  if ($DatabaseObject.Name -ne $DatabaseName)
  { $Errors += "Can't create the database '$DatabaseName' in '$ServerInstance'" };
}
if ($Errors.Count -eq 0)
{
  <# -------Synchronize the empty database as the target --------#>
  $syncResult = Sync-DatabaseSchema `
     -Source $scriptDirectory -Target $MyConnection `
     -AbortOnWarningLevel None -SQLCompareOptions 'NoTransactions' `
     -ErrorAction silentlycontinue -ErrorVariable +Errors
}
<# -------Fill the new database with data if required --------#>
<# Now we BCP all the table data in. As we are using native mode, the utility
   disables constraints for the table before doing the import #>
If (($DataDirectory -ne $null) -and ($Errors.Count -eq 0))
{
  If ($DataDirectory -ne '')
  {
    $DatabaseObject.Tables | Select Name, Schema | foreach {
      $filename = "$($_.Schema)_$($_.Name)" -replace '[\\\/\:\.]', '-';
      $TheScriptPath = "$($DataDirectory)\$($filename).bcp";
      $WhatHappened = BCP "`"$DatabaseName`".`"$($_.Schema)`".`"$($_.Name)`"" `
         in "`"$TheScriptPath`"" -q -n -N -E "-U$($Uid)" `
         "-P$($Password)" "-S$($ServerInstance)";
      if ($WhatHappened -like '*Error*')
      { throw ("$WhatHappened $DestinationServer $DestinationDatabase $filename") };
      $result = $InstanceObject.ConnectionContext.ExecuteNonQuery(
         "EXEC sp_msforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT all'");
    }
  }
}
<# -------Handle all soft errors here --------#>
if ($Errors.Count -gt 0) { $Errors | foreach { Write-Error "$_" } }
```
Listing 2: Latest database build from source control, in a container
Backing up a containerized database
Building a database and then backing it up allows other team members to pull an image of SQL Server and restore the database into their own container. It also makes plenty of test operations possible, particularly ones that make changes to the databases.
Backing up a database is an interesting activity because it means writing to the file system within the container. If you make a backup inside a container, you will then need to copy it out to the host, whether you perform the backup via docker or via SSMS.
The PowerShell script in Listing 3 copies the resulting file into the host directory you specify.
```powershell
<# -------Backup a named database --------#>
<#---- these parameters need to be changed for your requirements ----#>
$InstanceName = 'MyInstance' #the name of the sql server instance you will create
$password = 'MyPassword' #the SA password which must conform to policy
$DatabaseName = 'MyDatabase'
$Destination = 'c:\Backup'
<# -------the processing starts here --------#>
$SQLCommand = "BACKUP DATABASE $DatabaseName TO DISK = '/var/opt/mssql/data/$DatabaseName.bak' WITH FORMAT;"
docker exec -it "$InstanceName" /opt/mssql-tools/bin/sqlcmd `
   -S 127.0.0.1 -U SA -P "$password" `
   -Q $SQLCommand
<# -------Copy from the container to the host file system --------#>
docker cp "$($InstanceName):/var/opt/mssql/data/$DatabaseName.bak" "$Destination"
```
Listing 3: Backup a database in a container
Conclusions
In this article, I’ve shown the basics of how one can use containers to automate a few common database development tasks. I haven’t given you an entire worked script, as these tend to be rather intimidating, but it should be very easy to combine these components to create a scripted build and test process. I would keep a shared backup directory that I’d fill with the latest version of the databases required for the application and build all the test or development containers on the destination machines, taking the backups from the shared directory.
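As a sketch of that sort of glue, the loop below restores every backup in a shared directory into a running container, each database taking its name from its backup file. The share path, instance name and password are assumptions; and because these backups were themselves taken inside a container (as in Listing 3), the file paths inside them need no MOVE clauses.

```shell
# Restore every backup from a shared directory into a running container.
SHARE=/mnt/backups        # the shared backup directory (an assumption)
INSTANCE=MyInstance       # the container created by Listing 1
PASSWORD=MyPassword

for bak in "$SHARE"/*.bak; do
  db=$(basename "$bak" .bak)   # database name taken from the file name
  docker cp "$bak" "$INSTANCE:/var/opt/mssql/data/"
  docker exec -i "$INSTANCE" /opt/mssql-tools/bin/sqlcmd \
    -S localhost -U SA -P "$PASSWORD" \
    -Q "RESTORE DATABASE [$db] FROM DISK='/var/opt/mssql/data/$db.bak' WITH RECOVERY"
done
```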
Docker containers are very useful, but they are most useful for simple database applications. I don’t mind having to use SQL Server authentication, and I can live with the complications of a foreign filesystem in a VM and having to specify a container instance by its port address. It takes more head-scratching, though, to obtain an image with any features out of the ordinary, and I haven’t dared go into the complexity of creating a custom image in this article.
The most pressing concern, though, is that you are duplicating the data rather than sharing it. SQL Clone (the virtual cloning tool in SQL Provision), by contrast, is a database-level ‘push’ service, delivering database copies to every user’s machine, and is a much more effective solution than containers if you need to distribute, and then keep refreshed, multiple copies of a large source database.
A more realistic use for containers during test and development, when working with larger databases, is to use them to distribute each new database build, stocked with modest volumes of synthetic test data. Containers certainly take more server resources than does SQL Clone, because containers duplicate an entire server rather than reference a shared VHD, as with SQL Clone. However, I reckon that both technologies have their place in speeding the delivery process.