Integrating SQL Server Tools into SQL Change Automation Deployments

Phil Factor shows how to integrate use of SQL Change Automation, SSMS registered servers, SMO, and BCP to automatically build or update a database on all servers in a group.

Guest post

This is a guest post from Phil Factor. Phil Factor (real name withheld to protect the guilty), aka Database Mole, has 30 years of experience with database-intensive applications.

Despite having once been shouted at by a furious Bill Gates at an exhibition in the early 1980s, he has remained resolutely anonymous throughout his career.

He is a regular contributor to Simple Talk and SQLServerCentral.

When doing repetitive database work with SQL Change Automation (SCA) or SQL Compare, we often need to use other tools at the same time, such as the registered servers in SQL Server Management Studio (SSMS), SQLCMD and BCP. I also tend to use the SQL Server PowerShell module, sqlserver (formerly known as sqlps). This uses SQL Server Management Objects (SMO), which is Nature’s Way of interacting with SQL Server and uses the same .NET library that underlies SSMS.

If you do so, you’ll want to integrate all these tools as much as possible and, when you’re scripting with PowerShell, to use the same database connections as you are using with SCA. This article is all about how you do that. We’ll show how you can start integrating SCA scripts with SSMS into a single process. We’ll also learn to stop fearing the connection string and view it as an ally. We will use one to create an SMO connection, via a ServerConnection object, and borrow that same connection to run BCP and execute a SQL command.
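
To give a flavour of what that looks like, here is a minimal sketch, assuming only that the sqlserver module is installed and that the connection string is valid for one of your instances, of turning a connection string into an SMO Server object via a ServerConnection:

    # A minimal sketch: create an SMO Server object from a connection string
    Import-Module sqlserver     # loads the SMO assemblies along with the provider
    $connectionString = 'Data Source=MyDevServer;Integrated Security=True'   # placeholder value
    $serverConnection = New-Object Microsoft.SqlServer.Management.Common.ServerConnection
    $serverConnection.ConnectionString = $connectionString
    $smoServer = New-Object Microsoft.SqlServer.Management.Smo.Server($serverConnection)
    $smoServer.Databases | Select-Object Name    # quick check that the connection works

We will use this same pattern later in the deployment script.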

The project

We have carefully saved any work we need to in Source Control, using whatever tool we prefer, and now want to deploy the latest version to all our development servers. This means we need to build a new database at that version, if one doesn’t exist, or else update the existing copy to the new version.

Establishing the database source

In this article, I don’t assume use of the Development component of SCA, where the resulting database source will essentially be a set of migration scripts. I use only SCA Deployment, specifically the PowerShell cmdlets, with the source in this case being a set of object-level build scripts that represent the current state of the database version we want to build.

In SSMS, we create a new Local Server Group called Development and register within it any development servers that need a copy.
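
Once the group exists, the servers registered in it, and their connection strings, are visible to PowerShell through the sqlserver provider. A minimal sketch, assuming local registered servers under the default ‘Database Engine Server Group’ path and a group called Development:

    # List the servers registered in the Development group, with their connection strings
    Import-Module sqlserver
    dir 'SQLSERVER:\SQLRegistration\Database Engine Server Group\Development' |
        Select-Object Name, ServerName, ConnectionString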

We can now run a script that builds our database and then ‘releases’ it to all our development servers. We’ll add data as well because we then need to be able to run unit tests and integration tests. We don’t just take production data because we need to compare performance figures from previous runs and check results against ones that we know are correct. This is just part of our daily integration run to ensure that the dev servers are all up to date with the latest changes.

When all this is set up, we can simply right-click on Development (or Test) and run a PowerShell script in the resulting PowerShell window that either creates or updates a database on each server in that server group.

All the SQL Server instances in the group have now been updated with the version in source control. We have done it without having any IDs or passwords in the script itself, or having to encrypt them in files within the user area. Also, it has taken the list of servers from our Development server group, which we can easily change.

Why do we need to use other tools in a script?

Surely, SCA gives you everything you could possibly need? Well, no; it is not a general interface into SQL Server, such as SSMS or Visual Studio. It merely provides add-ins to these tools, and several others, to assist with database development and deployment.

When I’m working on a database, I create SQL scripts in SSMS, save them to a Scripts directory that represents the build, run my unit tests and then run the script that will then build the database on all the development and test servers. This requires PowerShell or DOS scripting with a toolchain, choosing the best tools for each job. I’m likely to use SMO to create the database if required, SCA to build it, BCP to blast in the data, and so on. Routine tasks always get scripted away to save me time for the interesting aspects of work.

In this example, I want to keep the list of servers and, where necessary, their credentials in SSMS, so that I don’t need to change the build script whenever I change the target instance. The Registered Servers feature is ideal for this, but you do need to close SSMS before any changes become visible to PowerShell/SMO!

The script needs to create the database on the server if the database doesn’t exist already. If I’m using PowerShell, then the sqlserver module does all that. I need to run a whole lot of checks that are outside the remit of SCA. Sometimes, I even need to stock the data in a built database using a command-line tool such as BCP. I don’t want IDs or passwords in scripts as literals, certainly nothing stored anywhere in plain sight.
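
The ‘create it if it isn’t there’ part is only a few lines of SMO. A minimal sketch, assuming $smoServer is an SMO Server object connected as shown earlier and $databaseName holds the name of the database we are deploying:

    # Create the database only if it doesn't already exist on the target instance
    $databaseName = 'pubs'    # example value
    if ($smoServer.Databases[$databaseName] -eq $null)
    {
        $newDatabase = New-Object Microsoft.SqlServer.Management.Smo.Database($smoServer, $databaseName)
        $newDatabase.Create()
        Write-Verbose "Created $databaseName on $($smoServer.Name)"
    }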

The Connection String

Connection strings are extraordinarily clever and contain everything you need to make a connection to a data source. They are in a structured format, so you can create, access and manipulate them as easily as you can a JSON string. They can pass on a lot of parameters that influence the way that a connection is made. A connection string specifies the server and database as well as the authentication methods and credentials. It also allows you to specify a lot more besides, such as the type of security used, the timeout, packet size, encryption, language and so on.

You might shrug and say that connection strings are a thing of the past now that you use Windows Authentication. However, even now the ability to specify the user and password is handy, even if you generally only access SQL Servers within the domain. If you haven’t the problem of mixed methods of authentication, then lucky you. At the time of writing, it is impossible to use Windows Authentication with SQL Server instances hosted in Docker in a Linux VM. It is possible to use Azure Active Directory to connect to Azure but not all of us are using that.

The easiest way to create a connection string is to use an online tool or look one up on a reference site. You can do it in PowerShell too, using the DbConnectionStringBuilder or SqlConnectionStringBuilder classes. The joy of using the .NET classes is that they make it easy to change an existing string. We’ll show an example in the script where we must add a database to an existing connection string. You can also extract any value from them, which is useful for a tool that doesn’t understand connection strings, such as BCP.exe or SQLCMD.exe.
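
As an illustration, here is a minimal sketch of both operations: adding a database to an existing connection string, and pulling individual values back out of it for a tool that needs them separately (the server name and credentials here are just placeholders):

    # Start from an existing connection string and add, or change, the database
    $builder = New-Object System.Data.SqlClient.SqlConnectionStringBuilder('Data Source=MyDevServer;User ID=DevBuilder;Password=NotARealPassword')
    $builder['Initial Catalog'] = 'pubs'
    $builder.ConnectionString           # the complete, amended connection string

    # Extract individual values for a tool that can't read connection strings
    $server   = $builder['Data Source']
    $database = $builder['Initial Catalog']
    $user     = $builder['User ID']
    $password = $builder['Password']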

In short, if you can use a connection string for any server or database, then it makes scripting far easier because all the relevant details of a database connection are held together in what amounts to a structured document.

Deploy the latest versions of a database to all servers in an SSMS group

In the routine that follows, we have two pipelines. Most of the work is done by a pipeline that processes each SQL Server instance in turn, either building or updating the database that you specify. If a new database is created, it is remembered and, in the second pipeline, filled with data. In other words, the second pipeline fills just the databases that you create, rather than any you update.

The list of servers, along with their connection strings, is provided by the local registered servers. This script is designed to be usable either from within SSMS or, if you wish, from the PowerShell ISE or the Windows Task Scheduler.

I’ve configured the script so that it detects whether you are running it from a Server Group in SSMS and, if so, uses that location as the default. So, when launching a deployment from within SSMS, it will simply use the server group from which the PowerShell session is invoked. You can, for example, deploy the new version to all the servers in your Test group simply by right-clicking on that folder and selecting Start PowerShell. When the PowerShell window appears, you invoke the script within it like this:
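
The exact command depends on where you’ve saved the script and what you’ve called it; assuming, purely for illustration, that it is saved as Deploy-ToServerGroup.ps1 in a scripts directory, the invocation is simply:

    # Hypothetical example: run the saved deployment script from the PowerShell window
    # that SSMS opens for the selected server group (path and file name are placeholders)
    & 'S:\DeploymentScripts\Deploy-ToServerGroup.ps1'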

If you run the script outside SSMS, then it uses the location of the Development group, by default, but you can change that to whatever is appropriate.

All you need to do before running it is specify the full location of the source control directory (I’ve provided the source for the pubs database to get you started), as well as the full name of the database you are building. Listing 1 shows the PowerShell for the first pipeline.
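
Listing 1 is too long to walk through line-by-line, but the heart of the first pipeline looks something like the simplified sketch below. It assumes the source directory and database name just described, omits the error handling and the bookkeeping that remembers which databases were newly created, and relies on the fact that SCA cmdlets such as Sync-DatabaseSchema will accept a connection string as the target:

    # Simplified sketch of the first pipeline: build or update the database on every
    # server registered in the Development group (error handling omitted)
    Import-Module sqlserver
    $SourcePath   = 'S:\work\pubs\Source'   # directory of object-level build scripts (example path)
    $DatabaseName = 'pubs'

    dir 'SQLSERVER:\SQLRegistration\Database Engine Server Group\Development' |
      ForEach-Object {
        # Connect to the instance itself via SMO, using the registered server's connection string
        $serverConnection = New-Object Microsoft.SqlServer.Management.Common.ServerConnection
        $serverConnection.ConnectionString = $_.ConnectionString
        $smoServer = New-Object Microsoft.SqlServer.Management.Smo.Server($serverConnection)

        # Create the database if it isn't already there
        if ($smoServer.Databases[$DatabaseName] -eq $null) {
            (New-Object Microsoft.SqlServer.Management.Smo.Database($smoServer, $DatabaseName)).Create()
        }

        # Add the database to the connection string and let SCA bring the schema up to date
        $builder = New-Object System.Data.SqlClient.SqlConnectionStringBuilder($_.ConnectionString)
        $builder['Initial Catalog'] = $DatabaseName
        Sync-DatabaseSchema -Source $SourcePath -Target $builder.ConnectionString
      }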

We can, of course, now stock the database with data as part of the routine, though this makes sense only if we are creating it from scratch. We therefore check whether we created the database and, if we did, save the server object so that we can then stock the new database with data.

For the attached BigPubsData.zip, I used SQL Data Generator to fill the pubs database with lots of development data and then exported it as a set of BCP native-format data files, as demonstrated in Scripting out SQL Server Data as Insert statements via PowerShell.

We saved the server object, so we can borrow its connection to pass the credentials to BCP. Sadly, BCP can’t use a connection string. That is no problem if you are using Windows security but, otherwise, we must supply SQL Server credentials, which we extract from the connection as a username and password.
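
Here is a minimal sketch of that hand-over, assuming that $smoServer is the saved server object, that $DatabaseName is the database we created, and that the native-format files from BigPubsData.zip have been unzipped locally (the file path and table name are just examples):

    # Bulk-load one native-format BCP file, reusing the credentials from the SMO connection
    $connection = $smoServer.ConnectionContext          # the ServerConnection we built earlier
    $dataFile   = 'S:\work\pubs\Data\dbo_authors.bcp'   # example path
    $table      = "$DatabaseName.dbo.authors"           # three-part name, so no -d switch is needed

    if ($connection.LoginSecure) {
        # Windows authentication: just pass the server name
        bcp $table in $dataFile -n -T -S $connection.ServerInstance
    }
    else {
        # SQL Server authentication: borrow the username and password from the connection
        bcp $table in $dataFile -n -S $connection.ServerInstance -U $connection.Login -P $connection.Password
    }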

Conclusions

In this article, I’ve illustrated how to integrate other tools with SQL Change Automation, taking SMO (in the form of the sqlserver module), BCP and SSMS as examples. I hope I’ve demonstrated that, because SCA uses connection strings, it is very easy to integrate SCA scripting with any other process running on Windows servers, whether that is a command-line tool, an ODBC connection or another PowerShell tool. This means that integration with other tools, reporting and so on is unlikely to pose problems.
