
7 October 2019

Guest post

This is a guest post from Phil Factor. Phil Factor (real name withheld to protect the guilty), aka Database Mole, has 30 years of experience with database-intensive applications.

Despite having once been shouted at by a furious Bill Gates at an exhibition in the early 1980s, he has remained resolutely anonymous throughout his career.

He is a regular contributor to Simple Talk and SQLServerCentral.


Recreating Databases from Scratch with SQL Change Automation

Phil Factor starts with the basics: how to rebuild a set of development databases from scratch, using SQL Change Automation. He then demonstrates how to check for any active sessions before rebuilding, how to import test data using BCP, and how to secure passwords when connecting to the target with SQL Server credentials.


You are using SQL Change Automation (SCA), or perhaps SQL Compare, to prepare development databases. You have the routine task of updating a list of development databases, on various instances, from the current source of the database. This isn’t exactly rocket science, but it needs to be done often, and so must be entirely automated. You don’t care what is on the existing copies of the databases, so you want to “kill and recreate”. You also want the databases correctly versioned and checked, and you need the code analysis too.

I’ll show how to automate all this, keeping things simple to start with and ignoring the three elephants chewing meditatively on branches in the corner of the room. These are called ‘development-data’, ‘passwords-in-script’ and ‘database-in-use’. Firstly, development databases are no use without data; secondly, you can’t leave passwords in scripts; and finally, you shouldn’t, and would be unlikely to want to, kill a database that is in use.

Having demonstrated the basic principles of how we get SCA to recreate databases from scratch, I’ll show a final script that deals with the three elephants humanely.

The basic script

We’ll start the script by itemizing the target databases, using a connection string for each. At the same time, we specify the source and the details about the current build and database version.

We delete the database if it currently exists. Then we recreate each of the databases by using an SCA release object. First, we create a build artifact from the source (a single build script, or a set of object build scripts), adding to it the project name, version and description. From that, we create a release artifact. There is always one build artifact per database version, but generally we need a separate release artifact for each target. However, here, we can be certain all targets are identical, so having created it the first time, we simply reuse the same release object on each empty database.

Import-Module SqlChangeAutomation
import-Module sqlserver
<# here you specify the source of the release, a source control directory in our case but it could be a single build script or a connection string to a source database.
You need to specify the project details under 'projects' and then a list of connection strings for each of the targets on which you'd like a clean install
#>
$Databases =
@{
  'source' = 'S:\work\programs\SQL\ScriptsDirectory\Pubs\scripts';
  'Project' = @{ 'name' = 'KillnFill'; 'version' = '1.4.5'; 'description' = 'This demonstrates how to do a clean build' }
  
  'CleanTargets' = @(
    <# list of connection strings for each of the targets on which you'd like a clean install    #>
    'Server=MyInstance;Database=Thomas;User Id=MyUserName;Password= ACunningPassword;Persist Security Info=False',
    'Server=MyInstance;Database=Richard;User Id=MyUserName;Password=ACunningPassword;Persist Security Info=False',
    'Server=MyInstance;Database=Harold;User Id=MyUserName;Password= ACunningPassword;Persist Security Info=False',
    'Server=MyInstance;Database=Ebeneezer;User Id=MyUserName;Password= ACunningPassword;Persist Security Info=False'
  )
}
$ReleaseArtifact = $null; #start off with a null release artifact so you know when to make one
$databases.CleanTargets | foreach {
  $csb = New-Object System.Data.Common.DbConnectionStringBuilder
  $csb.set_ConnectionString($_) #we need to remove the database to connect at server level
  $null = $csb.Remove('database') #Remove() returns a bool; the builder now holds a server-level connection string
  try # now we make a SMO connection to the server, using the modified connection string
  {
    $sqlConnection = new-object System.Data.SqlClient.SqlConnection $csb.ConnectionString
    $conn = new-object Microsoft.SqlServer.Management.Common.ServerConnection $sqlConnection
    $srv = new-object Microsoft.SqlServer.Management.Smo.Server $conn
  }
  catch
  {
    Write-error "Could not connect to SQL Server instance '$($csb.server)': $($error[0].ToString() +
      $error[0].InvocationInfo.PositionMessage). Script is aborted"
    exit -1
  }
  $csb.set_ConnectionString($_) #restore the connection string as we need the name of the database
  $Db = $csb.'database'
  
  #if the database already exists, then kill it
  If (@($srv.Databases | % { $_.Name }) -contains $csb.'Database') # search for the name
  { $srv.KillDatabase($csb.'Database') } # if it is there then kill it
  # in one of my articles, I show how to write out any changes
  # Now we create the database
  $DestinationDatabaseObject = `
  New-Object Microsoft.SqlServer.Management.Smo.Database ($srv, $csb.'Database')
  $DestinationDatabaseObject.Create()
  # double check that it all worked
  if ($DestinationDatabaseObject.name -ne $csb.Database)
  {
    Write-error "Could not create database $($csb.'Server').$($csb.'Database'): $($error[0].ToString() +
      $error[0].InvocationInfo.PositionMessage). Script is aborted"
    exit -1
  }
  if ($ReleaseArtifact -eq $null)
  {
    #only do this once since all release artifacts will be the same
    Write-Verbose "Now creating the build object using ( $($csb.'Server').$($csb.'Database'))"
    <# we create a build artifact so we can put in the package version etc.#>
    $buildArtifact = $databases.source |
    New-DatabaseProjectObject | #wrap up the script and create a build artifact
    New-DatabaseBuildArtifact -PackageId $Databases.Project.name -PackageVersion $Databases.Project.version `
                  -PackageDescription $Databases.Project.Description
    Write-verbose "Now creating the release object using ( $($csb.'Server').$($csb.'Database'))"
    
    $ReleaseArtifact = new-DatabaseReleaseArtifact  `
                             -Source $buildArtifact  `
                             -Target $csb.ConnectionString   `
                             -AbortOnWarningLevel None -SQLCompareOptions IgnoreSystemNamedConstraintNames
  }
  #at this point, we will have the release object so we can use it
  Use-DatabaseReleaseArtifact -InputObject $ReleaseArtifact -DeployTo $csb.ConnectionString
}

The elephants in the room

Now, let’s look at the elephants, ‘database-in-use’ and ‘development-data’, and then the baby elephant ‘passwords-in-script’.

Database-in-use

As we have created a connection with the server, it is easy to check whether the database is in use. We’ll simply see if anyone has an open session and, if so, we won’t delete the database. If the session count is greater than zero, we can’t deploy the new version of the database, but we want to press on with the other databases, because the chances are that only one or two developers are working late, or have left a session open in SSMS after packing up for the day.

You can do this in SQL, as follows:

SELECT Count(*) AS sessions
  FROM sys.dm_exec_sessions
  WHERE database_id = Db_Id('MyDatabaseName');

However, SMO has a built-in method for this, $srv.GetActiveDBConnectionCount($databaseName), so we’ll use that.

One caveat with checking for existing connections is that we mustn’t leave our own connections lying around when running from the PowerShell IDE, so we need to close them ourselves. This isn’t necessary with a command-line script, because .NET closes them automatically when the PowerShell session ends.
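If you are working interactively, something like this will do it (a sketch, assuming `$srv` is the SMO server object used in the scripts in this article):

```powershell
# Close this script's own SMO connection so that it doesn't
# show up as an active session on the next check
if ($srv.ConnectionContext.IsOpen) { $srv.ConnectionContext.Disconnect() }
```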

Development-data

I admit that, where each database requires a lot of data, it saves a lot of time and space to use SQL Clone for this: you build and fill one database, then distribute clones of it to all the development instances. Otherwise, we can just use SCA and store the data in a separate directory as native BCP files. You can do it in other ways if you don’t mind waiting longer.

I use a PowerShell script to read BCP data out of a database (see Scripting out SQL Server Data via PowerShell). The data must match the metadata of the tables in the databases that you are creating. If your new version of the database has altered the tables, you will need a preliminary step: do a single build to an existing version of the database using SCA, with migration scripts where necessary, and then save the data from that build.
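The export side isn’t shown in this article (the details are in the article referenced above), but as a rough sketch, assuming a trusted connection and the `$directory`, `$Db` and SMO `$srv` objects used in the scripts here, it looks like this:

```powershell
# Sketch: export every user table as a native-format BCP file
$srv.Databases[$Db].Tables | Where-Object { -not $_.IsSystemObject } | ForEach-Object {
  bcp "$($_.Schema).$($_.Name)" out "$directory\$($_.Schema)_$($_.Name).bcp" `
    -n -T "-d$Db" "-S$($srv.Name)"
}
```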

The process of reading the data in is relatively straightforward using command-line BCP. Here is a fragment that does it, using a connection string and an SMO database connection. After the BCP session completes for each database, all the database constraints are re-enabled (they are automatically disabled by BCP, to allow the import to succeed whatever order you import the tables in).

# for every table in the database .....
$DestinationDatabaseObject.Tables | where { $_.IsSystemObject -eq $false } |
foreach {
  if (!$SQLUserName)<# OK. Easy, a trusted connection #>
  {
    #native format -n, Trusted connection -T
    $Progress = BCP "$($_.Schema).$($_.Name)"  in  "$($directory)\$($_.Schema)_$($_.Name).bcp"   `
            -n -T "-d$($csb.Database)"  "-S$($srv.Name)"
  }
  else <# if not a trusted connection we need to provide a userid and password #>
  {
    $Progress = BCP "$($_.Schema).$($_.Name)"  in  "$($directory)\$($_.Schema)_$($_.Name).bcp"  `
            -n "-d$($csb.Database)"  "-S$($srv.Name)"  `
            "-U$($srv.ConnectionContext.Login)" "-P$($srv.ConnectionContext.Password)"
  }
  if (-not ($?) -or ([string]$progress -like '*error*')) # if there was an error
  {
    throw ("Error with data import of $($directory)\$($_.Schema)_$($_.Name).bcp $($progress -like '*error*')");
  }
  else
  {
    Write-output "reading in to $($_.Schema).$($_.Name) $($directory)\$($_.Schema)_$($_.Name).bcp"
  }
}
<# Remember to set the constraints going #>
$null = $srv.ConnectionContext.ExecuteNonQuery("EXEC sp_msforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT all'")

Passwords-in-script

This only applies if one or more of your targets is on a server that can be reached only via SQL Server credentials, such as a SQL Server in a container, or an Azure connection. Here, we simply leave the password out of the connection string and store it in an encrypted XML file in the user area, using Import-CliXml and Export-CliXml in PowerShell. The downside is that, if the file isn’t there, you will be asked for the password so that the password file can be created. The Export-CliXml cmdlet encrypts credential objects using the Windows Data Protection API, which ensures that the contents of the credential object can be decrypted only by your user account, and only on that computer.

The reason that a connection string is so useful is that everything about a connection is stored there and read by the driver, even such things as connection timeouts, and this information is easy to extract once you turn it into a .NET object. Once it is a working script, this should be pretty safe, but it isn’t if you wander off to lunch without locking the Windows session.
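As a minimal illustration of pulling a connection string apart with the .NET builder (the server name and timeout are invented):

```powershell
# Parse a connection string into its parts
$csb = New-Object System.Data.Common.DbConnectionStringBuilder
$csb.set_ConnectionString('Server=MyInstance;Database=Thomas;Connect Timeout=30')
$databaseName = $csb.'database'   # the database name, 'Thomas'
$null = $csb.Remove('database')   # leaves a server-level connection string
```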

This complicates the code and has the disadvantage, though perhaps also the advantage, of making it look more complicated than it is.

The final code

Here is the complete PowerShell code.

Import-Module SqlChangeAutomation
import-Module sqlserver
<# Here you specify the source of the release, a source control directory in our
case but it could
be a single build script or a connection string to a source database
You need to specify the project details under 'projects' and then a list of connection strings
for each of the targets on which you'd like a clean install
#>
$Databases =
@{
  'source' = 'MyPathto\MyDevDatabase\Schema-Model';
  'Data' = 'MyPathto\MyDevDatabase\Data';
  'Project' = @{
    'name' = 'KillnFill'; 'version' = '1.4.5';
    'description' = 'This demonstrates how to do a clean build'
  }
  
  'CleanTargets' = @(
    <# list of connection strings for each of the targets on which you'd like a clean install    #>
    'Server=MyInstance;Database=Thomas;User Id=MyUserName;Persist Security Info=False',
    'Server=MyInstance;Database=Richard;User Id=MyUserName;Persist Security Info=False',
    'Server=MyOtherInstance;Database=Harold;Persist Security Info=False',
    'Server=MyOtherInstance;Database=Ebeneezer;Persist Security Info=False'
    
  )
  
}
$ReleaseArtifact = $null; #start off with a null release artifact so you know when to make one
$databases.CleanTargets | foreach {
  $csb = New-Object System.Data.Common.DbConnectionStringBuilder
  $csb.set_ConnectionString($_) #we need to remove the database to connect at server level
  $databaseName = $csb.database; #we must remember it so we can put it back later
  $null = $csb.Remove('database') #Remove() returns a bool; the builder now holds a server-level connection string
  # create an SMO connection get credentials if necessary
  if ($csb.ContainsKey('user id')) #then it is using SQL Server Credentials
  {
    $SqlUserName = $csb.'user id' #needed for the password file name and credential prompt
    $SqlEncryptedPasswordFile = `
    "$env:USERPROFILE\$($SqlUserName)-$($csb.server).xml"
    # test to see if we know about the password in a secure string stored in the user area
    if (Test-Path -path $SqlEncryptedPasswordFile -PathType leaf)
    {
      #has already got this set for this login so fetch it
      $SqlCredentials = Import-CliXml $SqlEncryptedPasswordFile
      
    }
    else #then we have to ask the user for it (once only)
    {
      #hasn't got this set for this login
      $SqlCredentials = get-credential -Credential $SqlUserName
      $SqlCredentials | Export-CliXml -Path $SqlEncryptedPasswordFile
    }
    $ServerConnection =
    new-object `
           "Microsoft.SqlServer.Management.Common.ServerConnection"`
    ($csb.server, $SqlCredentials.UserName, $SqlCredentials.GetNetworkCredential().password)
  }
  else
  {
    $SqlUserName = '' #trusted connection; clear any credentials left from a previous target
    $ServerConnection =
    new-object `
           "Microsoft.SqlServer.Management.Common.ServerConnection" `
    ($csb.server)
  }
  
  
  try # now we make an SMO connection  to the server, using the modified connection string
  {
    $srv = new-object ("Microsoft.SqlServer.Management.Smo.Server") $ServerConnection
  }
  catch
  {
    Write-error `
          "Could not connect to SQL Server instance '$($csb.server)': $($error[0].ToString() +
      $error[0].InvocationInfo.PositionMessage). Script is aborted"
    exit -1
  }
  $Currentconnections = $srv.GetActiveDBConnectionCount($databaseName)
  if ($Currentconnections -eq 0)
  {
    $csb.set_ConnectionString($_)
    #restore the connection string as we need the name of the database
    if ($csb.ContainsKey('user id'))
    { $csb.Add('Password', $ServerConnection.Password) }
    $Db = $csb.'database'
    
    #if the database already exists, then kill it
    If (@($srv.Databases | % { $_.Name }) -contains $csb.'Database') # search for the name
    { $srv.KillDatabase($csb.'Database') } # if it is there then kill it
    # Now we create the database. First we create the SMO object
    $DestinationDatabaseObject = `
    New-Object Microsoft.SqlServer.Management.Smo.Database ($srv, $csb.'Database')
    $DestinationDatabaseObject.Create()
    # double check that it all worked
    if ($DestinationDatabaseObject.name -ne $csb.Database)
    {
      Write-error `
            "Could not create database $($csb.'Server').$($csb.'Database'): $($error[0].ToString() +
        $error[0].InvocationInfo.PositionMessage). Script is aborted"
      exit -1
    }
    if ($ReleaseArtifact -eq $null)
    {
      #only do this once since all release artifacts will be the same
      Write-Verbose "Now creating the build object using ( $($csb.'Server').$($csb.'Database'))"
    <# we create a build artifact so we can put in the package version etc#>
      if (!(Test-Path -path "$($databases.source)\RedGateDatabaseInfo.xml" -PathType leaf))
      {
        $XMLContents = [xml]$srv.ConnectionContext.ExecuteScalar("
USE $Db
SELECT '<?xml version=`"1.0`" encoding=`"utf-16`"?>
  <DatabaseInformation>
    <DefaultCollation>'+CONVERT (varchar(256), DATABASEPROPERTYEX(Db_Name(),'collation'))+'</DefaultCollation>
    <DefaultSchema>'+name+'</DefaultSchema>
    <DefaultUser>'+name+'</DefaultUser>
    <DefaultFilegroup>'+(SELECT name FROM sys.filegroups WHERE is_default=1)+'</DefaultFilegroup>
    <DatabaseVersion>'+Convert(VARCHAR(2),ServerProperty('ProductMajorVersion'))+'</DatabaseVersion>
  </DatabaseInformation>'
FROM sys.schemas WHERE schema_id=1")
        $XMLContents.Save("$($databases.source)\RedGateDatabaseInfo.xml")
      }
      $buildArtifact = $databases.source |
      New-DatabaseProjectObject | #wrap up the script and create a build artefact
      New-DatabaseBuildArtifact `
                    -PackageId $Databases.Project.name `
                    -PackageVersion $Databases.Project.version `
                    -PackageDescription $Databases.Project.Description
      Write-verbose `
              "Now creating the release object using ( $($csb.'Server').$($csb.'Database'))"
      
      $ReleaseArtifact = new-DatabaseReleaseArtifact  `
                               -Source $buildArtifact  `
                               -Target $csb.ConnectionString   `
                               -AbortOnWarningLevel None -SQLCompareOptions IgnoreSystemNamedConstraintNames
    }
    #at this point, we will have the release object so we can use it
    Use-DatabaseReleaseArtifact -InputObject $ReleaseArtifact -DeployTo $csb.ConnectionString
    <# find out what the data directory is #>
    $directory = $Databases.data
    <# check that the directory exists #>
    if (-not (Test-Path -PathType Container $directory))
    {
      Write-error "Could not find data directory $directory"
      exit -1
    }
    
    # for every table in the database .....
    $DestinationDatabaseObject.Tables | where { $_.IsSystemObject -eq $false } |
    foreach{
      if (!$SQLUserName)<# OK. Easy, a trusted connection #>
      {
        #native format -n, Trusted connection -T
        $Progress = BCP "$($_.Schema).$($_.Name)"  in  "$($directory)\$($_.Schema)_$($_.Name).bcp"   `
                -n -T "-d$($csb.Database)"  "-S$($srv.Name)"
      }
      else <# if not a trusted connection we need to provide a userid and password #>
      {
        $Progress = BCP "$($_.Schema).$($_.Name)"  in  "$($directory)\$($_.Schema)_$($_.Name).bcp"  `
                -n "-d$($csb.Database)"  "-S$($srv.Name)"  `
                "-U$($srv.ConnectionContext.Login)" "-P$($srv.ConnectionContext.Password)"
      }
      if (-not ($?) -or ([string]$progress -like '*error*')) # if there was an error
      {
        throw ("Error with data import of $($directory)\$($_.Schema)_$($_.Name).bcp $($progress -like '*error*')");
      }
      else
      {
        Write-verbose `
                "reading in to $($_.Schema).$($_.Name) $($directory)\$($_.Schema)_$($_.Name).bcp"
      }
    }
<# Remember to set the constraints going #>
    $null = $srv.ConnectionContext.ExecuteNonQuery("EXEC sp_msforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT all'")
  }
  else { Write-warning "The Database $databaseName on $($csb.server) 
       had $Currentconnections connections to it so couldn't be refreshed"
  }
}

Conclusions

It is a curious thought that we often waste time synchronizing development databases to bring them up to the latest version, when it is often better, quicker and easier to do a clean build and, if you need data, just BCP the data into the empty database.

There comes a point when the amount of data needed precludes this option and a tool like SQL Clone is needed. However, for smaller data volumes, SCA provides a nice clean version of the database, with data, for development work every time a new version is released.
