This question was recently posed by one of our customers:

For the most part, we have resisted putting excess structure around the way that changes are deployed to your database.

Unlike database projects that use the declarative style of deployment, which work by synchronizing a source-controlled model of your schema to a target database, we opted for the imperative approach of migrations, which hands control of state transitions over to you. The idea is that you don’t spend your time fighting a DSL that may offer productivity gains for common types of schema changes but ties you up in knots when it comes to handling the edge cases.

ReadyRoll does, however, add a certain amount of structure to the way that exceptions are handled within your scripts, and specifically to how this affects the outcome of your deployment (i.e. whether the migrations are committed or rolled back).

Here I will go through some of the conventions used in ReadyRoll deployments by examining the project artifacts; specifically, the T-SQL file that is generated when you perform a Build in Visual Studio, e.g. AdventureWorks\bin\Debug\AdventureWorks.sql. This file contains a concatenated list of zero or more Deploy-Once or Deploy-Change migration scripts from your ReadyRoll database project.

SQLCMD for deployment flexibility

The first thing you will see when you open the deployment script is the SQLCMD header:
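The exact contents of the header vary from project to project, but a sketch of the kind of SQLCMD directives it contains might look like this (the variable names here are illustrative, not the literal ReadyRoll output):

```sql
-- Halt execution at the current batch if any statement raises an error
:on error exit

-- Illustrative SQLCMD variables; real projects define their own set
:setvar DatabaseName "AdventureWorks"
:setvar DeployPath "."
```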

SQLCMD is a SQL Server utility with a unique scripting syntax that provides us with two key advantages over deploying with plain old T-SQL:

  • The :on error exit directive ensures that, if any statement within the script raises an unhandled exception, execution is immediately halted at the current batch (rather than simply continuing to the next batch after GO)
  • Support for variables which can be passed via the SQLCMD.EXE command line tool. Those variables can be used throughout your scripts using the $(VariableName) syntax. This is a feature that we’ve exploited extensively within ReadyRoll’s Octopus Deploy integration.
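As a sketch of the variable mechanism (the variable name is hypothetical), a script can reference a SQLCMD variable and a caller can supply its value on the command line:

```sql
-- Inside a migration script: reference a SQLCMD variable
PRINT 'Deploying to environment: $(EnvironmentName)';

-- From the command line, the value is supplied with the -v switch:
--   SQLCMD.EXE -S myserver -d AdventureWorks -i AdventureWorks.sql
--              -v EnvironmentName="Staging"
```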

XACT_ABORT to control execution flow

The next thing you’ll notice is the use of the SET XACT_ABORT ON option. I’ll defer to MSDN on this one:

When SET XACT_ABORT is ON, if a Transact-SQL statement raises a run-time error, the entire transaction is terminated and rolled back. When SET XACT_ABORT is OFF, in some cases only the Transact-SQL statement that raised the error is rolled back and the transaction continues processing.

To avoid the need to add excessive amounts of error-handling logic to your scripts, we explicitly set this option ON as part of the build and prevent you from setting it OFF in your own scripts (unless the Custom transaction-handling mode is used; more on that later).
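A minimal sketch of the behavior described above, using a hypothetical dbo.SampleTable:

```sql
SET XACT_ABORT ON;

BEGIN TRANSACTION;

INSERT INTO dbo.SampleTable (Id) VALUES (1);

-- The conversion failure here terminates the entire batch and rolls
-- back the open transaction, including the INSERT above.
INSERT INTO dbo.SampleTable (Id) VALUES ('not a number');

COMMIT TRANSACTION; -- never reached when the error occurs
```

With XACT_ABORT OFF, only the failing statement would be rolled back and execution would continue to the COMMIT.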

Putting it together

The combination of SQLCMD and XACT_ABORT ensures that, if an exception occurs at any point during deployment, the execution is halted and the connection dropped at the current statement.

Additionally, any Deploy-Once or Deploy-Change migrations that have been executed thus far will be rolled back.

One thing conspicuously missing here is a ROLLBACK statement. That’s because a rollback is implicit when XACT_ABORT is set: when an unhandled error occurs, SQL Server immediately performs a rollback and raises an error to the client (in our case, SQLCMD.EXE).
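As a simplified sketch (the migration names and contents are illustrative, and the real generated file includes more bookkeeping), the deployment script’s overall shape is:

```sql
:on error exit
SET XACT_ABORT ON;
GO

BEGIN TRANSACTION;
GO

-- Migration 0001_create_customer.sql (Deploy-Once)
CREATE TABLE dbo.Customer (Id INT NOT NULL PRIMARY KEY);
GO

-- Migration 0002_add_email.sql (Deploy-Once)
ALTER TABLE dbo.Customer ADD Email NVARCHAR(256) NULL;
GO

-- If any batch above raised an error, execution never reaches this
-- point: SQLCMD halts, and SQL Server rolls back the open transaction.
COMMIT TRANSACTION;
GO
```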

Taking control with exception handling

For the most part, we hope that the structure we’ve added around your migrations means you’ll never need to worry about how transactions are handled. However, there are some use cases in which you might want to exercise greater control over the flow of execution.

For example, say you want to write some details of an error to a log table. Typically, your log records would simply be rolled back along with your other changes when an error occurs.

But with a TRY/CATCH block, you have the ability to capture exceptions that are raised within your batch and issue a ROLLBACK yourself. For example:
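A minimal sketch of this pattern, using hypothetical dbo.Product and dbo.error_log tables:

```sql
BEGIN TRY
    BEGIN TRANSACTION;

    INSERT INTO dbo.Product (Id) VALUES (1);
    INSERT INTO dbo.Product (Id) VALUES (1); -- primary key violation

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Roll back the work done inside the TRY block ourselves...
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;

    -- ...then record the failure in a separate (autocommit) transaction,
    -- so the log row survives the rollback.
    INSERT INTO dbo.error_log (logged_at, message)
    VALUES (SYSDATETIME(), ERROR_MESSAGE());

    -- Re-raise to the client so the deployment is still halted.
    RAISERROR ('Migration failed: see error_log for details', 16, 1);
END CATCH
```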

This code raises an exception partway through the batch, causing the prior operations in the batch to be rolled back. However, the new row in the error_log will persist even after the script is halted (via the RAISERROR), because it is written in a separate transaction from the statements within the TRY block.

Note that not all exceptions can be contained by TRY/CATCH blocks: if for some reason your batch fails to compile, the error will not be caught and the exception will instead be thrown to the client. For example, a primary key violation error (like you might get with an INSERT statement) will be caught by the CATCH block, whereas a reference to a non-existent object will not.
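A sketch of the uncatchable case (dbo.DoesNotExist is, as the name suggests, never created):

```sql
BEGIN TRY
    -- Because of deferred name resolution, this object-not-found error
    -- surfaces at statement compilation time, so the CATCH block in the
    -- same batch never runs; the error goes straight to the client.
    SELECT * FROM dbo.DoesNotExist;
END TRY
BEGIN CATCH
    PRINT 'Not reached for an invalid object reference';
END CATCH
```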

How are syntax errors handled?

ReadyRoll runs all of your migrations through the T-SQL compiler during project build, so provided your statements are not being executed as dynamic SQL (e.g. using EXEC or sp_executesql), you should not receive syntax errors at deployment time.
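To sketch why dynamic SQL is the exception (table name illustrative):

```sql
-- The build-time compiler sees only a string literal here, so the typo
-- ("SELEC") goes undetected until the statement actually runs.
DECLARE @sql NVARCHAR(200) = N'SELEC TOP 1 * FROM dbo.Customer';

EXEC sp_executesql @sql; -- fails at deployment time with a syntax error
```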

However testing your database project by doing a full deploy to a test server is still the most foolproof way of validating your T-SQL migrations.

Disabling automatic transaction handling

If your use case dictates that the migration be executed outside of a user transaction, or in a transaction that is isolated from any others in the deployment, you can control this behavior at the script level by switching to Custom transaction handling:
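In ReadyRoll this is controlled through the metadata comment at the top of the migration script; a sketch of the idea (the ID value is illustrative, and the exact attribute syntax may differ between ReadyRoll versions):

```sql
-- <Migration ID="01234567-89ab-cdef-0123-456789abcdef" TransactionHandling="Custom" />

-- Statements in this script now run outside ReadyRoll's deployment
-- transaction, so any transaction management is up to the script itself.
```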

You might want to use Custom mode if:

  • You have a long-running operation like a BULK INSERT and you want to commit batches of rows at a time
  • You want to automate the deployment of a server-level object such as a linked server
  • You need to perform an ALTER DATABASE operation on the current database
  • You need to add full-text indexes to your database, or perform some other operation which cannot be done within a user transaction

Prior to executing your Custom-flagged migration, ReadyRoll will COMMIT any open transactions. After executing that migration, a new transaction will be opened for any remaining migrations that are pending deployment.
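For instance, a Custom-flagged migration for the long-running-operation case might manage its own transactions and commit in batches (dbo.BigTable and its Migrated column are hypothetical):

```sql
-- Runs with Custom transaction handling: no deployment transaction is
-- open, so the script commits its own work in chunks to keep the
-- transaction log small.
DECLARE @rows INT = 1;

WHILE @rows > 0
BEGIN
    BEGIN TRANSACTION;

    UPDATE TOP (10000) dbo.BigTable
    SET Migrated = 1
    WHERE Migrated = 0;

    SET @rows = @@ROWCOUNT;

    COMMIT TRANSACTION;
END
```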
