How to Fix Ten SSDT Deployment Snags, With or Without ReadyRoll



Redgate ReadyRoll, which features in this article, is now part of SQL Change Automation.

In this article, which follows on logically from one I wrote about Solving Visual Studio database delivery problems with ReadyRoll, I’ll be describing the ten most common SSDT database deployment problems that you’re likely to hit when doing database development within Visual Studio. This article will not just show you how to work around the problems using the tools available in SSDT, but also how a solution might look when implemented using a migrations-based approach to database development. I’ve used Redgate ReadyRoll for these examples because I designed and originally built this application, so I’m very familiar with it. I was inspired to create it after hitting the sort of problems that I’ll be describing.

There are certain deployment activities that are likely to throw up problems. These are:

  1. Splitting or merging tables
  2. Splitting or merging columns
  3. Constraints and computed columns repeatedly dropped and recreated
  4. Changing from system type to user-defined type
  5. Preventing column order differences causing table rebuilds
  6. Adding NOT NULL columns without a DEFAULT
  7. Handling index rebuilds
  8. Avoiding table locking
  9. Preventing log growth when adding columns
  10. Preserving the start value of a SQL Sequence object

I’ve designed this article so that, if you’re in the process of trying to resolve a particular issue, you can jump to the relevant section. If not, browse through them and discover how you can resolve the issue with either the SSDT or ReadyRoll approach to database change management.

#1 – Splitting or merging tables

If you attempt to split a table, the change will trigger SSDT’s data-loss safeguard and the deployment will be terminated with an error. To proceed, you must disable the safeguard and then use pre- and post-deployment scripts to move the data from the source table to the destination table, so that no data is lost. The same method applies when merging tables.

In the pre-deployment script, you create a new temporary table and insert data into it from the source table. In the post-deployment script, you then move the data from the temporary table to the new table, and delete the temporary table. Finally, remember to write the pre- and post-deployment scripts defensively, adding guard statements so that the data move cannot be repeated in subsequent deployments (these scripts are executed on every deployment, not just the one that makes the schema change).
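As an illustrative sketch (the Customer and CustomerAddress tables and their columns are hypothetical, not taken from a real project), the pair of scripts might look like this:

-- Pre-Deployment script: stash the data that is about to be dropped.
-- The guards ensure the copy only happens while the old column still exists
-- and has not already been stashed.
IF OBJECT_ID('dbo.tmp_CustomerAddress') IS NULL
   AND COL_LENGTH('dbo.Customer', 'Address') IS NOT NULL
BEGIN
    SELECT CustomerId, Address, City
    INTO dbo.tmp_CustomerAddress
    FROM dbo.Customer;
END;

-- Post-Deployment script: move the stashed data into the new table, then clean up.
-- The guard makes the script safe to run on deployments where there is nothing to move.
IF OBJECT_ID('dbo.tmp_CustomerAddress') IS NOT NULL
BEGIN
    INSERT INTO dbo.CustomerAddress (CustomerId, Address, City)
    SELECT CustomerId, Address, City
    FROM dbo.tmp_CustomerAddress;

    DROP TABLE dbo.tmp_CustomerAddress;
END;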

When splitting a table with ReadyRoll, the generated migration script (see this article to find out how it works) can be modified to insert the data into the new table prior to deleting the column from the existing table, as follows:
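A minimal sketch of such an edited migration, assuming a hypothetical dbo.Customer table whose address columns are being moved into a new dbo.CustomerAddress table:

-- Generated by ReadyRoll: create the new table
CREATE TABLE dbo.CustomerAddress
(
    CustomerId INT NOT NULL PRIMARY KEY,
    Address    VARCHAR(100) NULL,
    City       VARCHAR(50)  NULL
);
GO

-- Manually added: copy the existing data across before the old columns are dropped
INSERT INTO dbo.CustomerAddress (CustomerId, Address, City)
SELECT CustomerId, Address, City
FROM dbo.Customer;
GO

-- Generated by ReadyRoll: remove the columns from the original table
ALTER TABLE dbo.Customer DROP COLUMN Address, City;
GO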

Similarly, when merging a table with ReadyRoll, the migration script can be modified to migrate the data to the destination table prior to deleting the old table:
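Again as a sketch, assuming a hypothetical dbo.ArchivedOrder table being merged into dbo.[Order] (and assuming OrderId is not an IDENTITY column):

-- Manually added: migrate the rows into the destination table first
INSERT INTO dbo.[Order] (OrderId, CustomerId, OrderDate)
SELECT a.OrderId, a.CustomerId, a.OrderDate
FROM dbo.ArchivedOrder AS a
WHERE NOT EXISTS (SELECT 1 FROM dbo.[Order] AS o WHERE o.OrderId = a.OrderId);
GO

-- Generated by ReadyRoll: drop the now-redundant table
DROP TABLE dbo.ArchivedOrder;
GO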

[Source: Migrating Data in SSDT without losing it, blog post by Ed Elliott]

#2 – Splitting or merging columns

If you split or merge columns using SSDT, you risk dropping the original column, and with it the data it contains, and ending up with two empty columns. SSDT’s comparison engine has no way of knowing how to populate the new column based on the contents of the old column, because the process cannot be sure what the developer intended by the chain of actions.

With SSDT, once again the work-around is to write pre- and post-deployment scripts to create a new temporary column, insert data into it from the source column, move the data to the new column, and delete the temporary column.

When splitting a column with ReadyRoll, you can modify the migration script to populate the new columns by splitting the data within an UPDATE statement, before dropping the old column:
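For example, splitting a hypothetical FullName column into FirstName and LastName (a naive first-space split, for illustration only):

-- Generated by ReadyRoll: add the new columns
ALTER TABLE dbo.Person ADD FirstName VARCHAR(50) NULL, LastName VARCHAR(50) NULL;
GO

-- Manually added: split the existing data on the first space
UPDATE dbo.Person
SET FirstName = LEFT(FullName, CHARINDEX(' ', FullName + ' ') - 1),
    LastName  = LTRIM(STUFF(FullName, 1, CHARINDEX(' ', FullName + ' '), ''));
GO

-- Generated by ReadyRoll: drop the old column
ALTER TABLE dbo.Person DROP COLUMN FullName;
GO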

When merging columns using ReadyRoll, the migration script can be modified to populate the new column by concatenating the data within an UPDATE statement, before dropping the old columns:
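For example, merging the hypothetical FirstName and LastName columns back into a single FullName column:

-- Generated by ReadyRoll: add the new column
ALTER TABLE dbo.Person ADD FullName VARCHAR(101) NULL;
GO

-- Manually added: concatenate the existing data (CONCAT treats NULLs as empty strings)
UPDATE dbo.Person
SET FullName = CONCAT(FirstName, ' ', LastName);
GO

-- Generated by ReadyRoll: drop the old columns
ALTER TABLE dbo.Person DROP COLUMN FirstName, LastName;
GO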

[Source: Migrating Data in SSDT without losing it, blog post by Ed Elliott]

#3 – Constraints and computed columns repeatedly dropped and recreated

SSDT will sometimes attempt to synchronize an object, even though the object defined in your project files has already been deployed.

For example, if you use the BETWEEN clause within a check constraint, it will be replaced with a greater-than-and-less-than expression.

Another example, well-described by Peter Schott, happens when you include a computed column within your project that uses the CONVERT function: at deployment time, any use of CONVERT will be replaced with an equivalent CAST expression. This altered version of your expression becomes the canonical definition within your table, resulting in drift between your deployed table and the table definition in your SSDT project.
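You can see the canonicalization for yourself by deploying a definition such as the following (a hypothetical table, for illustration) and then querying the catalog views:

CREATE TABLE dbo.Product
(
    ProductId  INT NOT NULL,
    Quantity   INT NOT NULL,
    PriceText  VARCHAR(20) NULL,
    PriceValue AS CONVERT(DECIMAL(10, 2), PriceText),                 -- written with CONVERT
    CONSTRAINT CK_Product_Quantity CHECK (Quantity BETWEEN 1 AND 100) -- written with BETWEEN
);
GO

-- The stored definitions no longer match the project text:
SELECT definition FROM sys.check_constraints
WHERE parent_object_id = OBJECT_ID('dbo.Product');
-- e.g. ([Quantity]>=(1) AND [Quantity]<=(100)) : BETWEEN has been expanded

SELECT definition FROM sys.computed_columns
WHERE object_id = OBJECT_ID('dbo.Product');
-- the expression is likewise stored in canonicalized form, not as you typed it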

During the next deployment, SSDT will attempt to resolve the drift by dropping and recreating the object. However, because the canonical definition will be changed each time, the objects can never be fully brought into sync. This can result in slow deployments in larger databases, as every time the project is deployed, the affected objects must be dropped and recreated.

The SSDT work-around is to use the Schema Compare tool after deployment to import the canonical definitions of the tables by reverse-engineering them from your live database into your project. This resolves the drift and prevents unnecessary re-deployments of the objects. For more information on this work-around, see MSDN.

To avoid having to resolve the drift after a deployment, ensure that you always write your expressions in the canonical way that SQL Server stores them. Other than the examples mentioned above, there aren’t any published guidelines on how to write your expressions in the canonical way, so unfortunately a degree of trial-and-error is involved.

ReadyRoll is not affected by this issue, as it always uses the canonical versions of your objects as the basis of script generation.

If you happen to include any custom-written migrations that contain constraint or computed column expressions, ReadyRoll will not attempt to re-synchronize the objects in the target database at deployment time. This is due to the incremental nature of migration-driven deployment: rather than using the state of the target database to calculate the deployment script, ReadyRoll simply uses a log table to determine which migrations need to be applied.

[Source: SSDT: Tips, Tricks, and Gotchas blog post by Peter Schott]

#4 – Changing from system type to user-defined type

A problem can occur when changing the data type of an existing column to a user-defined type. A user in the MSDN forums reported that the deployment script that SSDT generates will attempt to cast the column directly to the user-defined type, rather than to its underlying system type. Because CAST only supports system-defined types, this causes the deployment to fail. Note that the issue appears to be limited to columns with a numeric base type.

The operation must therefore be implemented outside of SSDT as a manually-executed “pre-compare script” to prevent SSDT from generating the script containing the invalid SQL. Once the pre-compare script has been executed, the SSDT script can then be generated and deployed.
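As a rough sketch, such a manually-executed pre-compare script might look like this (the Invoice table and Money2 alias type are hypothetical):

-- Run manually, outside the SSDT deployment pipeline, before generating the publish script.
CREATE TYPE dbo.Money2 FROM DECIMAL(19, 4) NOT NULL;
GO

-- Altering the column directly to the alias type is valid T-SQL; it is only the CAST
-- to the alias type in SSDT's generated script that fails.
ALTER TABLE dbo.Invoice ALTER COLUMN Amount dbo.Money2 NOT NULL;
GO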

ReadyRoll’s script-generation engine, which uses SQL Compare under-the-hood, is not affected by this issue.

[Source: Incorrect casting of Data types while publishing database projects using VS2015 MSDN forum post by Vamsik87]

#5 – Preventing column order differences causing table rebuilds

Update 2017-04-19: SSDT/DacFX 17.0 now allows you to ignore the column order when deploying your database

In SSDT, there is no option that allows you to specify that column order should be ignored, either for schema compare or deployment. If differences in the order of columns are spotted in a deployment, SSDT will initiate a table rebuild operation which can incur an unacceptable performance cost for huge tables.

Despite there being no setting to ignore column order, there are some specific use cases where a solution may be possible, using a deployment contributor. These are used to modify the behaviour of the build and deployment actions for database projects in Visual Studio, as SQL Server developer Ed Elliott explains in his informative blog post.

ReadyRoll’s script generation engine ignores column ordering by default, so it is not affected by this issue. However, for those cases when you need to control the column order, you can set the SyncOptionIgnoreColumnsOrder option within your project’s .sqlproj file to change ReadyRoll’s synchronization behaviour.
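For example, the option can be set as a property in the .sqlproj file along the following lines (the exact element placement may vary between ReadyRoll versions, so treat this as a sketch rather than a definitive reference):

<!-- In the database project's .sqlproj file -->
<PropertyGroup>
  <!-- True (the default) ignores column order; set to False to have column order respected -->
  <SyncOptionIgnoreColumnsOrder>False</SyncOptionIgnoreColumnsOrder>
</PropertyGroup>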

[Source: Schema Comparisons – Ignore column order MSDN forum post by zicom]

#6 – Adding NOT NULL columns without a DEFAULT

When adding a NOT NULL column without a default, the existing rows must be populated with values. Although it is possible to use a fixed value, that won’t help if the value needs to be derived from another column or table.

SSDT will cope with the basic case via the Generate smart defaults option in Publish / Advanced Deployment Settings, as explained in this helpful blog post by Jamie Thomson, Lead Developer at Dunnhumby.

For more complex requirements, custom SQL in a post-deployment script can include the UPDATE logic, but this means that you must initially deploy the column without the NOT NULL constraint. Once the column has been populated, the column can be updated in the project to include the NOT NULL constraint and re-deployed. At this point, it may be prudent to also remove the post-deployment script from the project.

With ReadyRoll, the migration script can be edited directly to add the appropriate UPDATE logic. For example, after the ReadyRoll DbSync tool has generated the script containing an ALTER TABLE ADD COLUMN statement, an UPDATE FROM statement can be appended to the script to populate the new column with data from another table/column. No defensive logic is required because ReadyRoll automatically ensures that the operation is performed once only.

An example of the script you’ll need is as follows:
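The table and column names below are placeholders; one way to write the edited migration is to add the column as nullable, populate it from the related table, and then enforce the NOT NULL constraint:

-- Generated by ReadyRoll: add the new column
ALTER TABLE dbo.[Order] ADD CustomerEmail VARCHAR(255) NULL;
GO

-- Manually appended: derive the value from another table
UPDATE o
SET    o.CustomerEmail = c.Email
FROM   dbo.[Order] AS o
JOIN   dbo.Customer AS c ON c.CustomerId = o.CustomerId;
GO

-- Manually appended: now that every row has a value, enforce NOT NULL
ALTER TABLE dbo.[Order] ALTER COLUMN CustomerEmail VARCHAR(255) NOT NULL;
GO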

Note: Some refactoring patterns suggest keeping the old column/table in place until the new columns are bedded in, because this allows for a staged release with the option to roll back. In the above example, the DROP TABLE statement could be placed into a script within a “Draft” project sub-folder (which will not be executed at deployment time) and then later moved into the “Migrations” folder as part of a subsequent release.

#7 – Handling index rebuilds

There are certain operations that require an index to be rebuilt, but SSDT will script this without WITH (ONLINE=ON). This clause is a feature of SQL Server Enterprise edition that avoids the need for downtime by allowing an index to remain accessible while it is rebuilt.

In SSDT, you can use a deployment contributor to modify the CREATE INDEX statement at the point that it is being generated. This involves authoring and compiling a C# library to be used at deployment time in conjunction with the SqlPackage command.

With ReadyRoll, you simply edit the generated migration script to include the WITH (ONLINE=ON) clause within the index change statement. For example:
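Assuming a hypothetical index on dbo.[Order], the edited statement might look like this (ONLINE index operations require Enterprise edition; DROP_EXISTING assumes the index already exists and is being redefined):

-- Generated by ReadyRoll, with the ONLINE option added by hand
CREATE NONCLUSTERED INDEX IX_Order_CustomerId
    ON dbo.[Order] (CustomerId)
    WITH (ONLINE = ON, DROP_EXISTING = ON);
GO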

[Source: Top 5 Automated Deployment Gotchas session at SQLBits 2016 by Simon D’Morias]

#8 – Avoiding table locking

When you use a pre- or post-deployment script to update a large number of rows at once, running all of the DML in a single transaction and batch will result in a table lock. Simon D’Morias provides an excellent example script that demonstrates this behaviour:

“The idea was to demonstrate how small batches need to be to prevent table locks happening (it hard coded to 5000 rows/locks). Therefore you really need to use tiny batches to keep a highly active table online during a batch population.”

To reproduce this behaviour, use SSMS to run the following script, and note the number of rows returned from each test set (see the inline comments for more information).
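The following is a simplified illustration rather than the original script from the session; it compares the locks held by one large UPDATE with those held by a small batched UPDATE:

SET NOCOUNT ON;

-- Set up a throwaway table with 100,000 rows
IF OBJECT_ID('dbo.LockDemo') IS NOT NULL DROP TABLE dbo.LockDemo;
CREATE TABLE dbo.LockDemo (Id INT IDENTITY PRIMARY KEY, Payload VARCHAR(20) NULL);
INSERT INTO dbo.LockDemo (Payload)
SELECT TOP (100000) 'x' FROM sys.all_objects AS a CROSS JOIN sys.all_objects AS b;

-- Test 1: a single UPDATE of every row. Lock escalation kicks in at roughly 5,000 locks,
-- so the whole table ends up exclusively locked for the duration of the transaction.
BEGIN TRANSACTION;
UPDATE dbo.LockDemo SET Payload = 'one-shot';
SELECT resource_type, COUNT(*) AS locks_held
FROM sys.dm_tran_locks WHERE request_session_id = @@SPID
GROUP BY resource_type;       -- expect an exclusive OBJECT (table) lock
ROLLBACK;

-- Test 2: one small batch in its own transaction. Only row/page locks (plus an intent
-- lock on the table) are held, so other sessions can carry on working.
BEGIN TRANSACTION;
UPDATE TOP (1000) dbo.LockDemo SET Payload = 'batched';
SELECT resource_type, COUNT(*) AS locks_held
FROM sys.dm_tran_locks WHERE request_session_id = @@SPID
GROUP BY resource_type;       -- expect KEY/PAGE locks and an intent (IX) table lock only
ROLLBACK;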

The first UPDATE statement (Test 1) demonstrates the effect of table locking as a consequence of thousands of rows being updated at once. In various scenarios, SSDT might generate a statement like this when a change to a table is required.

The second UPDATE (Test 2) demonstrates how updating the table in smaller batches can result in a much more desirable locking behaviour.

To avoid table locking when updating large amounts of data, replace the single UPDATE statement with a loop that processes the data updates in batches:
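A sketch of such a batched migration (the table, column, and WHERE clause are placeholders, and the migration header attributes are illustrative of ReadyRoll’s format rather than copied from a real project):

-- <Migration ID="00000000-0000-0000-0000-000000000000" TransactionHandling="Custom" />
-- Each batch commits separately, so locks are only ever held on a few thousand rows
-- at a time and escalation to a table lock is avoided.
DECLARE @BatchSize    INT = 1000,
        @RowsAffected INT = 1;

WHILE @RowsAffected > 0
BEGIN
    BEGIN TRANSACTION;

    UPDATE TOP (@BatchSize) dbo.[Order]
    SET    OrderStatus = 'Archived'
    WHERE  OrderStatus = 'Closed';   -- the predicate must exclude rows already updated

    SET @RowsAffected = @@ROWCOUNT;

    COMMIT;
END;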

Note: The above code will work with both SSDT and ReadyRoll; however, it contains a <Migration/> metadata tag in the header that is specific to ReadyRoll. This metadata ensures that the script is executed once only, and instructs ReadyRoll to run it without a user transaction. With SSDT, you’ll need to remove the script from the project after the change has been applied, to prevent the operation from being re-executed.

[Source: Top 5 Automated Deployment Gotchas session at SQLBits 2016 by Simon D’Morias]

#9 – Preventing log growth when adding columns

When SSDT adds a column in the middle (rather than the end) of a table, it creates a new table, inserts the data from the old table, then drops the old table and renames the new table to match the name of the old one.
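Schematically, the generated portion of the deployment script looks something like this (simplified, with placeholder names, and omitting details such as IDENTITY_INSERT handling):

BEGIN TRANSACTION;

-- SSDT creates a temporary copy of the table with the new column in the desired position
CREATE TABLE dbo.tmp_ms_xx_Customer
(
    CustomerId INT NOT NULL,
    MiddleName VARCHAR(50) NULL,   -- the newly inserted column
    Surname    VARCHAR(50) NOT NULL
);

-- ...copies every row across in one statement, which is what drives the log growth...
INSERT INTO dbo.tmp_ms_xx_Customer (CustomerId, Surname)
SELECT CustomerId, Surname FROM dbo.Customer;

-- ...and then swaps the tables over
DROP TABLE dbo.Customer;
EXECUTE sp_rename N'dbo.tmp_ms_xx_Customer', N'Customer';

COMMIT;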

This is fine for smaller tables, but with very large tables it can result in an unmanageable amount of log growth. The solution in SSDT is to write a deployment contributor that searches for these table migrations and rewrites the insert portion so that it runs in batches. This can be complicated, but SQL Server developer Ed Elliott shows how to do it in a useful walkthrough on his blog.

Using ReadyRoll, you can edit the generated migration script to perform the operation in batches, rather than with one monolithic insert statement. For an example of how to achieve this, see the solution outlined in #8 Avoiding table locking.

[Source: Deploy INSERTs in Batches blog post by Ed Elliott]

#10 – Preserving the start value of a SQL Sequence object

Some deployment targets may have sequences that begin at different values, which need to be preserved. For example, if you want to migrate all of your IDENTITY columns to use SQL Sequence objects instead, your start value will probably vary depending on the number of rows in the target database environment.

Because SSDT tries to make the source and the target the same, the start value of the sequence in the target environment may be reset with each deployment, which isn’t always desirable.

The solution is either to use a deployment filter to ignore the starting value, or to use the SQL Server system stored procedure sp_sequence_get_range. Front-end developer Matthew Tester shows how to do this in his instructive blog post.

With ReadyRoll, the same result can be achieved by simply editing the generated migration script to retrieve the current identity value of the table and use it as the starting value for the newly-created sequence object:
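A sketch of such an edited migration, using a hypothetical dbo.Invoice table and sequence name (CREATE SEQUENCE requires a constant start value, hence the dynamic SQL):

-- Manually edited migration: start the new sequence from the table's current identity value
DECLARE @StartValue BIGINT = IDENT_CURRENT('dbo.Invoice') + 1;

DECLARE @Sql NVARCHAR(MAX) =
    N'CREATE SEQUENCE dbo.InvoiceIdSequence
        AS BIGINT
        START WITH ' + CAST(@StartValue AS NVARCHAR(20)) + N'
        INCREMENT BY 1;';

EXEC sys.sp_executesql @Sql;
GO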

ReadyRoll won’t cause the sequence to be reset with each deployment, because the script will execute a single time only against a given target environment.

Note: If you’re deploying to a highly-active Production environment, there are some caveats with the IDENT_CURRENT function to be aware of which could result in the wrong number being returned. You may want to consider adding logic to lock the table within your migration to ensure deployment atomicity.

[Source: How to deploy a SQL Sequence using SSDT without it always resetting blog post by Matthew Tester]

Conclusion

Although SSDT provides a way for databases to be designed and deployed in Visual Studio, there are occasional deployment issues which can, if not detected during the development lifecycle, break the deployment pipeline. Some issues can be worked around using pre- or post-deployment scripts; other issues require extensions to the build engine itself to be developed in order to resolve them. Some scenarios cannot be resolved within SSDT at all, requiring scripts to be run outside of the deployment pipeline before the SSDT deployment. To find out more about SSDT database projects, visit the product website on MSDN.

ReadyRoll provides an alternative way to work on databases in VS that overcomes those deployment problems through customization of the migrations generated by its schema comparison engine.

