On Comparing Tables in SQL Server

There are several reasons why you might need to compare tables or results.

  •  Sometimes, one just needs to know whether the tables contain data that is the same or different; no details: just yes or no. This is typical with test assertions, where you just need to know whether your routine or batch produces a result with the right data in it when provided with particular values for the parameters. It is either wrong or right.
  • Occasionally, you need to know which rows have changed without, maybe, being particular about which columns changed and how.
  • There are times when you have a table that is large in terms of both columns and rows, and you need something that shows you specifically the column(s) whose values changed. You might also want this when tracking down a bug in a routine that might otherwise require you to waste time scanning ‘by eye’.

We’ll be tackling these three rather different tasks in SQL.

If two tables have a different number of rows, they can’t, of course, be the same. However, there are times when you need to know whether Table_B contains all the rows of Table_A, without differences. If you want more detail, you might even wish to know which rows in either table aren’t in common, or which common rows, as identified by the primary key, are different. And why stop at comparing just two tables? There are ways of comparing as many as you need (as, for example, when you’re comparing the metadata in several database snapshots). Yes, there are many variations.

You’ve got tools and features to do this stuff, surely?

There is always a place for tools like SQL Data Compare, TableDiff, tSQLt or Change Data Capture. A lot depends on the circumstances and the type of task. The problem of auditing changes to data in a live system is a separate topic, as is the synchronization of tables and databases. Comparison of XML documents is also out of scope. We are going to deal purely with the routine comparison of the data in tables.

I’m most likely to use TSQL techniques to compare tables when:


In the course of developing a database, a lot of tables get compared. It’s not just the big stuff: every table-valued function, for example, needs a test harness in the build script that makes sure it does what you think it should do under all conceivable test circumstances, incorporating all the nasty edge cases where it has been caught out by the testers in the past. Every stored procedure needs a test to make sure that the process it executes does exactly what is intended and nothing else.

There was a time when the build activity was rather leisurely, but when you’ve got a nightly build and integration test, it is best to automate it entirely and be rid of the chore.


When you are automating the loading of data into a system, you often need to test various conditions. Do you need to update existing versions of the rows as well as inserting the new ones? Do you need a trap to prevent duplicate entries, or even  delete  existing entries?

Setting up the test data

The scripts in this article all use a table from the venerable PUBS database. We’re going to use the authors table, but will beef up the number of rows a bit to 5000 in order to get a size that is a bit more realistic. I’ve provided the source for the table with the article.

I then created a copy of the table …
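(The original script is not included in this extract; a minimal sketch of taking such a copy, assuming the `dbo` schema, would be something like this. Note that SELECT INTO copies the columns and data but not the keys, constraints or indexes.)

```sql
-- Take a straight copy of the data into a new table, authorsCopy.
SELECT *
  INTO dbo.authorsCopy
  FROM dbo.authors;
```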

And then altered some of the rows.
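(Again, the original alteration script is elided; a sketch of the sort of change involved, touching the address column of an arbitrary subset of rows, might be:)

```sql
-- Alter the address field of some rows so that the two tables
-- differ slightly. The choice of rows here is arbitrary.
UPDATE dbo.authorsCopy
   SET address = address + ' Apt 2'
 WHERE au_id LIKE '2%';  -- an arbitrary subset of the keys
```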

So now the two tables should be predominantly the same, with a few minor changes in the address field.

Testing to see if tables are different

Sometimes you just want to know if tables are the same. An example of this would be checking that a TVF is working properly by comparing its result to that of an existing table with the correct results. The usual way to do this is with the CHECKSUM() group of functions in SQL Server, because they are very quick.

Using Checksums

You can use the BINARY_CHECKSUM function to check whether tables are the same: well, roughly the same. It is fast, but it is not perfect, as I’ll demonstrate in a moment. For a series of tests, for example, it is generally sufficient.

For this to work, your table must not have TEXT, NTEXT, IMAGE or CURSOR (or a SQL_VARIANT with any of these types) as its base type. Nowadays, this is increasingly rare, but if you have any sort of complication, you can coerce any column with one of the unsupported types into a supported type. In practice, I generally use a routine that checks the metadata and does this automatically, but it isn’t pretty.

In a working version you would probably want to specify the list of columns, especially if you are having to do an explicit coercion of datatypes, or if you are checking just certain columns.
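A sketch of the whole-table comparison might look like this: CHECKSUM_AGG() folds the per-row BINARY_CHECKSUM() values into a single value for each table, and the two values are then compared.

```sql
-- Compare an aggregated checksum of every row in each table.
-- If the two aggregate values differ, so do the tables; if they
-- match, the tables are very probably (not certainly) the same.
IF (SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM dbo.authors)
 = (SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM dbo.authorsCopy)
  PRINT 'Tables are probably identical';
ELSE
  PRINT 'Tables are different';
```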

Neither BINARY_CHECKSUM() nor its plain sister CHECKSUM() is completely accurate in telling you whether something has changed in a row or table. We’ll show this by looking at the common words of the English language, contained in a table called CommonWords. You’d expect them all to have a different checksum, but that’s not the case.
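A query along these lines will find the collisions (I’m assuming here that CommonWords has a single string column, which I’ve called `word`):

```sql
-- Group the words by their checksum: any group with more than one
-- distinct word is a collision.
SELECT BINARY_CHECKSUM(word) AS checksum_value,
       COUNT(*)              AS collisions
  FROM dbo.CommonWords
 GROUP BY BINARY_CHECKSUM(word)
HAVING COUNT(*) > 1
 ORDER BY COUNT(*) DESC;
```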

… giving the result …

Armed with this information, we can quickly demonstrate that different strings can have the same checksum.

All these will have the same checksum, as would …


… gives you different checksums like this…

The sister function CHECKSUM()

… finds them to be all the same, because it is using the current collation and my collation for the database is case-insensitive. CHECKSUM() aims to find strings equal in checksum if they are equal in a string comparison.
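This behaviour is easy to demonstrate with any word (the particular string here is my own choice):

```sql
-- On a case-insensitive collation the three CHECKSUM() values are
-- identical, whereas BINARY_CHECKSUM() distinguishes the casings.
SELECT CHECKSUM('Bread')        AS Checksum1,
       CHECKSUM('bread')        AS Checksum2,
       CHECKSUM('BREAD')        AS Checksum3,
       BINARY_CHECKSUM('Bread') AS BinaryChecksum1,
       BINARY_CHECKSUM('bread') AS BinaryChecksum2;
```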

So, the best you can say is that there is a strong likelihood that the tables will be the same, but if you need to be absolutely certain, then use another algorithm.

If you don’t mind differences in case in text strings, then you can use CHECKSUM() instead of BINARY_CHECKSUM().

The great value of this technique is that, once you’ve calculated the checksum you need, you can store it as a value in a column of a table instead of needing the original table, making the whole process even faster. If you are storing the checksum value returned by CHECKSUM(), make sure you check it against the live table with a checksum generated under the same collation.

Here is a simple example of a ‘what’s changed’ routine.
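(The original routine is not in this extract; a sketch of the idea, using a selection of the columns of the pubs authors table and a baseline table name of my own invention, might run as follows.)

```sql
-- Establish a baseline of per-row checksums. An explicit column
-- list is used so that exactly the same expression can be reused.
SELECT au_id,
       BINARY_CHECKSUM(au_lname, au_fname, phone,
                       address, city, state, zip) AS row_checksum
  INTO dbo.authorsChecksum
  FROM dbo.authors;

-- Later: report the rows whose data no longer matches the baseline.
SELECT a.au_id
  FROM dbo.authors AS a
  JOIN dbo.authorsChecksum AS c ON c.au_id = a.au_id
 WHERE BINARY_CHECKSUM(a.au_lname, a.au_fname, a.phone,
                       a.address, a.city, a.state, a.zip)
       <> c.row_checksum;
```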

…which gives…

And then we just tidy up.

Of course, you could use a trigger but sometimes you might want just a daily or weekly report of changes without the intrusion of a trigger into a table.

Using XML

One general possibility is to compare the XML version of the two tables, since this does the datatype translation into strings for you. It is slower than the Checksum approach but more reliable.

Here, you can specify the type of comparison by specifying the collation.

or you can do this, comparing data in tables ..

… by calculating a checksum of the XML version of the table. This allows you to store the checksum of the table you are comparing to.
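A sketch of the idea: each table is reduced to a single XML string, and a checksum of that string stands in for the table. (The ORDER BY matters, since the checksum of the string is sensitive to row order, and the usual CHECKSUM() collation caveats still apply.)

```sql
-- Reduce each table to one XML string and checksum it. The stored
-- checksum of a reference table can replace the left-hand subquery.
IF CHECKSUM((SELECT * FROM dbo.authors
              ORDER BY au_id
              FOR XML PATH('row')))
 = CHECKSUM((SELECT * FROM dbo.authorsCopy
              ORDER BY au_id
              FOR XML PATH('row')))
  PRINT 'Tables are the same';
ELSE
  PRINT 'Tables are different';
```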

Finding where the differences are in a table

The simplest task is where the tables have an identical number of rows, and an identical table structure. Sometimes you want to know which rows are different, and which are missing.  You have, of course, to specify what you mean by ‘the same’, particularly if the two tables have different columns. The method you choose to do the comparison is generally determined by these details.

The UNION ALL … GROUP BY technique

The classic approach to comparing tables is to use a UNION ALL of the SELECT statements that include the columns you want to compare, and then GROUP BY those columns. Obviously, for this to work, there must be a column with unique values in the GROUP BY, and the primary key is ideal for this. Neither table is allowed duplicates. If they have different numbers of rows, these will show up as differences.
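A sketch of the technique against our two tables: every row that exists identically in both tables appears twice in the UNION ALL, so the groups with a count of one are precisely the rows that differ or are missing.

```sql
-- Rows present in only one of the two tables (by value, not just key).
SELECT MAX(source_table) AS source_table,
       au_id, au_lname, au_fname, phone, address, city, state, zip
  FROM (SELECT 'authors' AS source_table,
               au_id, au_lname, au_fname, phone, address, city, state, zip
          FROM dbo.authors
        UNION ALL
        SELECT 'authorsCopy',
               au_id, au_lname, au_fname, phone, address, city, state, zip
          FROM dbo.authorsCopy) AS BothTables
 GROUP BY au_id, au_lname, au_fname, phone, address, city, state, zip
HAVING COUNT(*) = 1
 ORDER BY au_id;
```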

If one of the tables has a duplicate, then it will give you a false result, as here, where you have two tables that are very different and yet the result tells you that they are the same! For this reason, it is a good idea to include the column(s) that constitute the primary key, and to include each row only once!

… giving …

The technique can be used for comparing more than two tables.  You’d just need to UNION ALL the tables you need to compare and change the HAVING clause to filter just the rows that aren’t in all the tables.


You can now use the much cleaner and slightly faster EXCEPT.
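A minimal sketch of the comparison:

```sql
-- The distinct rows of authors that do not appear in authorsCopy.
SELECT * FROM dbo.authors
EXCEPT
SELECT * FROM dbo.authorsCopy;
```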

This shows all the rows in authors that are not found in authorsCopy. If they are the same, it will return no rows.

I’m only using SELECT * to keep things simple for the article. You’d normally itemize all the columns you want to compare.

This will only work reliably for tables with the same number of rows, because EXCEPT returns any distinct rows from the query on the left of the EXCEPT operand that are not also found in the query on the right: extra rows in authors would be reported as differences, but extra rows in authorsCopy would pass unnoticed.

This, hopefully, shows what I mean.

…yields …

…whereas …

..results in …

This feature of EXCEPT can be used to advantage if you particularly wish to check that TableA is contained within TableB: even where the tables have a different number of rows, you can still compare them.
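A sketch of such a containment check on our two tables:

```sql
-- Is every row of authors also present in authorsCopy, regardless
-- of any extra rows that authorsCopy may have?
IF NOT EXISTS (SELECT * FROM dbo.authors
               EXCEPT
               SELECT * FROM dbo.authorsCopy)
  PRINT 'authors is contained within authorsCopy';
ELSE
  PRINT 'authors has rows that authorsCopy lacks';
```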

You might not want to compare all columns. You should always specify the columns you wish to compare to determine ‘sameness’. If you wanted to compare only the address, for example, you’d use …
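… something along these lines, keeping the primary key so you can tell which rows the differing addresses belong to:

```sql
-- Compare just the key and the address column.
SELECT au_id, address FROM dbo.authors
EXCEPT
SELECT au_id, address FROM dbo.authorsCopy;
```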

The Outer Join technique

There is also the technique of the outer join. This is a more general technique that gives you additional facilities. If, for example, you use the full outer join, then you can get the unmatched rows in either table. This gives you a ‘before’ and ‘after’ view of alterations in the data. It is used more generally in synchronisation to tell you what rows to delete, insert and update.

We’ll just use the technique to get the altered rows in authorsCopy.
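A sketch, comparing only the address column. The ISNULL() calls with an impossible sentinel value (‘~’ here, my arbitrary choice) are one crude way of making NULLs compare as equal; the extra IS NULL tests pick up rows unmatched on the key.

```sql
-- 'Before' and 'after' view of the altered or unmatched rows.
SELECT COALESCE(a.au_id, ac.au_id) AS au_id,
       a.address  AS old_address,
       ac.address AS new_address
  FROM dbo.authors AS a
  FULL OUTER JOIN dbo.authorsCopy AS ac ON ac.au_id = a.au_id
 WHERE ISNULL(a.address, '~') <> ISNULL(ac.address, '~')
    OR a.au_id IS NULL
    OR ac.au_id IS NULL;
```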

As you can see, there are difficulties with null columns with this approach, but it is as fast as the others and it gives you rather more versatility for your comparisons.

Locating the differences between tables

You may need a quick way of seeing which column and row has changed. A very ingenious way of doing this, using XML, was published recently: ‘Compare Tables And Report The Differences By Using Xml To Pivot The Data’ (editor’s note: link deprecated). It is clever, but too slow. The same thing can be done purely in SQL. Basically, you perform a column-by-column comparison of data based on the primary key, using key/value pairs. If you do the entire table at once, it is rather slow: the best trick is to do this only on those rows where you know there is a difference.

In our example, this would give:

This technique rotates the rows of the tables that have differences into an Entity-attribute-value (EAV) table so that differences within a row can be compared and displayed.  It does this rotation by UNIONing the name and string-value of each column.  This technique works best where there are not a large number of differences.
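A sketch of the idea, using a handful of the columns of the authors table. EXCEPT finds the keys of the differing rows cheaply, and CROSS APPLY (VALUES …) then plays the same rotating role as the UNION of name/value pairs described above:

```sql
-- Keys of the rows with any difference, found cheaply first.
WITH DifferingKeys AS
  (SELECT au_id
     FROM (SELECT * FROM dbo.authors
           EXCEPT
           SELECT * FROM dbo.authorsCopy) AS diffs)
-- Rotate just those rows into column-name/value pairs and compare.
SELECT a.au_id, v.col, v.old_value, v.new_value
  FROM dbo.authors AS a
  JOIN dbo.authorsCopy AS c ON c.au_id = a.au_id
  JOIN DifferingKeys  AS k ON k.au_id = a.au_id
  CROSS APPLY (VALUES ('address', a.address, c.address),
                      ('city',    a.city,    c.city),
                      ('phone',   a.phone,   c.phone),
                      ('zip',     a.zip,     c.zip))
         AS v (col, old_value, new_value)
 WHERE ISNULL(v.old_value, '~') <> ISNULL(v.new_value, '~')
 ORDER BY a.au_id, v.col;
```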


There is no single ideal method of comparing the data in tables or results. One of a number of techniques will be the most relevant for any particular task; it all comes down to precisely the answers you need and the type of task. Do you need a quick check that a table hasn’t changed, or do you need to know precisely what the changes are? SQL is naturally fast at this sort of task, and comparing tables and results is familiar work to many database developers.

If there is a general rule, I’d say that exploratory or ad-hoc work needs a tool such as SQL Data Compare, whereas a routine process within the database requires a hand-cut SQL technique.

The source to the table, and the insert-statements to fill it to 5000 rows is in the link below.