Quantifying Text Differences in TSQL

In TSQL there is a limit to the way you can compare text strings. They're either equal or not. Sooner or later, usually when cleaning data, something more subtle is required!

Last revised: Feb 3rd 2014

When you compare two pieces of text, or strings, in an expression in SQL Server, you will just get a value of ‘true’ returned if the two were the same, ‘false’ if they were not, or ‘null’ if one of the pieces of text was null.

A simple matter of an extra space character is often enough to tip the balance. This is quite unlike real life: if we look at two pieces of text, we judge them to be the same, almost the same, quite similar, nothing like each other, or many shades in-between. Surely, we need to quantify differences?
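
To illustrate the point, here is a minimal sketch (the sample strings are invented purely for the demonstration) showing the three possible outcomes of a straightforward comparison:

-- A comparison in TSQL can only come out as true, false or unknown (null);
-- there are no shades of similarity.
DECLARE @a NVARCHAR(100) = N'The quick brown fox';
DECLARE @b NVARCHAR(100) = N'The quick  brown fox'; -- an extra space
DECLARE @c NVARCHAR(100) = NULL;

SELECT CASE WHEN @a = @b THEN 'true'
            WHEN NOT (@a = @b) THEN 'false'
            ELSE 'unknown (null)' END AS ComparedWithExtraSpace,
       CASE WHEN @a = @c THEN 'true'
            WHEN NOT (@a = @c) THEN 'false'
            ELSE 'unknown (null)' END AS ComparedWithNull;
-- gives 'false' and 'unknown (null)' respectively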

[Image caption: Text difference algorithms are as old as the hills, but not in TSQL]

In IT applications, there are many occasions when one needs more of a measure of the difference between two pieces of text than a simple ‘yes, they’re the same / no, they’re not’. A typical problem is in finding duplicates in database entries where the understanding of ‘duplicate’ allows for minor differences. Finding a reliable algorithm for quantifying similarity in text is quite hard, especially one that is set-based. TSQL has no native support for regular expressions or the other facilities that make this sort of work easier in other languages.

I find this problem quite intriguing. I think that there is a general consensus that the Levenshtein string distance algorithm is the best for giving you the difference on a character-by-character basis, and I provide some code at the end for doing this. The algorithm was developed by Vladimir Levenshtein in 1965. It tells you the number of edits required to turn one string into another by breaking string transformation down into three basic operations: adding, deleting, and replacing a character. Each operation is assigned a cost of 1; leaving a character unchanged has a cost of 0. There are other algorithms, but I’m not at all convinced by ‘soundex’ algorithms: they don’t seem to help much.
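
For example, turning ‘kitten’ into ‘sitting’ needs three such edits (substitute ‘k’ with ‘s’, substitute ‘e’ with ‘i’, and insert a final ‘g’), so the Levenshtein distance between the two strings is 3.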

I decided that what I wanted was a difference based on words rather than characters. I find the solution, the difference counter that I give below, pretty handy, though it sometimes gives a score for differences that I don’t like. Try it yourselves with a variety of strings and you’ll see it makes a pretty good attempt. It is, of course, slow, because of the tedium of breaking down text into words and white-space. In normal use, this is only done once, when importing text into the database, at which point the words are placed in an ‘inversion table’. One can then use this data to test the similarity of the original text, which is much faster. Just so as to include those stuck on SQL Server 2000, I’ve made the function use an NTEXT parameter rather than a VARCHAR(MAX), though the latter would have made for a simpler routine.
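
By way of illustration, an inversion table might look something like the sketch below; the table name, the columns and the document IDs are all invented for the example rather than taken from anything in this article.

-- A sketch of the 'inversion table' idea: each document's words are stored
-- once, with their position, so that later similarity checks can be set-based.
CREATE TABLE dbo.WordInversion
  (DocumentID INT NOT NULL,
   WordNumber INT NOT NULL,
   Word NVARCHAR(100) NOT NULL,
   PRIMARY KEY (DocumentID, WordNumber));

-- A rough similarity measure: the proportion of distinct words that two
-- documents (the IDs 1 and 2 are sample values) have in common
SELECT COUNT(*) * 1.0
       / NULLIF((SELECT COUNT(DISTINCT Word)
                 FROM dbo.WordInversion
                 WHERE DocumentID IN (1, 2)), 0) AS SharedWordRatio
FROM (SELECT DISTINCT Word FROM dbo.WordInversion WHERE DocumentID = 1) AS a
  JOIN (SELECT DISTINCT Word FROM dbo.WordInversion WHERE DocumentID = 2) AS b
    ON a.Word = b.Word;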

“Cleaning data is not an exact science”

In reality, every time one comes across a requirement to check for differences, there are subtle requirements that are never the same. Cleaning data is not an exact science. I generally prefer to ignore ‘white-space’, including new-lines and punctuation, when checking for differences. My approach is to break down text into words and ‘not-words’, or white-space. I refer to these as different types of token. The table function I give below allows you to define a word in terms of the characters that make up a word; this differs between languages. The routine is generally, though not always, much faster if one uses a ‘number table’, but I decided that creating one for this article would be a distraction for the reader.
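
As a rough sketch of the tokenizing idea only (this is not the table function given later in the article; the function name, and the choice of characters that count as part of a word, are assumptions made for the illustration):

-- Splits text into alternating 'word' and 'not-word' tokens.
-- LEN() ignores trailing spaces, so any trailing white-space is simply dropped.
CREATE FUNCTION dbo.WordTokenizerSketch (@text NVARCHAR(MAX))
RETURNS @tokens TABLE
  (TokenNo INT IDENTITY(1,1) PRIMARY KEY,
   TokenType VARCHAR(8) NOT NULL,
   Token NVARCHAR(MAX) NOT NULL)
AS
BEGIN
  -- the characters that count as part of a 'word'; adjust to taste or language
  DECLARE @WordChars NVARCHAR(100) =
    N'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789''-';
  DECLARE @len INT = LEN(@text), @pos INT = 1;
  WHILE @pos <= @len
  BEGIN
    DECLARE @start INT = @pos;
    DECLARE @isWord BIT =
      CASE WHEN CHARINDEX(SUBSTRING(@text, @pos, 1), @WordChars) > 0
           THEN 1 ELSE 0 END;
    -- extend the token while the character class (word / not-word) is unchanged
    WHILE @pos <= @len
      AND @isWord = CASE WHEN CHARINDEX(SUBSTRING(@text, @pos, 1), @WordChars) > 0
                         THEN 1 ELSE 0 END
      SET @pos += 1;
    INSERT INTO @tokens (TokenType, Token)
      VALUES (CASE WHEN @isWord = 1 THEN 'word' ELSE 'not-word' END,
              SUBSTRING(@text, @start, @pos - @start));
  END;
  RETURN;
END;
GO
SELECT TokenNo, TokenType, Token
FROM dbo.WordTokenizerSketch(N'Hello, wide world!');
-- gives 'Hello', ', ', 'wide', ' ', 'world' and '!' as six tokens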

With the use of the ‘parsing’ table-function, I then do a simple outer join between the two collections of words, and count the number of times that the minimum ‘best-fit’ between words changes in the sequence of words. This is, of course, an approximation: I should be using sliders and other devices that use iteration. At some point one has to hand over to the computer scientists. I tend to stop at the point where the routine does the job I want.
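
A much-simplified, strictly positional version of that comparison might look like the sketch below. This is not the routine described here, which looks for a best fit rather than insisting on exact positions; it leans on the hypothetical dbo.WordTokenizerSketch function from the earlier sketch, and the sample strings are invented.

DECLARE @first  NVARCHAR(MAX) = N'The quick brown fox jumps over the lazy dog';
DECLARE @second NVARCHAR(MAX) = N'The quick brown cat jumped over the lazy dog';

WITH FirstWords AS
  (SELECT ROW_NUMBER() OVER (ORDER BY TokenNo) AS WordNo, Token
   FROM dbo.WordTokenizerSketch(@first)  WHERE TokenType = 'word'),
     SecondWords AS
  (SELECT ROW_NUMBER() OVER (ORDER BY TokenNo) AS WordNo, Token
   FROM dbo.WordTokenizerSketch(@second) WHERE TokenType = 'word')
SELECT COUNT(*) AS WordDifferences
FROM FirstWords f
  FULL OUTER JOIN SecondWords s ON f.WordNo = s.WordNo
WHERE f.Token <> s.Token                  -- a different word at the same position
   OR f.Token IS NULL OR s.Token IS NULL; -- a word present in only one text
-- gives 2 for these sample strings ('fox'/'cat' and 'jumps'/'jumped')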

As a flourish, I’ve provided, at the end, a variation of the function that provides a single-row table giving the first point at which the two samples of text diverge. It is really just a by-product of the first routine but I slipped it in to give a suggestion of the many ways the routine can be adapted for particular purposes. It is surprisingly handy for applications such as summary reports of the latest changes made to stored procedures!
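
Under the same simplified positional model (again, this is not the single-row table function mentioned above; it reuses the hypothetical dbo.WordTokenizerSketch and invented sample strings), finding the first point of divergence is just a matter of taking the lowest mismatching word position:

DECLARE @first  NVARCHAR(MAX) = N'The quick brown fox jumps over the lazy dog';
DECLARE @second NVARCHAR(MAX) = N'The quick brown cat jumped over the lazy dog';

WITH FirstWords AS
  (SELECT ROW_NUMBER() OVER (ORDER BY TokenNo) AS WordNo, Token
   FROM dbo.WordTokenizerSketch(@first)  WHERE TokenType = 'word'),
     SecondWords AS
  (SELECT ROW_NUMBER() OVER (ORDER BY TokenNo) AS WordNo, Token
   FROM dbo.WordTokenizerSketch(@second) WHERE TokenType = 'word')
SELECT MIN(COALESCE(f.WordNo, s.WordNo)) AS FirstPointOfDivergence
FROM FirstWords f
  FULL OUTER JOIN SecondWords s ON f.WordNo = s.WordNo
WHERE f.Token <> s.Token
   OR f.Token IS NULL OR s.Token IS NULL;
-- gives 4 for these sample strings (the fourth word is the first to differ)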

First, the ‘classic’ Levenshtein string distance in TSQL (using strings instead of arrays)
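
A minimal dynamic-programming version might look like the sketch below. It uses table variables to hold the two working rows of the edit-distance matrix rather than emulating arrays with strings, and the function name and parameter sizes are assumptions made for the illustration.

CREATE FUNCTION dbo.LevenshteinSketch (@s NVARCHAR(500), @t NVARCHAR(500))
RETURNS INT
AS
BEGIN
  IF @s IS NULL OR @t IS NULL RETURN NULL;
  DECLARE @sLen INT = LEN(@s), @tLen INT = LEN(@t);
  IF @sLen = 0 RETURN @tLen;
  IF @tLen = 0 RETURN @sLen;

  -- the previous and current rows of the dynamic-programming matrix
  DECLARE @prev TABLE (j INT PRIMARY KEY, cost INT NOT NULL);
  DECLARE @curr TABLE (j INT PRIMARY KEY, cost INT NOT NULL);
  DECLARE @i INT = 1, @j INT, @k INT = 0;

  -- row 0: the cost of building the first j characters of @t from nothing
  WHILE @k <= @tLen
  BEGIN
    INSERT INTO @prev (j, cost) VALUES (@k, @k);
    SET @k += 1;
  END;

  WHILE @i <= @sLen
  BEGIN
    DELETE FROM @curr;
    INSERT INTO @curr (j, cost) VALUES (0, @i); -- deleting the first i characters of @s
    SET @j = 1;
    WHILE @j <= @tLen
    BEGIN
      DECLARE @subCost INT =
        CASE WHEN SUBSTRING(@s, @i, 1) = SUBSTRING(@t, @j, 1) THEN 0 ELSE 1 END;
      DECLARE @del INT, @ins INT, @sub INT;
      SELECT @del = cost + 1        FROM @prev WHERE j = @j;     -- delete a character
      SELECT @ins = cost + 1        FROM @curr WHERE j = @j - 1; -- insert a character
      SELECT @sub = cost + @subCost FROM @prev WHERE j = @j - 1; -- replace (or keep) one
      INSERT INTO @curr (j, cost)
        VALUES (@j, CASE WHEN @del <= @ins AND @del <= @sub THEN @del
                         WHEN @ins <= @sub THEN @ins
                         ELSE @sub END);
      SET @j += 1;
    END;
    DELETE FROM @prev;
    INSERT INTO @prev (j, cost) SELECT j, cost FROM @curr;
    SET @i += 1;
  END;
  RETURN (SELECT cost FROM @prev WHERE j = @tLen);
END;
GO
SELECT dbo.LevenshteinSketch(N'kitten', N'sitting') AS EditDistance; -- gives 3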

…and now the equivalent system for detecting word differences