Last revised: Feb 3rd 2014
When you compare two pieces of text, or strings, in SQL Server in an expression, you will just get a value of ‘true’ returned if the two are the same, ‘false’ if they are not, or ‘null’ if one of the pieces of text is null.
A simple matter of an extra space character is often enough to tip the balance. This is quite unlike real life: if we look at two pieces of text, we judge them to be the same, almost the same, quite similar, nothing like each other, or many shades in-between. Surely we need a way to quantify the differences?
Text difference algorithms are as
old as the hills, but not in TSQL
In IT applications, one often needs more of a measure of the difference between two pieces of text than a simple ‘yes, they’re the same / no, they are not’. A typical problem is finding duplicates in database entries, where the understanding of ‘duplicate’ allows for minor differences. Finding a reliable algorithm for quantifying similarity in text is quite hard, especially one that is set-based. TSQL has no native support for regular expressions, or the other means of making life easier for this sort of work.
I find this problem quite intriguing. I think that there is a general consensus that the Levenshtein string distance algorithm is the best for giving you the difference on a character-by-character basis, and I provide some code for it at the end. The algorithm was developed by Vladimir Levenshtein in 1965. It tells you the number of edits required to turn one string into another by breaking down string transformation into three basic operations: adding, deleting, and replacing a character. Each operation is assigned a cost of 1; leaving a character unchanged has a cost of 0. There are other algorithms, but I’m not at all convinced by ‘soundex’ algorithms: they don’t seem to help much.
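To make the edit-cost idea concrete before diving into the TSQL, here is a minimal sketch of the same two-row dynamic-programming scheme in Python (the function name `levenshtein` is my own; a scripting sketch is easier to follow than the string-as-array tricks the TSQL version needs):

```python
def levenshtein(source: str, target: str) -> int:
    """Edit distance: the minimum number of single-character
    insertions, deletions and replacements (each costing 1)
    needed to turn source into target."""
    # previous_row[j] holds the distance from the first i-1 source
    # characters to the first j target characters
    previous_row = list(range(len(target) + 1))
    for i, s_char in enumerate(source, start=1):
        current_row = [i]  # distance from source[:i] to an empty target
        for j, t_char in enumerate(target, start=1):
            cost = 0 if s_char == t_char else 1  # replacement is free on a match
            current_row.append(min(current_row[j - 1] + 1,        # insertion
                                   previous_row[j] + 1,           # deletion
                                   previous_row[j - 1] + cost))   # replacement
        previous_row = current_row
    return previous_row[-1]
```

The classic example: `levenshtein("kitten", "sitting")` is 3 (two replacements and one insertion).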
I decided that what I wanted was a difference based on words rather than characters. I find the solution, the difference counter that I give below, pretty handy, though it sometimes gives a score for differences that I don’t like. Try it yourself with a variety of strings and you’ll see it makes a pretty good attempt. It is, of course, slow, because of the tedium of breaking text down into words and white-space. In normal use, this is only done once, when importing text into the database, at which point it is placed in an ‘inversion table’. One can use this data to test the similarity of the original text, which is much faster. Just so as to include those stuck on SQL Server 2000, I’ve made the function use an NTEXT parameter rather than a VARCHAR(MAX), though the latter would have made for a simpler routine.
“Cleaning data is not
an exact science”
In reality, every time one comes across a requirement to check for differences, there are subtle requirements that are never quite the same. Cleaning data is not an exact science. I generally prefer to ignore ‘white-space’, including new-lines and punctuation, when checking for differences. My approach is to break text down into words and ‘not-words’, or white-space; I refer to these as different types of token. The table function I give below allows you to define a word in terms of the characters that make one up, since the definition differs between languages. The routine is generally, though not always, much faster if one uses a ‘number table’, but I decided that creating one for this article would be a distraction for the reader.
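The same tokenising idea can be sketched in a few lines of Python with the standard `re` module (the function name `word_tokens` and the default character classes are my own choices; I have widened the classes to both letter cases, since Python regular expressions are case-sensitive where PATINDEX under a typical SQL Server collation is not):

```python
import re

def word_tokens(text: str,
                word_start: str = "a-zA-Z",
                word_chars: str = "-a-zA-Z'") -> list[tuple[str, int]]:
    """Split text into a sequence of (token, type) pairs:
    type 1 = word, type 2 = white-space/punctuation ('not-word')."""
    tokens = []
    pos = 0
    # a word is a word-start character followed by word characters
    for match in re.finditer(f"[{word_start}][{word_chars}]*", text):
        if match.start() > pos:                      # the gap before the word
            tokens.append((text[pos:match.start()], 2))
        tokens.append((match.group(), 1))            # the word itself
        pos = match.end()
    if pos < len(text):                              # any trailing not-word text
        tokens.append((text[pos:], 2))
    return tokens
```

Because the hyphen and apostrophe are in the word-character class, ‘don't’ and ‘set-based’ come out as single words, which mirrors what the TSQL defaults are aiming at.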
With the use of the ‘parsing’ table function, I then do a simple outer join between the two collections of words, and count the number of times that the minimum ‘best-fit’ between words changes across the sequence of words. This is, of course, an approximation: I should be using sliders and other devices that use iteration. At some point, one has to hand over to the computer scientists. I tend to stop at the point where the routine does the job I want.
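The counting scheme can be illustrated in Python as follows. This is my own simplified rendering of the idea — match each word to its nearest occurrence on the other side, record the ‘skew’, and count the points where the skew changes along the sequence — and it will not reproduce the TSQL function’s scores exactly:

```python
def word_difference_count(sample: str, comparison: str) -> int:
    """Count changes in the best-fit 'skew' along the word sequence:
    a rough, illustrative analogue of the TSQL approach."""
    a = sample.lower().split()
    b = comparison.lower().split()
    skews = []
    for i, word in enumerate(a):
        matches = [j for j, w in enumerate(b) if w == word]
        if matches:
            # skew: how far the word has moved between the two texts
            skews.append(min(abs(j - i) for j in matches))
        else:
            # a missing word gets a position-dependent marker, so a run of
            # missing words registers as a string of changes
            skews.append(("missing", i))
    # words in the comparison that never appear in the sample also count
    skews += [("extra", j) for j, w in enumerate(b) if w not in a]
    # count the points where the best-fit skew changes along the sequence
    return sum(1 for prev, cur in zip(skews, skews[1:]) if prev != cur)
```

Identical texts give a count of 0, since every skew is 0 and nothing ever changes; each insertion, deletion or move shows up as one or more skew transitions.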
As a flourish, I’ve provided, at the end, a variation of the function that returns a single-row table giving the first point at which the two samples of text diverge. It is really just a by-product of the first routine, but I slipped it in to suggest some of the many ways the routine can be adapted for particular purposes. It is surprisingly handy for applications such as summary reports of the latest changes made to stored procedures!
First, the ‘classic’ Levenshtein string distance in TSQL (using strings instead of arrays):
CREATE FUNCTION Levenshtein_Distance
  (@Source NVARCHAR(4000), @Target NVARCHAR(4000))
RETURNS INT
AS
/*
The Levenshtein string distance algorithm was developed by Vladimir
Levenshtein in 1965. It tells you the number of edits required to turn
one string into another by breaking down string transformation into
three basic operations: adding, deleting, and replacing a character.
Each operation is assigned a cost of 1. Leaving a character unchanged
has a cost of 0.
This is a translation of 'Fast, memory efficient Levenshtein algorithm'
by Sten Hjelmqvist, originally converted to SQL by Arnold Fribble
http://www.codeproject.com/Articles/13525/Fast-memory-efficient-Levenshtein-algorithm
*/
BEGIN
  DECLARE @MaxDistance INT
  SELECT @MaxDistance = 200
  DECLARE @SourceStringLength INT, @TargetStringLength INT,
          @ii INT, @jj INT,
          @SourceCharacter NCHAR, @Cost INT, @Cost1 INT,
          -- create two work vectors of integer distances
          @Current_Row NVARCHAR(200), @Previous_Row NVARCHAR(200),
          @Min_Cost INT
  SELECT @SourceStringLength = LEN(@Source),
         @TargetStringLength = LEN(@Target),
         @Previous_Row = '', @jj = 1, @ii = 1, @Cost = 0,
         @MaxDistance = 200
  -- do the degenerate cases
  IF @Source = @Target RETURN (@Cost);
  IF @SourceStringLength = 0 RETURN @TargetStringLength;
  IF @TargetStringLength = 0 RETURN @SourceStringLength;
  -- initialize the previous row of distances
  -- this row is the edit distance for an empty source string:
  -- the distance is just the number of characters to delete from the target
  WHILE @jj <= @TargetStringLength
    SELECT @Previous_Row = @Previous_Row + NCHAR(@jj), @jj = @jj + 1
  WHILE @ii <= @SourceStringLength
    BEGIN
      SELECT @SourceCharacter = SUBSTRING(@Source, @ii, 1),
             @Cost1 = @ii, @Cost = @ii,
             @Current_Row = '', @jj = 1, @Min_Cost = 4000
      WHILE @jj <= @TargetStringLength
        BEGIN
          -- use the formula to fill in the rest of the row
          SET @Cost = @Cost + 1
          --v1[j + 1] = Minimum(v1[j] + 1, v0[j + 1] + 1, v0[j] + cost);
          SET @Cost1 = @Cost1
            - CASE WHEN @SourceCharacter = SUBSTRING(@Target, @jj, 1)
                   THEN 1 ELSE 0 END
          IF @Cost > @Cost1 SET @Cost = @Cost1
          SET @Cost1 = UNICODE(SUBSTRING(@Previous_Row, @jj, 1)) + 1
          IF @Cost > @Cost1 SET @Cost = @Cost1
          IF @Cost < @Min_Cost SET @Min_Cost = @Cost
          SELECT @Current_Row = @Current_Row + NCHAR(@Cost), @jj = @jj + 1
        END
      IF @Min_Cost > @MaxDistance RETURN -1
      -- copy the current row to the previous row for the next iteration
      SELECT @Previous_Row = @Current_Row, @ii = @ii + 1
    END
  RETURN @Cost
END
GO
…and now the equivalent system for detecting word differences
IF OBJECT_ID(N'dbo.uftWordTokens') IS NOT NULL
  DROP FUNCTION dbo.uftWordTokens
GO
/*------------------------------------------------------------*/
CREATE FUNCTION [dbo].[uftWordTokens]
  (
    @string NTEXT,
    @WordStartCharacters VARCHAR(255) = 'a-z',
    @WordCharacters VARCHAR(255) = '-a-z'''
  )
RETURNS @Results TABLE
  (
    SeqNo INT IDENTITY(1, 1),
    Item VARCHAR(255),
    TokenType INT
  )
AS
/*
This table function divides up the words and the spaces between the
words in some text, and produces a table of the two types of token in
the sequence in which they were found
*/
BEGIN
  DECLARE @Pos INT,        --index of current search
          @WhereWeAre INT, --index into string so far
          @ii INT,         --the number of words found so far
          @next INT,       --where the next search starts
          @size INT        --the total size of the text
  SELECT @ii = 0, @WhereWeAre = 1, @size = DATALENGTH(@string)
  WHILE @Size >= @WhereWeAre
    BEGIN
      SELECT @pos = PATINDEX('%[' + @wordStartCharacters + ']%',
                             SUBSTRING(@string, @whereWeAre, 4000))
      IF @pos > 0
        BEGIN
          IF @pos > 1
            INSERT INTO @Results (item, tokentype)
              SELECT SUBSTRING(@String, @whereWeAre, @pos - 1), 2
          SELECT @next = @WhereWeAre + @pos, @ii = @ii + 1
          SELECT @pos = PATINDEX('%[^' + @wordCharacters + ']%',
                                 SUBSTRING(@string, @next, 4000) + ' ')
          INSERT INTO @Results (item, tokentype)
            SELECT SUBSTRING(@String, @next - 1, @pos), 1
          SELECT @WhereWeAre = @next + @pos - 1
        END
      ELSE
        BEGIN
          IF LEN(REPLACE(SUBSTRING(@String, @whereWeAre, 4000), ' ', '!')) > 0
            INSERT INTO @Results (item, tokentype)
              SELECT SUBSTRING(@String, @whereWeAre, 4000), 2
          SELECT @whereWeAre = @WhereWeAre + 4000
        END
    END
  RETURN
END
/*
Tests:
SELECT '[' + item + ']', tokentype
  FROM dbo.uftWordTokens('This has been relentlessly ,^----tested', DEFAULT, DEFAULT)
SELECT '[' + item + ']', tokentype
  FROM dbo.uftWordTokens('This has been relentlessly tested !', DEFAULT, DEFAULT)
SELECT item, tokentype
  FROM dbo.uftWordTokens('This has been', DEFAULT, DEFAULT)
SELECT '[' + item + ']', tokentype
  FROM dbo.uftWordTokens(' <!-- 23 343.43 <div>Hello there .... -->', DEFAULT, DEFAULT)
*/
GO
IF OBJECT_ID(N'dbo.ufnDifferencesInText') IS NOT NULL
  DROP FUNCTION dbo.ufnDifferencesInText
GO
/*------------------------------------------------------------*/
CREATE FUNCTION dbo.ufnDifferencesInText
  (@Sample NTEXT, @comparison NTEXT)
RETURNS INT
AS
BEGIN
  DECLARE @results TABLE
    (
      token_ID INT IDENTITY(1, 1),
      sequenceNumber INT,
      Sample_ID INT,
      Item VARCHAR(255),
      TokenType INT
    )
  /*
  This function returns the number of differences it found between two
  pieces of text
  */
  INSERT INTO @results (SequenceNumber, Sample_ID, Item, Tokentype)
    SELECT seqno, 1, item, tokentype
      FROM dbo.uftWordTokens(@sample, DEFAULT, DEFAULT)
  INSERT INTO @results (SequenceNumber, Sample_ID, Item, Tokentype)
    SELECT seqno, 2, item, tokentype
      FROM dbo.uftWordTokens(@comparison, DEFAULT, DEFAULT)
  DECLARE @closestMatch TABLE (sequenceNumber INT, skew INT)
  INSERT INTO @closestMatch (sequencenumber, skew)
    SELECT COALESCE(a.sequencenumber, b.sequencenumber),
           COALESCE(MIN(ABS(COALESCE(b.sequenceNumber, 1000)
                            - COALESCE(a.sequencenumber, 1000))), -1)
      FROM (SELECT * FROM @results WHERE sample_ID = 1 AND tokentype = 1) a
        FULL OUTER JOIN
           (SELECT * FROM @results WHERE sample_ID = 2 AND tokentype = 1) b
          ON a.item = b.item
      GROUP BY COALESCE(a.sequencenumber, b.sequencenumber)
      ORDER BY COALESCE(a.sequencenumber, b.sequencenumber)
  RETURN
    (
      SELECT SUM(CASE WHEN a.skew - b.skew = 0 THEN 0 ELSE 1 END)
        FROM @closestmatch a
          INNER JOIN @closestMatch b
            ON b.sequenceNumber = a.sequenceNumber + 2
    )
END
GO
SELECT dbo.ufnDifferencesInText('I am a piece of text',
                                'I am a piece of text') --0
SELECT dbo.ufnDifferencesInText('I am a piece of text',
                                'I am not a piece of text') --1
SELECT dbo.ufnDifferencesInText('I am a piece of text',
                                'I am piece a a a of text') --2
SELECT dbo.ufnDifferencesInText('I piece of text',
                                'I am a piece of text') --1
SELECT dbo.ufnDifferencesInText('I am a pot of jam',
                                'I am a piece of text') --3
SELECT dbo.ufnDifferencesInText('I am a pot of jam',
                                'I am a pot of jam beloved by humans') --3
SELECT dbo.ufnDifferencesInText('I am a piece of text',
                                'text of piece a am I') --4
SELECT dbo.ufnDifferencesInText('I am a piece of text',
                                'this is completely different') --5
SELECT dbo.ufnDifferencesInText('I am a piece of text', '') --5
SELECT dbo.ufnDifferencesInText('', 'I am a piece of text') --5
SELECT dbo.ufnDifferencesInText('Call me Ishmael. Some years ago -- never mind how long precisely -- having little or no money in my purse, and nothing particular to interest me on shore, I thought I would sail about a little and see the watery part of the world. It is a way I have of driving off the spleen, and regulating the circulation. Whenever I find myself growing grim about the mouth; whenever it is a damp, drizzly November in my soul; whenever I find myself involuntarily pausing before coffin warehouses, and bringing up the rear of every funeral I meet; and especially whenever my hypos get such an upper hand of me, that it requires a strong moral principle to prevent me from deliberately stepping into the street, and methodically knocking people''s hats off -- then, I account it high time to get to sea as soon as I can. This is my substitute for pistol and ball. With a philosophical flourish Cato throws himself upon his sword; I quietly take to the ship. There is nothing surprising in this.
If they but knew it, almost all men in their degree, some time or other, cherish very nearly the same feelings towards the ocean with me.' , 'Call me Ishmael. Some years ago -- never mind how long precisely -- having little or no money in my purse, and nothing particular to interest me on shore, I thought I would sail about a little and see the watery part of the world. It is a way I have of driving off the spleen, and regulating the circulation. Whenever I find myself growing grim about the mouth; whenever it is a damp, drizzly November in my soul; whenever I find myself involuntarily pausing before coffin warehouses, and bringing up the rear of every funeral I meet; and especially whenever my hypos get such an upper hand of me, that it requires a strong moral principle to prevent me from deliberately stepping into the street, and methodically knocking people''s hats off -- then, I account it high time to get to sea as soon as I can. This is my substitute for pistol and ball. With a philosophical flourish Cato throws himself upon his sword; I quietly take to the ship. There is nothing surprising in this. 
If they but knew it, almost all men in their degree, some time or other, cherish very nearly the same feelings towards the ocean with me.')
-- =============================================
-- Description: A routine that returns a single-row table which
-- gives the context of the first difference between two
-- strings
-- =============================================
IF OBJECT_ID(N'dbo.uftShowFirstDifference') IS NOT NULL
  DROP FUNCTION dbo.uftShowFirstDifference
GO
CREATE FUNCTION uftShowFirstDifference
  (
    -- Add the parameters for the function here
    @sample NTEXT,
    @comparison NTEXT
  )
RETURNS @result TABLE
  (
    -- Add the column definitions for the TABLE variable here
    first VARCHAR(2000),
    second VARCHAR(2000),
    [where] INT
  )
AS
BEGIN
  DECLARE @results TABLE
    (
      token_ID INT IDENTITY(1, 1),
      sequenceNumber INT,
      Sample_ID INT,
      Item VARCHAR(255),
      TokenType INT
    )
  INSERT INTO @results (SequenceNumber, Sample_ID, Item, Tokentype)
    SELECT seqno, 1, item, tokentype
      FROM dbo.uftWordTokens(@sample, DEFAULT, DEFAULT)
  INSERT INTO @results (SequenceNumber, Sample_ID, Item, Tokentype)
    SELECT seqno, 2, item, tokentype
      FROM dbo.uftWordTokens(@comparison, DEFAULT, DEFAULT)
  DECLARE @closestMatch TABLE (sequenceNumber INT, skew INT)
  INSERT INTO @closestMatch (sequencenumber, skew)
    SELECT COALESCE(a.sequencenumber, b.sequencenumber),
           COALESCE(MIN(ABS(COALESCE(b.sequenceNumber, 1000)
                            - COALESCE(a.sequencenumber, 1000))), -1)
      FROM (SELECT * FROM @results WHERE sample_ID = 1 AND tokentype = 1) a
        FULL OUTER JOIN
           (SELECT * FROM @results WHERE sample_ID = 2 AND tokentype = 1) b
          ON a.item = b.item
      GROUP BY COALESCE(a.sequencenumber, b.sequencenumber)
      ORDER BY COALESCE(a.sequencenumber, b.sequencenumber)
  DECLARE @first VARCHAR(2000)
  DECLARE @firstDifference INT
  DECLARE @second VARCHAR(2000)
  SELECT @FirstDifference = MIN(sequenceNumber)
    FROM @closestMatch
    WHERE skew <> 0
  SELECT @first = '', @second = ''
  SELECT TOP 10 @first = COALESCE(@First, '') + item
    FROM @results
    WHERE sample_ID = 1 AND sequenceNumber >= @FirstDifference
    ORDER BY SequenceNumber
  SELECT TOP 10 @second = COALESCE(@second, '') + item
    FROM @results
    WHERE sample_ID = 2 AND sequenceNumber >= @FirstDifference
    ORDER BY SequenceNumber
  INSERT INTO @result (first, Second, [where])
    SELECT [first] = @First, [second] = @second, [where] = @FirstDifference
  RETURN
END
GO
SELECT * FROM dbo.uftShowFirstDifference('I am a piece of text',
                                         'I am a piece of text')
-- NULL
SELECT * FROM dbo.uftShowFirstDifference('I am a piece of text',
                                         'I am not a piece of text')
--a piece of text   not a piece of text   5
SELECT * FROM dbo.uftShowFirstDifference('I am a piece of text',
                                         'I am piece a a a of text')
--a piece of text   piece a a a of   5
SELECT * FROM dbo.uftShowFirstDifference('I piece of text',
                                         'I am a piece of text')
--piece of text   am a piece of text   3
SELECT * FROM dbo.uftShowFirstDifference('I am a pot of jam',
                                         'I am a piece of text')
--pot of jam   piece of text   7
SELECT * FROM dbo.uftShowFirstDifference('I am a pot of jam',
                                         'I am a pot of jam beloved by humans')
--   beloved by humans   13
SELECT * FROM dbo.uftShowFirstDifference('I am a piece of text',
                                         'text of piece a am I')
--I am a piece of text   of piece a am   1
SELECT * FROM dbo.uftShowFirstDifference('I am a piece of text',
                                         'this is completely different')
--I am a piece of   this is completely different   1
SELECT * FROM dbo.uftShowFirstDifference('I am a piece of text', '')
--I am a piece of   1
SELECT * FROM dbo.uftShowFirstDifference('', 'I am a piece of text')
--   I am a piece of   1