Intelligent Database Design Using Hash Keys

Your application may require an index based on a lengthy string, or even worse, a concatenation of two strings, or of a string and one or two integers. In a small table, you might not notice the impact. But suppose the table of interest contains 50 million rows? Then you will notice the impact both in terms of storage requirements and search performance.

Using Hash Keys instead of String Indexes

You don’t have to do it this way. There is a very slick alternative, using what are known alternatively as hash buckets or hash keys.

What is a Hash?

In brief, a hash is the integer result of an algorithm (known as a hash function) applied to a given string. You feed the algorithm a string and you get back an integer. If you use a good hash function, there is only a small chance that two different strings will yield the same hash value; when this does occur, it is known as a hash collision. Suppose that you fed this article into a hash algorithm, then changed one character in the article and fed it back in: the algorithm would return a different integer.
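In SQL Server, for example, the built-in CHECKSUM function (discussed further below) behaves as a simple hash function:

```sql
-- Feeding two slightly different strings to a hash function
-- yields two different integers (barring a rare collision)
SELECT CHECKSUM('Intelligent Database Design')  AS HashA,
       CHECKSUM('Intelligent Database Designs') AS HashB;
```

Note that CHECKSUM follows the column or server collation, so under a case-insensitive collation two strings differing only in case hash to the same value.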

Hash Keys in Database Design

Now, how can we apply hash keys intelligently in our database designs? Suppose that we have these columns in the table of interest:

Column Name    Data Type
Name           Varchar(50)
GroupName      Varchar(50)

A compound index on both these columns would consume up to 100 bytes per row. Given 50 million rows, this is a problem. A hash key based on these two columns is vastly smaller, at 4 bytes per row. Even better, we don't have to store the hash keys in the table itself; more accurately, they are stored only once, in the index. We create a computed column whose formula is the hash of these two columns, then index that computed column and dispense with the compound index on the two string columns.
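As a sketch, assuming the AdventureWorks HumanResources.Department table (which supplies the sample rows later in this article), the computed column and its index might be created like this:

```sql
-- Add a computed column that hashes the two string columns.
-- CHECKSUM is deterministic, so the column can be indexed.
ALTER TABLE HumanResources.Department
    ADD HashKey AS CHECKSUM(Name, GroupName);

-- Index the 4-byte hash rather than the two varchar(50) columns.
CREATE NONCLUSTERED INDEX IX_Department_HashKey
    ON HumanResources.Department (HashKey);
```

Because the column is computed, SQL Server maintains the hash automatically on every INSERT and UPDATE; only the index materializes the values.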

The basic process is as follows:

  1. The user (whether a human or an application) supplies the values of interest.
  2. These values are converted into a hash key.
  3. The database engine searches the index on the hashed column, returning the required row or a small subset of matching rows.

In a 50 million row table, there will undoubtedly be hash collisions, but that isn’t the point. The set of rows returned will be dramatically smaller than the set of rows the engine would have to visit in order to find an exact match on the original query values. You isolate a small subset of rows using the hash key and then perform an exact-string match against the hits. A search based on an integer column can be dramatically faster than a search based on a lengthy string key, and more so if it is a compound key.

Hash Key Algorithms using the Checksum Function

There are several hashing algorithms available, the simplest of which is built into SQL Server in the form of the CHECKSUM function. For example, the following query demonstrates how to obtain the hash key for any given value or combination of values:
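Assuming the AdventureWorks sample database, whose HumanResources.Department table supplies the rows shown below, the query might look like this:

```sql
-- Hash each Name/GroupName pair into a single integer
SELECT Name,
       GroupName,
       CHECKSUM(Name, GroupName) AS HashKey
FROM HumanResources.Department
ORDER BY HashKey;
```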

This results in the following rows (clipped to 10 for brevity):

Name                      GroupName                               HashKey
Tool Design               Research and Development                -2142514043
Production                Manufacturing                           -2110292704
Shipping and Receiving    Inventory Management                    -1405505115
Purchasing                Inventory Management                    -1264922199
Document Control          Quality Assurance                       -922796840
Information Services      Executive General and Administration    -904518583
Quality Assurance         Quality Assurance                       -846578145
Sales                     Sales and Marketing                     -493399545
Production Control        Manufacturing                           -216183716
Marketing                 Sales and Marketing                     -150901473

You have a number of choices as to how you create the hash key. You might use an INSERT trigger, or a stored procedure that creates the hash key once the values of interest have been obtained, or even an UPDATE query that creates the hash keys and populates the hash column retroactively (so that you can apply this technique to tables that already contain millions of rows). As stated above, my preferred solution is to "store" the hash keys in a computed column that is then indexed. As such, the index contains the hash keys but the table itself does not.
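For the retroactive route, a minimal sketch, assuming you have added an ordinary integer column named HashKey (rather than a computed one) to the existing table:

```sql
-- One-time backfill of a plain int column on an existing table
-- (hypothetical; a computed column needs no such step)
UPDATE HumanResources.Department
SET HashKey = CHECKSUM(Name, GroupName)
WHERE HashKey IS NULL;
```

With a plain column, a trigger or application code must also keep the hash current on later inserts and updates, which is one reason to prefer the computed-column approach.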

Using this technique, you might approach the problem as follows, assuming that the front end passes in the target values for Name and GroupName:
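A sketch of such a search, assuming the computed HashKey column described above and hypothetical parameters @Name and @GroupName supplied by the front end:

```sql
-- Hypothetical parameter values from the front end
DECLARE @Name      varchar(50) = 'Production';
DECLARE @GroupName varchar(50) = 'Manufacturing';

SELECT Name, GroupName
FROM HumanResources.Department
WHERE HashKey = CHECKSUM(@Name, @GroupName)  -- fast 4-byte index seek
  AND Name = @Name                           -- exact match weeds out
  AND GroupName = @GroupName;                -- any hash collisions
```

The integer predicate narrows the candidate rows via the index; the two string predicates then eliminate any collisions from the result.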

Conclusion

This approach can yield considerable performance benefits and I encourage you to test it out on your own systems. The technique, as presented here, assumes that the search targets exist in a single table, which may not always be the case. I am still experimenting with ways to use this technique to search joined tables, and when I come up with the best approach, I will let you know.