SQL Server Performance Testing

When faced with two viable solutions to a badly compromised database design, one using clustered indexes and the other compound primary keys, Grant Fritchey took the only sensible route: he gathered hard performance data...

Clustered indexes and primary keys

By Grant Fritchey

The company I work for is implementing a custom system, built in Visual Studio and running against SQL Server 2000, to replace some old mainframe applications. It’s an all-new database design that will meet all of the old application data requirements and a whole slew of new ones. I was responsible for the initial design, which I completed and handed over to a development team. I had been working quite closely with them for some time on the application, but I got pulled in other directions and the app and database proceeded without me. One day, the development team raised a concern that the database was far too slow to support the anticipated transaction load. I began my investigation.

The problem

The application in question has a few data requirements that are different from those of the average application. Basically, at its heart, it is a document management system. However, every change to large sections of the data has to be stored historically, so that we have multiple versions of pieces of a given document. Not all of the data stored is recreated for each version. We needed to know that change ‘A’ was included in version ‘1’ but that change ‘B’ was in version ‘3’, and, for a completely different piece of data stored in another table, that change ‘Q’ was included in version ‘2’. The data looks something like this:

Table 1

Version   Value
1         ‘A’
2
3         ‘B’

Table 2

Version   Value
1         ‘Some other value’
2         ‘Q’
3
When queries are run, the latest data in Table 1 is ‘B’ from version 3 and the latest data in Table 2 is ‘Q’ from version 2.
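To make that "latest value per table" rule concrete, here is a minimal sketch of the lookup logic. It is illustrative only: SQLite stands in for SQL Server 2000, and the table and column names are assumptions based on the example tables above, not the application's real schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Each table stores a row only for the versions in which its value changed.
cur.execute("CREATE TABLE Table1 (Version INTEGER PRIMARY KEY, Value TEXT)")
cur.execute("CREATE TABLE Table2 (Version INTEGER PRIMARY KEY, Value TEXT)")
cur.executemany("INSERT INTO Table1 VALUES (?, ?)", [(1, "A"), (3, "B")])
cur.executemany("INSERT INTO Table2 VALUES (?, ?)",
                [(1, "Some other value"), (2, "Q")])

def latest(table, as_of_version):
    # The "current" value as of a given document version is the row with
    # the highest Version that does not exceed the requested version.
    cur.execute(
        f"SELECT Value FROM {table} WHERE Version <= ? "
        "ORDER BY Version DESC LIMIT 1",
        (as_of_version,),
    )
    row = cur.fetchone()
    return row[0] if row else None

print(latest("Table1", 3))  # 'B'  (version 3 changed Table 1)
print(latest("Table2", 3))  # 'Q'  (Table 2 last changed in version 2)
```

In T-SQL on SQL Server 2000 the equivalent query would use `SELECT TOP 1 ... ORDER BY Version DESC` rather than `LIMIT`, since `LIMIT` is a SQLite/MySQL construct.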