The SQL Server Sqlio Utility

If, before deployment, you need to push the limits of your disk subsystem to determine whether the hardware's I/O capacity meets the needs of a database application, to establish performance baselines, or to identify any performance-related issues, then why not use the sqlio utility?

In my last article, “The SQL Server Sqliosim Utility,” I discussed how to use sqliosim to simulate SQL Server input/output (I/O) patterns in order to stress-test a system’s disk subsystem. Although the tool is useful for simulating user and system activity, it is not designed for performance testing. For example, you would not use sqliosim to conduct benchmark tests or to determine a storage system’s I/O capacity. For that, you need a tool such as sqlio, another free utility available from Microsoft.

Despite its name, sqlio is not part of SQL Server, yet it can be an invaluable tool for determining whether the disk drives slated to support SQL Server can meet the system’s I/O needs. The tool lets you push the limits of your disk subsystem in order to determine the hardware’s I/O capacity and to identify performance-related issues before deploying SQL Server. You can use the tool to test a variety of I/O types and sizes against various disk subsystems, including direct attached storage, network attached storage, and storage area networks.

Getting Started with the Sqlio Utility

To use sqlio, you must first download the utility from the Microsoft Download Center and install it on the Windows computer where you want to run your tests. You will of course need to be able to access any drives that your tests will target. For this article (in order to keep things simple), I ran my tests against a USB flash drive, which on my system is the E: drive.

Note: Thorough testing is a time-consuming process that hits your storage hard. If your initial interest is simply to check out the tool and see how it works, I recommend you start out small, as we're doing here. Whatever you do, don't run the tool on a live production server or against live production storage.

When you first install the sqlio utility, you’ll find several files in the installation folder:

  • EULA.rtf – End-user license agreement that covers sqlio usage.
  • param.txt – Configuration options that identify the target drives, test file names, and specifics about running the tests.
  • readme.txt – Comprehensive details about the utility and its supported options.
  • sqlio.exe – The utility’s executable file.
  • Using SQLIO.rtf – General information about using the utility.

The readme.txt file is a particularly valuable reference for understanding the options available to the sqlio utility. Keep it close at hand. That said, the starting point for using the utility is the param.txt file, so let’s look at how it works.

Modifying the Configuration File

The param.txt file specifies options related to the test files used to test each drive. The configuration file also includes options that determine how to run those tests. When you reference the configuration file in your sqlio command, the utility uses these options to create test files in the target drives, if the files don’t already exist, and to test I/O patterns against the test files, unless those options are overridden within the command.

The configuration file includes a listing (row) for each drive to be tested. In other words, each row represents a unique I/O path. You can configure several option values associated with each path, including the target drive and file name. By default, the file includes the following information:
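The exact contents of your copy might vary slightly, but the default file looks something like this:

  c:\testfile.dat 2 0x0 100
  #d:\testfile.dat 2 0x0 100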

The first line identifies c:\ as the target drive and testfile.dat as the test file to create on that drive. The 2 refers to the number of threads to use when testing the I/O patterns, and 0x0 is the affinity mask used to bind operations to specific CPUs. The 0x0 value indicates that all CPUs should be used, which is fine for most operations.

The final value, 100, reflects the size of the test file in megabytes (MB). In reality, this would be a ridiculously small file to create if you really want to test a system. You should normally aim for a file size at least two to four times the size of the storage data cache, unless you simply want to try out the tool, as we're doing here.

The second line of the file is a placeholder. The hash mark (#) at the beginning of the line indicates that it is a comment and will not be processed by the utility. But the second line does point to the fact that you can specify multiple I/O paths within the param.txt file, which means you can run a single test against multiple drives. Before you do that, however, you should first test the I/O paths individually to isolate and address any problems and then test them together.

For this article, I modified only the drive letter in the first line (from c: to e:) and saved the file. But you can modify the file however you need. Keep in mind, though, if you’re new to the sqlio utility, you might want to keep your first test short and simple, just to get a feel for how it all works.
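After that one change, the first (uncommented) line of my param.txt file reads as follows:

  e:\testfile.dat 2 0x0 100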

Creating the Test File

Like many command-line utilities, sqlio supports multiple ways to get things done. For example, one approach to setting up your tests is to call the param.txt file for each test you run, no matter how many tests. If the test file doesn’t exist, the utility creates it. Another approach is to run the utility once for each I/O path, using the param.txt file, and then run more extensive tests after the test files have been created.

For this article, I took the latter approach, primarily because it allowed me to verify and view the results of the first operation before going on to the next. Also, in an actual testing scenario, each phase can take a significant amount of time, so it's nice to know that one phase has completed properly before moving on.

Because our example targets only one drive, we need to create only one test file, which means we need to run only one sqlio command, at least initially. If you want to follow along, launch a Windows command prompt window and change to the folder where you’ve installed the sqlio utility. Then, at the command prompt, enter the following command:
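A command along the following lines does the job (the option values match the ones discussed below; adjust them to suit your own tests):

  sqlio -kW -s10 -fsequential -o8 -b64 -Fparam.txt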

The command starts by calling the sqlio utility. This is followed by the -k option, which specifies the I/O type (R for read operations and W for write operations). Next, the command uses the -s option to specify the test duration in seconds, in this case, 10.

The -f option determines how data blocks are handled. Although you can specify a numerical value, which indicates the number of blocks between I/Os, more often than not you'll use the value random or sequential. A random value indicates that blocks will be chosen randomly, as is often the case with T-SQL queries. The sequential value specifies that blocks should be accessed sequentially, which is typical for operations such as backups and bulk loads. For creating our test file, the sequential option works fine.

The next argument in our command is -o, which indicates the number of I/Os that can be outstanding in a single thread. In this case, I’ve specified 8, so each thread can support up to eight outstanding I/O requests. Next we come to the -b option. This is the I/O block size in kilobytes (KB). In the example, I specified 64.

The last option in the command is -F, which points to the configuration file and the parameters defined within it. When you run the command, it creates the test file on the target drive and returns details about the execution, as shown in the following results:

The results reflect the options we specified within the configuration file and in the command, such as the number of threads, block size, target drive, and test file. In addition, you'll find throughput metrics that show the number of I/Os and MB per second. Although the throughput metrics might provide an early indication of any problems, chances are it's not until you perform more thorough testing that these values become meaningful.

Creating the Batch File

The next step is to define a set of sqlio commands that use a variety of I/O sizes and types to test each I/O path. Note, however, that you're not trying to simulate SQL Server I/O patterns. Instead, you're trying to determine your I/O subsystem's capacity. That means running tests for both read and write operations as well as for random and sequential ones. You should also test different thread counts, varying the number of outstanding I/O requests for each.

The following commands show a subset of tests you might run against the test file on the target drive:
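For example, covering both I/O kinds and both access patterns at one and two threads each yields eight commands along these lines (the ordering and exact values are illustrative, so adapt them to your own environment):

  sqlio -kR -s60 -fsequential -o4 -b64 -t1 -LS -BH -dE testfile.dat
  sqlio -kR -s60 -fsequential -o4 -b64 -t2 -LS -BH -dE testfile.dat
  sqlio -kR -s60 -frandom -o4 -b64 -t1 -LS -BH -dE testfile.dat
  sqlio -kR -s60 -frandom -o4 -b64 -t2 -LS -BH -dE testfile.dat
  sqlio -kW -s60 -fsequential -o4 -b64 -t1 -LS -BH -dE testfile.dat
  sqlio -kW -s60 -fsequential -o4 -b64 -t2 -LS -BH -dE testfile.dat
  sqlio -kW -s60 -frandom -o4 -b64 -t1 -LS -BH -dE testfile.dat
  sqlio -kW -s60 -frandom -o4 -b64 -t2 -LS -BH -dE testfile.dat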

Notice that the commands cover read and write operations (-kR and -kW) as well as random and sequential ones (-frandom and -fsequential). The -B option is one you haven't seen before. It controls test buffering, which you can set to no buffering (N), hardware buffering (H), software buffering (S), or both (Y). We're going with -BH to best approximate how SQL Server performs, but the default is -BN.

The next option, -t, sets the number of threads used to generate the I/O patterns. The commands specify only 1 and 2, but you might also want to include 4, 8, 16, 32, 64, or higher. The maximum is 256. The examples above also test for only four outstanding requests per thread (-o4), but you should consider mixing this up as well.

For each command, I'm also specifying 64-KB block sizes (-b64), but you'll likely want to test other values, including 8 KB, 128 KB, and 256 KB. In addition, I've used the -s option to specify that each test runs for only 60 seconds. Normally, you would run your tests longer: 5 to 10 minutes for the initial tests and longer if you identify any potential problems.

Another option you haven't seen before is -L, which tells sqlio to include disk latency information in the results. This data can be important for understanding disk I/O issues. You can capture the latency timings from the system timer (S) or the processor timer (P). Generally, you'll want to go with -LS, especially on SMP machines.

The next option, -d, identifies the target drives. If you're testing more than one drive, simply specify all of them, as in -dEFG. If you do specify multiple drives in the -d option, the test file name must be the same on all target drives.

Finally, each of the sample sqlio commands includes the name of the test file, which we’ve already created. Notice we no longer need to reference the configuration file because we’re identifying the target drive and test file in our command.

Note: Microsoft recommends that you allow time between each sqlio command to let the I/O system return to an idle state. To do so, add the command timeout /T 60 between the sqlio commands in your batch file. This gives you a 60-second buffer between those commands.
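In the batch file, that looks something like the following (using two of the sample commands shown above):

  sqlio -kR -s60 -fsequential -o4 -b64 -t1 -LS -BH -dE testfile.dat
  timeout /T 60
  sqlio -kR -s60 -fsequential -o4 -b64 -t2 -LS -BH -dE testfile.dat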

When it comes time for you to actually test your I/O subsystem, you’ll likely create many more commands than what we’ve shown here. The more extensive your set of commands, the more accurately you can arrive at thresholds and identify potential issues. It’s not uncommon, for example, to run a couple hundred sqlio commands to thoroughly test your system.

The easiest way to handle multiple commands is to put them into a batch file. For example, I put the sample commands shown above into the file sqlio_commands.bat and saved it to my sqlio installation folder. I can then run those commands at any time simply by calling the batch file at a command prompt.

Running the Tests

Once you’ve created your batch file, you’re ready to go. However, when you call the file at the command prompt, you should also send the results to a text file, as shown in the following command:

As you can see, I’m simply calling the batch file and saving the results to the sqlio_output.txt file. Each sqlio command in the file will run, in the order specified, and output the results to the results file. For example, the first command generated the following results on my system and saved them to the sqlio_output.txt file:

Although you can review the general information to verify that the test you wanted is what you got, it’s the cumulative data that will likely be your focus. For each test, you’ll receive not only the throughput metrics, as you saw earlier, but also the latency metrics, which include the minimum, maximum, and average times in milliseconds (ms).

Also included with the latency information is a histogram of latency timings. The first row of the histogram (ms) provides buckets for 0 ms through 23 ms. The 24+ bucket includes everything from 24 ms and above.

The second row in the histogram (%) provides the percentage of I/Os that completed within a particular bucket. For example, in the 4 ms bucket, 8% of the I/Os completed. Together, the bucket percentages should add up to 100%, or close to it. If the numbers don't match up exactly, it's because the values in the individual buckets are rounded.

Table 1 lists the cumulative data from each of the eight sample commands in our batch file, not including the histograms.

Command   IOs/sec   MBs/sec   Min Latency (ms)   Avg Latency (ms)   Max Latency (ms)
1         427.66    26.72     2                  8                  17
2         427.20    26.70     3                  18                 35
3         39.61     2.47      25                 100                176
4         39.78     2.48      25                 200                376
5         427.74    26.73     3                  9                  13
6         426.58    26.66     3                  18                 22
7         39.99     2.49      25                 99                 112
8         40.02     2.50      25                 199                208

Table 1: Cumulative data returned by the eight sqlio commands

Because of the limitations of our tests, you’re not likely to glean much meaningful information from these results. (It is a jump drive, after all.) But you can see that read operations should result in higher throughput rates and I/Os than write operations and that write operations should see higher latency rates than read operations. You’ll likely also find that larger I/O sizes result in higher latencies, but you might also see increased throughput, although specific configurations can impact these results.

When you’re evaluating your test results, you should look for such issues as where a drive plateaus for random reads, how many outstanding requests it takes to reach maximum capacity for random writes, or where sequential I/Os plateau compared to random ones. You’ll also want to compare your results to the storage system’s specifications to determine whether your system is operating at the capacity that’s expected.

No Turning Back

The sqlio utility is easy to set up and simple to use, and it can provide you with extensive information about a disk drive's I/O performance before you actually deploy SQL Server. Given that the tool is free, you have little to lose by trying it out. You can perform benchmark tests for later comparison, should you run into issues, and you can determine a system's capacity and whether it will meet your needs now and in the future. If you're getting ready to implement a SQL Server solution, now is the time to run the sqlio utility. Waiting until after you deploy might already be too late.