
Monitoring SQL Server Virtual Log File Fragmentation

One of the delights of PASS is being able to pick up ideas from some of the presentations and recombine them in new and interesting ways. Tom recounts how he used two different insights to solve the problem of monitoring a large number of servers for signs of virtual log file fragmentation.

During the most recent PASS Summit in Seattle I attended a handful of sessions from some of the legends in the SQL community. One talk was by Paul Randal on logging and recovery, where he spent time discussing virtual log files (VLFs) and how you should be mindful of them and their performance implications. Another talk was by Buck Woody: he was showing off a handful of new features in SQL Server 2008, and he quickly demonstrated a PowerShell script that populated an Excel chart to give a visual representation of the row counts in your database tables.

About a week later I found myself wondering if there was a way to incorporate both of those ideas into something slightly novel: I wanted to create a report to quickly display the current state of the number of virtual log files for every database I administer on a particular instance.

Why the interest in VLFs? In Microsoft SQL Server, your transaction log may look like one file on disk, but internally that file is divided into virtual log files, splitting your physical file into virtual chunks. For an example of this, run the following:
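The query itself was not captured in this copy of the article; the command that returns this information in SQL Server 2005/2008 is DBCC LOGINFO, so a minimal example would be:

```sql
-- One row is returned per VLF in the current database's transaction log,
-- so the row count is your VLF count. Columns include FileSize,
-- FSeqNo (the sequence number), Status, and CreateLSN.
DBCC LOGINFO;
```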

The output will be one row per VLF, so the row count is the number of VLFs for the current database. You can see the size of the virtual chunks, the sequence number, the status, and even some LSN information. For a wonderful discussion of VLFs and transaction log throughput, see Kimberly L. Tripp’s blog post on the subject.

Getting details on the number of VLFs for your databases is an easy way to quickly tune a piece of your environment. Knowing this could be a “quick win”, I set out to gather metrics so that I could form a plan of action.

Gathering the Details

In the past, when I have wanted details about all of my servers, I have resorted to running a multi-server query using a Central Management Server (CMS). As useful as this can be, there are two drawbacks for me with CMS. First, I can only select an entire group of servers; it is not possible to select a distinct subset of the servers defined in the CMS. If I need to query such a subset, I have to define a new group, which is tedious at best. Second, the results need to be massaged manually if I want to put them into Excel (or something similar).

About a year ago I started to become romantically involved with Policy Based Management (PBM). Knowing that PBM could also be a possible solution, I set about configuring a policy that would go out and capture information on all of the VLFs in our shop. It took very little time to get the policy built, and it worked very well. In a matter of a few minutes I was able to see exactly which databases had more than 100 VLFs. The only problem was that it was very difficult to share this information with anyone else unless I invited them into my cube and showed them my screen after the policy was evaluated. Sure, I could save the results as XML, but that was not very helpful. What I really wanted was what Buck Woody had: a bar chart in Excel.

Knowing that Buck had used PowerShell (POSH) to achieve those results, I decided to roll up my sleeves and dive into POSH to see what I could accomplish. I wanted to keep things simple for now. My requirements were to create a report against a single instance that would display the top ten databases with more than 50 VLFs, and to display the results in a bar chart.

My First Time with POSH

After seeing Buck Woody perform his magic at PASS, I asked him if he would post the details of his script to his blog, and told him how his demo was inspiring me to create something similar for VLFs. So I had an example to review to help me get started with POSH, but what next?

For me the next step was to download PowerGUI. I like having GUIs to interact with, and would always prefer a GUI to working strictly through a command line. PowerGUI also has a Script Editor, which gives me a quick syntax checker, a valuable tool for someone new to POSH such as myself. After downloading the tools I was ready to get started.

The first thing I needed to do was figure out a way to get all the VLFs for the current instance. I tried to find VLF details inside the POSH structure for a SQL instance but could not, so I had to find a way to get the job done in T-SQL and then use that inside the POSH script. The only details I had were:

  1. I knew the DBCC command.

  2. I knew I needed to do this for each database independently.

  3. I needed to limit my result set, otherwise the graph in Excel would look congested.

The script I came up with is as follows:
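The original script was not captured in this copy of the article. A sketch that matches the stated requirements (run DBCC LOGINFO against each database, count the rows, and return the top ten databases with more than 50 VLFs) might look like this; the temp table shape matches the SQL Server 2005/2008 DBCC LOGINFO output:

```sql
-- Sketch only: count VLFs per database and return the worst offenders.
-- Assumes SQL Server 2005/2008, where DBCC LOGINFO returns these seven columns.
CREATE TABLE #VLFInfo (FileID INT, FileSize BIGINT, StartOffset BIGINT,
    FSeqNo BIGINT, [Status] INT, Parity INT, CreateLSN NUMERIC(25,0));
CREATE TABLE #VLFCount (DatabaseName SYSNAME, VLFCount INT);

DECLARE @dbname SYSNAME, @sql NVARCHAR(1000);
DECLARE db_cursor CURSOR FOR
    SELECT name FROM sys.databases WHERE state_desc = 'ONLINE';
OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @dbname;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'DBCC LOGINFO(' + QUOTENAME(@dbname, '''') + N') WITH NO_INFOMSGS;';
    INSERT INTO #VLFInfo EXEC (@sql);          -- capture one row per VLF
    INSERT INTO #VLFCount SELECT @dbname, COUNT(*) FROM #VLFInfo;
    TRUNCATE TABLE #VLFInfo;
    FETCH NEXT FROM db_cursor INTO @dbname;
END
CLOSE db_cursor; DEALLOCATE db_cursor;

-- Limit the result set so the Excel graph stays readable
SELECT TOP 10 DatabaseName, VLFCount
FROM #VLFCount
WHERE VLFCount > 50
ORDER BY VLFCount DESC;

DROP TABLE #VLFInfo; DROP TABLE #VLFCount;
```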

The next thing I did was to modify Buck’s script so that it would open a connection to a SQL Server. I did this so that in the future I would be able to easily pass a parameter to my script to have it connect to any instance I want. I could also pass in a parameter for the number of VLFs to check for, if I so desired.

The beginning of my POSH script now looked like this:
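The opening lines were not captured in this copy of the article; a sketch of a POSH opening that connects to a named instance (with 'servername' as a placeholder) could look something like this:

```powershell
# Open a connection to the target instance; replace 'servername' as needed.
# Uses Windows authentication, so run the script under credentials
# that have access to the instance.
$instance = "servername"
$connection = New-Object System.Data.SqlClient.SqlConnection
$connection.ConnectionString = "Server=$instance;Database=master;Integrated Security=True;"
$connection.Open()
```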

Replace ‘servername’ with the name of your target server and the script should connect as long as you are running the script with credentials that can access the instance. The next part of the script defines the command text and populates a dataset:
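That part of the script is also missing from this copy; a sketch, assuming the VLF-counting T-SQL is held in a string variable `$query` and `$connection` is the open connection from the previous step, would be:

```powershell
# Define the command and fill a DataSet with the VLF counts.
# $query and $connection are assumed to already exist at this point.
$command = New-Object System.Data.SqlClient.SqlCommand($query, $connection)
$adapter = New-Object System.Data.SqlClient.SqlDataAdapter($command)
$dataset = New-Object System.Data.DataSet
$adapter.Fill($dataset) | Out-Null   # Fill returns the row count; discard it
$connection.Close()
```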

Now I wanted to change things up a bit from Buck’s demo. I am no stranger to bars, but I like pie as well. I wanted Excel to show the results in a pie chart. It took a little digging to find out how to get it to work, but it was not very difficult in the end.
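The charting code was not captured in this copy of the article. A sketch of the approach, writing the dataset into a worksheet and then changing the chart type (all variable names here are illustrative, and the Excel interop assembly must be installed with Excel), might look like this:

```powershell
# Push the VLF counts into Excel via COM and chart them as a pie.
# Assumes $dataset holds two columns: database name and VLF count.
[void][Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.Interop.Excel")
$excel = New-Object -ComObject Excel.Application
$excel.Visible = $true
$workbook = $excel.Workbooks.Add()
$sheet = $workbook.Worksheets.Item(1)

# Copy the database names and VLF counts into the worksheet
$row = 1
foreach ($dr in $dataset.Tables[0].Rows) {
    $sheet.Cells.Item($row, 1) = $dr.Item(0)
    $sheet.Cells.Item($row, 2) = $dr.Item(1)
    $row++
}

# Adding a chart sheet defaults to a bar graph; then switch the type to a pie
$chart = $workbook.Charts.Add()
$chart.ChartType = [Microsoft.Office.Interop.Excel.XlChartType]::xlPie
```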

When you put the script together you will notice that, upon execution, the initial graph displayed is a bar graph, which is then followed by the pie chart. It may take a few seconds for the pie to be displayed on your screen. If you want to have more fun, change ::xlPie to ::xl3DPieExploded and add the following lines to the end of the script:
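Those extra lines were not captured in this copy of the article; the rotation effect can be achieved by stepping the chart's Rotation property, so a sketch (assuming `$chart` is the chart object with its type set to xl3DPieExploded) would be:

```powershell
# Spin the 3D exploded pie by stepping through its rotation angle.
# Chart.Rotation accepts 0-360 degrees for 3D chart types.
for ($angle = 0; $angle -le 360; $angle += 10) {
    $chart.Rotation = $angle
    Start-Sleep -Milliseconds 100   # slow the spin enough to see it
}
```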

This will create a 3D exploded pie chart that rotates.

Action Plan

So, you have a report listing the VLF counts for your databases. What actions should you take next? How do you go about correcting the issue? And I do not mean just correcting it for the moment, but correcting it so as to reduce the chances of it happening again.

Most of the time, excessive VLF fragmentation is brought about by frequent file growth in small increments. For example, a database that is set to grow its transaction log file by 5MB at a time is going to end up with a large number of VLFs should the log need to grow. The following table shows how many VLFs are added based upon the size of each growth.

  Growth increment          Number of VLFs created
  <= 64MB                   4
  > 64MB and <= 1GB         8
  > 1GB                     16

This means that if your log grew by 5MB at a time, internally the physical log file would create four virtual log files, each 1.25MB in size. Now, if your log grew by 100MB at 5MB at a time, you would have 80 VLFs, each 1.25MB in size. However, if you grew the log by 100MB in a single step instead, you would have created 8 VLFs, each 12.5MB in size. Your VLFs would be ten times larger and you would have grown your log file once instead of twenty times.

While you may not always know exactly how large your transaction log file should be, if you keep the above numbers in mind you can make a concerted effort to keep your VLF count to a minimum. Simply review the autogrowth settings for your transaction log and make certain it is sized appropriately.

If you come across a database with more than 50-100 VLFs, then you should look to take action in order to increase transaction log throughput. Kimberly Tripp does a great job of detailing what needs to be done in her blog post, and I will summarize it here.

  1. Back up your transaction log, even if you are in simple recovery mode, in order to clear all activity.

  2. Shrink the transaction log to as small as possible.

  3. Alter the database to modify the size of the transaction log, and configure your autogrowth, keeping in mind the above chart.
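Those three steps translate directly into T-SQL. A sketch for a hypothetical database MyDB with a logical log file name of MyDB_log (the sizes here are illustrative; pick values appropriate for your workload):

```sql
-- Sketch of the three remediation steps for a hypothetical database MyDB.
USE MyDB;

-- 1. Back up the log to clear activity (in simple recovery, where BACKUP LOG
--    is not allowed, a CHECKPOINT clears the active log instead)
BACKUP LOG MyDB TO DISK = 'X:\Backup\MyDB_log.trn';

-- 2. Shrink the log file to as small as possible (target size in MB)
DBCC SHRINKFILE (MyDB_log, 1);

-- 3. Resize the log in one step and set a sensible autogrowth increment,
--    keeping the VLF growth table above in mind
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log, SIZE = 8000MB, FILEGROWTH = 1000MB);
```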


Good ideas come from anywhere at any time. They often sneak up on you when you least expect it, and can be sitting in front of you without you even noticing they are there. Such is the case with VLFs and using POSH to build charts in Excel. I have known about POSH for years, but never dived into just how useful it could be for me until I saw Paul and Buck at PASS last month. Sometimes it is a piece of information from here and a piece from there that allow you to develop solutions customized for your needs at the time.

The idea that I can use POSH to quickly create graphs to give to my customers makes me excited to learn more about how I can utilize POSH in other ways. I find myself thinking about a lot of different ways I can incorporate POSH into just about everything I administer. Disk space usage, I/O throughput, missing indexes: almost anything I have a script for in my toolbox could quickly be turned into a report that is a little bit easier for others to understand.

And then things should evolve from there.
