The Mindset of the Enterprise DBA: 8 Ways To Centralize Your Work

Although it is possible to provide good service to small to medium companies as a DBA without putting in place systems and processes to manage the workload and the demands of teamwork, it isn't so in the larger enterprise. As well as standardizing and documenting, the enterprise DBA needs to centralize the management of systems. Joshua Feierman explains.

   On Adopting the Mindset of an Enterprise DBA
  The Mindset of the Enterprise DBA: Creating and Applying Standards to Our Work
  The Mindset of the Enterprise DBA: 8 Ways To Centralize Your Work
  The Mindset of the Enterprise DBA: Harnessing the Power of Automation
  The Mindset of the Enterprise DBA: Delegating Work


In the previous installment of my series on adopting the mindset of an Enterprise DBA, we discussed the practice of Standardization. Based on the comments, I think we all agree that while this practice is sometimes a bit boring or cumbersome, it is worthwhile. In this installment, we move on to the second practice: Centralization.

Defining Centralization

Centralization is about keeping your work in as few places as possible, preferably just one. To use a metaphor we database professionals will understand, it is like normalizing a database schema. When we normalize well, we ensure that a given piece of information is stored in only one place; the same should be true of our scripts, documentation, and systems. There are several advantages to this practice, but two stand out in particular:

  • By keeping things in as few places as possible, we make maintaining them easier since we only have to touch them once.
  • We always know where to find things.

The practicalities of Centralization

Each of the practices below could be an article in its own right, so I'm not going to go into too much depth; instead, I'll provide links to relevant information wherever possible (and thanks to the wonderful SQL community, it usually is possible).

Keep Your Documentation in one Place

Do you remember all that documentation you produced in part one? (You did write all that down, right? If not, go do that right now. Go on, I’ll wait. Yes, it’s that important.) All that effort will do you no good if it’s scattered around in different places and not organized logically. It’s not so much where you put it, just that it’s all in one place. Do you happen to like SharePoint? Great, make a site and a document library and put it there. Are you more partial to network drives or maybe just don’t have anything else? No worries; put everything in some folders organized in a way that you will understand. Again, it’s not how you do it, only that you do it at all.

Build a Repository

Every enterprise-minded DBA should have a repository. By ‘repository’, I mean some kind of database in which you keep useful metadata about your systems. Here are some examples of things to store in the database:

  • Server names
  • Build numbers (a.k.a. patch level, both OS and SQL Server)
  • Usage – who uses which databases on the server, and how to contact them
  • Environment – is this a production or a test server?
  • Special instructions – is there anything special or unusual about this server?

The way you structure it isn't important, so long as you keep it updated religiously. I don't want to skip ahead too much, but you'd be wise to figure out a way to automate the updating process so you never have to worry about forgetting to change some detail. Believe me, this will come in very handy when a server goes down in the middle of the night and you need to quickly send a notice to everyone affected by the outage (or determine whether the outage is even an issue at all; perhaps the server is only used during the daytime). If you want some pointers in this area, a good place to start would be Rodney Landrum's SQL Server Tacklebox or his SQL Server Pro article on the subject.
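To give a flavor of what that automation might look like, here is a minimal sketch that polls each instance for its build and edition and writes them back to the repository. It assumes the SqlServer PowerShell module is installed; the server names and the dbo.ServerInventory table are illustrative, not a prescribed design.

```powershell
# A minimal sketch of automating repository updates.
# Server names and the dbo.ServerInventory table are hypothetical.
Import-Module SqlServer

$repoServer = 'REPO01'
$targets    = @('SQLPROD01', 'SQLPROD02', 'SQLTEST01')

foreach ($target in $targets) {
    # Ask each instance for its own version and edition.
    $info = Invoke-Sqlcmd -ServerInstance $target -Query `
        "SELECT SERVERPROPERTY('ProductVersion') AS Build,
                SERVERPROPERTY('Edition')        AS Edition;"

    # Write the result back to the central repository table.
    Invoke-Sqlcmd -ServerInstance $repoServer -Query `
        "UPDATE dbo.ServerInventory
            SET Build = '$($info.Build)', Edition = '$($info.Edition)',
                LastUpdated = GETDATE()
          WHERE ServerName = '$target';"
}
```

Schedule something like this nightly and the repository stops drifting out of date on its own.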

Use Source Control

Here’s a place where we can take a cue from developers. If you keep a repository of scripts (you do, right?), it needs to live in some kind of version control system. Here again, which one you use isn’t important (though if you insist on using something like VSS you may want to seek therapy), only that you actually use one. There are numerous advantages to using source control, but two stand out:

  1. You have a built-in backup whenever changes are made, as well as a way to retrieve an old version of something if need be. Say, for example, you suddenly find a bug in the version of your maintenance package for SQL Server 2000, and you need to fix it for that one last holdout. Without version control, you would have to manually script everything out.
  2. You have a built-in mechanism for distributing changes to other members of your team. All they need to do is update their checked-out copy, and the changes are pulled down immediately. No more copying files out of shared folders or the like.

If you’re undecided about which version control system to use (and you don’t have a corporately blessed one), I would suggest either Mercurial or Subversion. Both are highly rated, and I find them both easy to use and robust. Subversion in particular has a great set of documentation and tutorials.
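As a rough illustration of the day-to-day rhythm, here is what a Subversion workflow looks like from a PowerShell prompt (Mercurial is analogous with `hg pull`, `hg add`, and `hg commit`). The paths and the commit message are illustrative.

```powershell
# Day-to-day source control from the DBA's script folder.
# Path and file names are illustrative.
Set-Location 'C:\DBA\Scripts'

svn update                                  # pull down teammates' changes
svn add .\Maintenance\IndexMaint_v2.sql     # stage a new script
svn commit -m 'Add rewritten index maintenance script'
```

That `svn update` at the start of the day is the "push changes to the team" advantage in action: everyone's working copy converges on the same scripts.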

Use Central Management Server

Central Management Server, or CMS for short, is a great feature of SQL Server that I sadly don’t see a lot of people using. It lets you define groups of servers, which are stored in (of all things) a database on a central server. You can do things like execute statements across multiple servers, but honestly I find that the simple organization value alone is worthwhile. If you’re smart, you could even drive your repository off the CMS database, which is actually just a few tables in MSDB. Setting one up is easy and well documented. My one caution would be that you want to ensure your CMS server is highly available, as it clearly won’t do you any good if it’s not accessible.
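If you want to see for yourself that the CMS catalog is just a few objects in msdb, or drive a repository off it as suggested above, you can query the registered server views directly. This sketch assumes the SqlServer module; the server name 'CMS01' is illustrative, while the msdb view names are the real ones.

```powershell
# Pull the list of servers registered in a Central Management Server.
# 'CMS01' is an illustrative server name; the msdb views are real objects.
Import-Module SqlServer

Invoke-Sqlcmd -ServerInstance 'CMS01' -Database 'msdb' -Query "
    SELECT g.name        AS GroupName,
           s.server_name AS ServerName
    FROM   dbo.sysmanagement_shared_registered_servers AS s
    JOIN   dbo.sysmanagement_shared_server_groups      AS g
           ON s.server_group_id = g.server_group_id
    ORDER BY g.name, s.server_name;"
```

A query like this makes the CMS the single source of truth for "what servers do we have", which every other centralized tool can then reuse.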

Learn PowerShell

I think that PowerShell is far and away the best tool for writing centralized utilities. What do I mean by that phrase? Simply put, a centralized utility runs in one place, but connects remotely to other network resources to gather information about them, or to accomplish some task. PowerShell is excellent for this purpose because:

  1. It exposes WMI in ways that are far easier to understand than other languages like VBScript.
  2. It includes some very nice built-in remoting capabilities that help you execute commands remotely. Perhaps not quite as useful for a DBA as for a sysadmin, but gravy nonetheless.
  3. With a little help, PowerShell can have some very powerful parallel execution features that let you run things on multiple threads, thereby getting things done faster. Just imagine the difference between running a query against thirty servers one at a time, versus in batches of ten at once.
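The first point is easy to demonstrate: what takes dozens of lines of VBScript is a one-liner of WMI in PowerShell. The server name below is illustrative.

```powershell
# Free space on a remote server's local disks via WMI.
# 'SQLPROD01' is an illustrative server name.
Get-WmiObject -Class Win32_LogicalDisk -ComputerName 'SQLPROD01' -Filter 'DriveType = 3' |
    Select-Object DeviceID,
        @{ Name = 'FreeGB'; Expression = { [math]::Round($_.FreeSpace / 1GB, 1) } }
```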

Here’s a good example of how I used PowerShell in this manner. As a DBA, I wanted a morning status report on all my servers, showing me things like failed backups, databases approaching their size limit, or disk drives nearing capacity. Rather than rely on distributed alerts which, as wise DBAs have pointed out, can quickly overwhelm your inbox and give you multiple points of failure, I wanted one message every morning, so that I could see the state of things immediately after brewing that oh-so-important cup of coffee. So what’s a DBA to do? I created a PowerShell-based framework (based on Alan Renouf’s excellent vCheck) which connected to my CMS server, spun off a bunch of threads, and gathered all my information in one place from my various servers. Problem solved.
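The batching pattern behind that framework can be sketched with plain background jobs: take a server list (hard-coded here for brevity; in practice pulled from the CMS) and run a check in batches of ten at a time. This is a simplified sketch, not the actual framework; server names are illustrative.

```powershell
# Run a check against many servers in batches of background jobs.
# Server names are illustrative; in practice the list comes from the CMS.
Import-Module SqlServer

$servers   = @('SQLPROD01', 'SQLPROD02', 'SQLPROD03')  # ...and so on
$batchSize = 10

for ($i = 0; $i -lt $servers.Count; $i += $batchSize) {
    $batch = $servers[$i..([math]::Min($i + $batchSize, $servers.Count) - 1)]

    # Kick off one job per server in the batch.
    $jobs = foreach ($s in $batch) {
        Start-Job -ArgumentList $s -ScriptBlock {
            param($server)
            Invoke-Sqlcmd -ServerInstance $server -Query `
                'SELECT @@SERVERNAME AS Server, COUNT(*) AS DbCount FROM sys.databases;'
        }
    }

    # Collect this batch's results before starting the next.
    $jobs | Wait-Job | Receive-Job
    $jobs | Remove-Job
}
```

Ten servers at a time instead of one at a time is exactly the difference described above; aggregate the results and you have the makings of that single morning email.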

Build a Tool Server

One of the most frustrating things about getting a new computer (or having an old one rebuilt) is installing and configuring every piece of software you need. The good news is that this one is easy to solve: get a server (it doesn’t have to be a physical box; in fact, I’d recommend a VM for failure protection), install everything you need there, and then use it as much as possible for daily work. Yes, this means some overhead of connecting in, but in today’s world of remote desktop and application functionality, I’d argue that overhead is minimal. In addition, you no longer lose the ability to work if your laptop goes kablooey. As a bonus, you can usually avoid the headaches around things like firewall rules or cross-domain trust problems.

Use Policy Based Management

Policy-Based Management, or PBM for short, is a framework developed by Microsoft that lets you verify compliance with rules set up in a central repository. It’s a very powerful tool, though in my experience it takes a little getting used to. You can do simple checks for compliance as well as outright enforcement of rules (i.e. preventing a change from occurring), the latter of which is implemented via DDL triggers. PBM ships with a fairly comprehensive set of pre-built rules covering general best practices, but you can and should build your own.
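Once a policy is defined (or exported to XML from SSMS), you can evaluate it from PowerShell with the Invoke-PolicyEvaluation cmdlet that ships with the SQL Server module, which makes it easy to check a whole estate in a loop. The policy file path and server name below are illustrative.

```powershell
# Evaluate an exported policy against a server without opening SSMS.
# File path and server name are illustrative.
Import-Module SqlServer

Invoke-PolicyEvaluation `
    -Policy 'C:\DBA\Policies\DatabaseAutoClose.xml' `
    -TargetServerName 'SQLPROD01' `
    -AdHocPolicyEvaluationMode 'Check'
```

Point the same call at every server in your CMS list and you have centralized compliance checking with almost no extra work.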

Use SQL Agent Master Servers

A very powerful feature of SQL Agent since SQL Server 2008 is the concept of Master and Target servers. This functionality allows you to define and configure SQL Agent jobs once on a master server, which then pushes them down to as many targets as you define. Think of it this way: you probably have one or more SQL Agent jobs that need to run on all your servers on the same schedules. Perhaps you have monitoring jobs for things like TempDB space usage, and you certainly have common maintenance tasks that need to be executed, such as backups, DBCCs, and index maintenance. Rather than set them up individually on each server, you can define them once on the Master server, then push them down to all your targets. This way, when (not if) you need to make a change, such as adding a job step or tuning a parameter, you make it once rather than on each and every server. This works very well with several of the robust maintenance frameworks out there, such as Ola Hallengren’s award-winning set of scripts.
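The plumbing lives in a pair of msdb stored procedures: enlist the target under the master, then add the target to a multi-server job. The server and job names below are illustrative, and this is a bare sketch; check the documentation for the prerequisites (such as MSX account configuration) before enlisting production servers.

```powershell
# Enlist a target (TSX) under a master (MSX) server, then point an
# existing multi-server job at it. Server and job names are illustrative.
Import-Module SqlServer

# Run on the target: enlist it under the master server.
Invoke-Sqlcmd -ServerInstance 'SQLPROD01' -Query `
    "EXEC msdb.dbo.sp_msx_enlist @msx_server_name = N'MSX01';"

# Run on the master: add the new target to a multi-server job.
Invoke-Sqlcmd -ServerInstance 'MSX01' -Query `
    "EXEC msdb.dbo.sp_add_jobserver
         @job_name = N'Nightly Full Backups',
         @server_name = N'SQLPROD01';"
```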

Concluding Thoughts

While centralizing our work may take some effort up front to get all the supporting pieces in place, the payoff is almost immediate. Knowing where everything is, being able to update it once, and having something approaching the “keep it in one place” model increases our efficiency and gives us peace of mind. In addition, it lays another block in the foundation of preparing for our next (and critical) step in the path to becoming an enterprise DBA: automation. After all, automating is far easier when you can set it up once, rather than multiple times in multiple places. This step, combined with the next, will probably have the greatest effect on your daily workload, in that you’ll start to see your time free up since you have less busywork to do. Stay tuned and stay focused.

Further Reading

Policy-based Management

Central Management Server


Source Control for DBAs