
PowerShell SMO: Just Writing Things Once

Sometimes, you can tire of writing the same PowerShell code yet again. After this happened to Phil whilst keying in an SMO pipeline to access databases, it occurred to him that he should write that pipeline just once: one reusable pipeline for accessing databases in PowerShell.

In this article, I’ll be showing you how to perform a variety of jobs on databases in a number of servers via PowerShell. Some involve simple things like getting database properties, but others involve executing SQL in all databases, or getting entire database build scripts. Generally, they create a single report for all your servers and databases that can then be imported into Excel or Word. You should be able to build on these examples to create a variety of other tasks. These sample jobs and reports include:

  • Checking on database sizes
  • Checking on the file groups and their files
  • Checking on the log files
  • Checking on transaction log autogrowth
  • Checking the number and size of Virtual Log Files (VLFs)
  • Finding out when each database was last backed up (if ever!)
  • Examining table sizes
  • Creating build scripts
  • Putting databases into source control
  • Using SQL to find all the tables in all your databases that don’t have any indexes
  • Using SQL to find all the tables without primary keys

DRY (Don’t repeat yourself)

When a database developer uses PowerShell, it is often to perform a routine operation via SMO on a number of databases on several servers or instances. This applies to a whole range of tasks. Just speaking for myself, I’ll be creating scripts for individual tables and routines, and generating entire build scripts. I’ll probably also be doing some sort of documentation, or cataloguing, of what there is in each database, or maybe of the individual database settings. I’ll also be setting up a test run, or checking a status with a DMV. I might be doing checks to make sure that indexes aren’t duplicated, or that every table has a primary key. I might just be searching for a string, or checking to see which users have rights of access. All these tasks require a PowerShell SMO script that accesses all or some of the databases on all or some of my instances.

After a while, the scripts will seem horribly familiar. Each will probably consist of a pipeline that takes a list of servers, often the registered servers in my case, and finds all the non-system databases. It will either pick databases from a list (a whitelist), find those with a pattern in the name, or do every database that is not in a list (a blacklist). I might want to do just the databases with a particular prefix or suffix. Then it will do things with each database. All that really changes is the action you want to perform on each database.

It always irritates me to have the same code in more than one place. You’ll really only want to write this once, and then reuse it. The only problem is how.

If you are doing a long background job serially in PowerShell, it makes sense to do this in a pipeline, rather than creating potentially huge collections of large objects, because this is kinder to the memory management of .NET. Each server object is passed down the pipeline, each to have its collection of databases passed on, one at a time to be filtered and worked on and tidied out of the way.

All you need to do is to inject the action you specify into the right part of the pipeline. To demonstrate how to do this with anonymous functions that we assign to delegates, we’ll step back slightly from the problem of accessing servers and their databases via SMO, and we’ll forget any fancy stuff like functions. We’ll demonstrate a pipeline rather differently by creating a PowerShell script to generate the lyrics for ‘Ten Green Bottles’. Then, we’ll change it slightly without altering the pipeline itself in order to generate ‘There were ten in the bed’. Believe me, this will help you understand the meaty stuff later on.

Ten green bottles

We’ll introduce a filter that decides the number, less than or equal to ten, at which to start the great poem. Then we’ll make the whole pipeline do something else without changing the code in the pipeline itself.
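As a sketch of the idea (the variable names here are illustrative, not necessarily those of the original script), the filter and the verse-maker are just scriptblocks held in variables:

```powershell
# A sketch of a pipeline whose behaviour is injected via scriptblocks
# held in variables. $StartAt, $Filter and $Verse are illustrative names.
$StartAt = 10
$Filter = { param($_) $_ -le $StartAt }   # decides where the poem starts
$Verse  = { param($_)                     # what to produce for each number
    "$_ green bottles hanging on the wall,`r`n" +
    "$_ green bottles hanging on the wall,`r`n" +
    "And if one green bottle should accidentally fall,`r`n" +
    "There'd be $($_ - 1) green bottles hanging on the wall.`r`n"
}
# The pipeline itself never changes
10..1 | Where-Object { & $Filter $_ } | ForEach-Object { & $Verse $_ }
```

Swap `$Verse` for a scriptblock that produces a ‘There were ten in the bed’ verse, and the pipeline line at the bottom stays exactly as it is.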

What we’re actually doing is using delegates. We are using those scriptblocks that we assign to variables as anonymous functions. We’re injecting functionality into the pipeline. We haven’t changed the code of the pipeline itself.

PowerShell can be adapted very well to the sort of programming that works by injecting functionality into a pre-existing structure. It is, after all, a dynamic language. In our case, this structure might be a pipeline that selects all your databases, one after another, and allows you to do what you need to each database.

This does most routine tasks, but we need to add a few things, just as we had to with the poem. $Serverfilter and $DatabaseFilter are variables, each containing a scriptblock. Each scriptblock has a parameter, $_, meaning the current object in the pipeline. This parameter isn’t strictly necessary, since the scriptblock could use the automatic $_ variable directly, but declaring it makes things easier for debugging, or if you have a complex script. This is equivalent to a C# delegate, but without the rigorous type-checking of the parameter you pass.

With the poems, the filter didn’t change what happened: it merely decided how many bottles we started with, or how many children were in the bed. Here, having the filter code in a variable allows us to choose, at the time we run the script, which databases to operate on. We might want to specify a ‘white’ list of databases that we want to operate on, or we might want to operate on all but a ‘black’ list of databases. We might want to select only those that conform to a wildcard or regex string.

In our case, we’ll just insert a whitelist or a blacklist if one has been specified. Logically, you can’t specify a whitelist if you have a blacklist: it is one or the other, or neither.
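A sketch of how such a filter might be built, assuming the whitelist and blacklist arrive as string arrays (possibly empty; the variable names are my own):

```powershell
$whitelist = @()                 # e.g. @('AdventureWorks','Northwind')
$blacklist = @()                 # e.g. @('model','ReportServer')
# Pass on a database only if it satisfies the whitelist (if given)
# or avoids the blacklist (if given); otherwise take everything.
$DatabaseFilter = { param($_)
    if     ($whitelist.Count -gt 0) { $whitelist -contains $_.Name }
    elseif ($blacklist.Count -gt 0) { $blacklist -notcontains $_.Name }
    else   { $true }
}
```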

If you then invoke this in the process block of the pipeline, it will filter out all the unwanted database objects as they appear, and only pass on the ones you want.
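Something along these lines, as a sketch of that process block ($_ here is the current SMO server object coming down the pipeline):

```powershell
process {
    # for each server, emit only the non-system databases
    # that the injected filter approves of
    $_.Databases |
        Where-Object { -not $_.IsSystemObject } |
        Where-Object { & $DatabaseFilter $_ }
}
```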

You may have already categorised your databases with a special prefix such as ‘Test’. In this case, you can use a different $Databasefilter to get only these:
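For example, a sketch that tests the SMO database’s Name property against a wildcard:

```powershell
# only databases whose names begin with 'Test'
$Databasefilter = { param($_) $_.Name -like 'Test*' }
```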

Or, if you want to avoid them:
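The negated form is just as short:

```powershell
# every database except those whose names begin with 'Test'
$Databasefilter = { param($_) $_.Name -notlike 'Test*' }
```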

You might want to just do part of a list of databases, specifying the start and end database, just like the poems. Who knows? You can also use blacklists or whitelists for the list of servers in your care, or your registered servers, if you prefer.

Why do it this way? Because you do not need to alter the code of the pipeline: you are ‘injecting’ your filter into it. You have your code in just one place, without duplication. With a delegate, we can specify the functionality, and therefore what actually happens, at script execution-time. You can have a different filter depending on the specific parameters. To do this with conventional conditional logic would end up as a rat’s nest of ‘if’ statements. One can, if necessary, be even cleverer and add as many tasks as you wish to perform once the database is selected, but we won’t overdo things at this point.

So, we’ll create the pipeline as an ‘advanced’ function, so that we can provide help and allow ‘debug-printing’ and ‘voluble’ messages for debugging via Write-Debug and Write-Verbose. This means that even your anonymous functions will be able to use these features, as they are executed within the same overall function. We’ll save this function in the same directory as all our code that actually does things.
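The function is called foreach-DatabaseInServers (its name also appears in the troubleshooting notes at the end). The skeleton below is a reconstruction of the idea rather than the article’s verbatim code, so treat the parameter names as assumptions:

```powershell
# Reconstructed skeleton, not verbatim code. Requires the SMO
# assemblies (see the notes at the end of the article).
[void][Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO')

function foreach-DatabaseInServers {
    [CmdletBinding()]
    param(
        [string[]]   $Servers,                               # server\instance names
        [scriptblock]$DatabaseFilter = { param($_) $true },  # injected filter
        [scriptblock]$JobToDo        = { param($_) $_.Name } # injected action
    )
    process {
        foreach ($ServerName in $Servers) {
            Write-Verbose "Connecting to $ServerName"
            $server = New-Object Microsoft.SqlServer.Management.Smo.Server $ServerName
            $server.Databases |
                Where-Object { -not $_.IsSystemObject } |
                Where-Object { & $DatabaseFilter $_ } |
                ForEach-Object { Write-Debug "Processing $($_.Name)"; & $JobToDo $_ }
        }
    }
}
```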

So, without actually doing anything harmful to any database, let’s test it out.
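A harmless smoke-test might simply list a name and size for every non-system database; the parameter names here are assumptions about the function’s signature:

```powershell
# list the databases, changing nothing
foreach-DatabaseInServers -Servers @('MyServer\MyInstance') -Verbose `
    -JobToDo { param($_) "$($_.Parent.Name) $($_.Name) $($_.Size) MB" }
```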

Once you’ve got this working to your satisfaction, and you’ve added and tested any extra filters or other functionality you need, you can start doing something useful. In the following examples, I’m just checking AdventureWorks (I’ve put in a filter to select only those databases starting with ‘ADV’, but your requirements will be different!).
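The job that produces the space report might look something like this sketch. The SMO Database properties DataSpaceUsage, SpaceAvailable and IndexSpaceUsage are real; the surrounding arrangement is assumed:

```powershell
$DataSources = @('MyServer1','MyServer2')   # your servers here
foreach-DatabaseInServers -Servers $DataSources `
    -DatabaseFilter { param($_) $_.Name -like 'ADV*' } `
    -JobToDo { param($_)
        New-Object PSObject -Property @{
            'Server'          = $_.Parent.Name
            'Name'            = $_.Name
            'DataSpaceUsage'  = $_.DataSpaceUsage    # KB
            'SpaceAvailable'  = $_.SpaceAvailable    # KB
            'IndexSpaceUsage' = $_.IndexSpaceUsage   # KB
        }
    } |
    Format-Table Server, Name, DataSpaceUsage, SpaceAvailable, IndexSpaceUsage -AutoSize
```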

This gives you a report of the space available to each ‘AdventureWorks’ database (or whatever you specify) on the servers you specify, broken down by data and index usage.

Server    Name                     DataSpaceUsage  SpaceAvailable  IndexSpaceUsage
\sql2005  AdventureWorks           84160           15280           78008
\sql2005  AdventureworksClone      66824           43320           70696
\sql2005  AdventureWorksDW         43632           15992           23096
\sql2008  AdventureWorks           104744          1040            78992
\sql2008  AdventureWorksCopy       86936           4992            77464
\sql2012  AdventureWorksDW         43744           16200           22536
\sql2012  AdventureWorksDW2008R2   45544           16168           26944
\sql2014  AdventureWorksLT         3112            168             1632
\sql2014  AdventureWorksLT2008R2   3192            16488           1632
\sql2005  AdventureWorks           86032           2468856         80656
\sql2005  AdventureWorksDW         43616           0               23088

Checking on the file groups and their files

Here we drill down into the detail to find out (for all our databases on all our registered servers this time) how large each MDF file is in each filegroup for each database. We see what group they are in, whether they are allowed to auto-grow, and if so the amount of growth, the type of growth, the size of the file, the amount of space used, and so on. This should help you to spot those MDF files where things are getting tight.
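As a sketch, the job walks the SMO FileGroups and Files collections, emitting one row per data file (the output property names are my own):

```powershell
foreach-DatabaseInServers -Servers $DataSources -JobToDo { param($db)
    foreach ($fg in $db.FileGroups) {
        foreach ($file in $fg.Files) {
            New-Object PSObject -Property @{
                'Server'     = $db.Parent.Name
                'Database'   = $db.Name
                'FileGroup'  = $fg.Name
                'File'       = $file.FileName       # physical path
                'SizeKB'     = $file.Size
                'UsedKB'     = $file.UsedSpace
                'GrowthType' = $file.GrowthType     # KB, Percent or None
                'Growth'     = $file.Growth
            }
        }
    }
}
```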

Checking on the log files


We can also check on the log files of all our databases for all our registered servers (or whatever you wish, of course). We see whether they are allowed to auto-grow, and if so the amount of growth, the type of growth, the size of the file, the amount of space used, and so on. This should alert you to such problems as long-running transactions.
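A sketch of the log-file version, using the SMO LogFiles collection in place of FileGroups:

```powershell
foreach-DatabaseInServers -Servers $DataSources -JobToDo { param($db)
    foreach ($log in $db.LogFiles) {
        New-Object PSObject -Property @{
            'Server'     = $db.Parent.Name
            'Database'   = $db.Name
            'LogFile'    = $log.FileName
            'SizeKB'     = $log.Size
            'UsedKB'     = $log.UsedSpace
            'GrowthType' = $log.GrowthType
            'Growth'     = $log.Growth
        }
    }
}
```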


We can also, very simply, look at the Virtual Log Files for all of the databases’ transaction logs, by using DBCC LOGINFO directly.
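The SMO Database object’s ExecuteWithResults method lets us run the DBCC from the same pipeline; DBCC LOGINFO returns one row per VLF, so counting the rows gives the VLF count (the summary columns below are my own choice):

```powershell
foreach-DatabaseInServers -Servers $DataSources -JobToDo { param($db)
    $vlfs = $db.ExecuteWithResults('DBCC LOGINFO').Tables[0]
    New-Object PSObject -Property @{
        'Server'   = $db.Parent.Name
        'Database' = $db.Name
        'VLFCount' = $vlfs.Rows.Count
        # FileSize is reported in bytes
        'MaxVLFKB' = ($vlfs.Rows | Measure-Object FileSize -Maximum).Maximum / 1KB
    }
}
```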

Checking when your databases were last backed up

With a few alterations, it becomes a check on when they were last backed up. After all, the function is returning the databases as SMO objects, and so you can pick anything from SMO: Database Properties or even (I’ve just shown you how!) from SMO: Server Properties.
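The relevant SMO Database properties are RecoveryModel, LastBackupDate and LastLogBackupDate; a sketch of the job:

```powershell
foreach-DatabaseInServers -Servers $DataSources -JobToDo { param($db)
    New-Object PSObject -Property @{
        'Server'            = $db.Parent.Name
        'Name'              = $db.Name
        'Recovery'          = $db.RecoveryModel
        'LastBackupDate'    = $db.LastBackupDate      # 01/01/0001 means never
        'LastLogBackupDate' = $db.LastLogBackupDate
    }
}
```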

Server    Name                     Recovery  LastBackupDate       LastLogBackupDate
\sql2005  AdventureWorks           Full      15/11/2012 14:26:00  01/01/0001 00:00:00
\sql2005  AdventureworksClone      Full      01/01/0001 00:00:00  01/01/0001 00:00:00
\sql2005  AdventureWorksDW         Simple    01/01/0001 00:00:00  01/01/0001 00:00:00
\sql2008  AdventureWorks           Simple    01/12/2012 09:14:59  01/01/0001 00:00:00
\sql2008  AdventureWorksCopy       Full      01/12/2012 09:15:02  01/01/0001 00:00:00
\sql2008  AdventureWorksDW         Simple    01/12/2012 09:15:04  01/01/0001 00:00:00
\sql2008  AdventureWorksDW2008R2   Simple    01/12/2012 09:15:06  01/01/0001 00:00:00
\sql2008  AdventureWorksLT         Simple    01/12/2012 09:15:06  01/01/0001 00:00:00
\sql2008  AdventureWorksLT2008R2   Simple    01/12/2012 09:15:07  01/01/0001 00:00:00
\sql2005  AdventureWorks           Full      20/09/2012 13:12:25  01/01/0001 00:00:00
\sql2005  AdventureWorksDW         Simple    01/01/0001 00:00:00  01/01/0001 00:00:00

Well, that gets a lot of standard database reports out of the way, and with Select-Object you can do a lot with calculated fields.

How about generating a build script for all the databases of your registered servers, saved to a directory corresponding to each server?

In this case, we’ll use $JobToDo, a parameter that comes in handy if you like using such things as Write-Debug and Write-Verbose, or if we want to do entirely different things depending on some condition. This is because the scriptblock is executed in the context of the function. We also need a variable to specify where to save the files. In this case, I want a blacklist, because I don’t want to script things like AdventureWorks. (In this example, I’ve taken out a lot of specific databases, obviously.)
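One way of doing the scripting itself is with the SMO Transfer object; this is a sketch, and the save-location variable and blacklist contents are assumptions:

```powershell
$ScriptDirectory = "$env:USERPROFILE\DatabaseScripts"   # assumed save location
# Transfer lives in the SmoExtended assembly in recent SMO versions
foreach-DatabaseInServers -Servers $DataSources `
    -DatabaseFilter { param($_) @('AdventureWorks') -notcontains $_.Name } `
    -JobToDo { param($db)
        # one subdirectory per server (instance names contain '\')
        $dir = Join-Path $ScriptDirectory ($db.Parent.Name -replace '\\', '-')
        if (-not (Test-Path $dir)) { New-Item $dir -ItemType Directory | Out-Null }
        $transfer = New-Object Microsoft.SqlServer.Management.Smo.Transfer $db
        $transfer.Options.ScriptBatchTerminator = $true   # emit GO between batches
        $transfer.Options.ToFileOnly = $true
        $transfer.Options.FileName = Join-Path $dir "$($db.Name)_Build.sql"
        $transfer.ScriptTransfer() | Out-Null
    }
```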

Running a SQL Script on rather a lot of databases

That was pretty easy. What about executing a SQL script in a number of databases on a number of servers? Right away, sir.

Generating a single report from running the query

Let’s start with a simple example that just gives you a report of all the tables in all your databases that don’t have any indexes. (Here, we’ve just restricted it to AdventureWorks.)
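A sketch of the idea: run the same query via ExecuteWithResults in every selected database and pool the rows into one report (the query text is my own rendering of “tables with no index at all”):

```powershell
$NoIndexSQL = @"
SELECT DB_NAME() AS TheDatabase,
       OBJECT_SCHEMA_NAME(t.object_id) AS TheSchema,
       t.name AS TheTable
FROM sys.tables t
WHERE NOT EXISTS (SELECT * FROM sys.indexes i
                  WHERE i.object_id = t.object_id AND i.type > 0)
"@
foreach-DatabaseInServers -Servers $DataSources `
    -DatabaseFilter { param($_) $_.Name -like 'Adventure*' } `
    -JobToDo { param($db)
        $db.ExecuteWithResults($NoIndexSQL).Tables[0] |
            Select-Object TheDatabase, TheSchema, TheTable
    } | Format-Table -AutoSize
```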

Generating a report for each database

Now we’ll do a slightly more complicated routine. We’ll write some files, one for each database, listing in CSV all the tables in each database that are heaps without primary keys. You’ll see that it would be dead easy to use this for a variety of reports, just by changing the SQL (and the SQL title). I use this method for testing code on a variety of different databases, servers and collations.
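A sketch, using OBJECTPROPERTY to find heaps without primary keys and Export-Csv to write a file per database (the report-directory variable and file-naming scheme are assumptions):

```powershell
$HeapSQL = @"
SELECT OBJECT_SCHEMA_NAME(t.object_id) AS TheSchema, t.name AS TheTable
FROM sys.tables t
WHERE OBJECTPROPERTY(t.object_id, 'TableHasPrimaryKey') = 0
  AND OBJECTPROPERTY(t.object_id, 'TableHasClustIndex') = 0
"@
$ReportDirectory = "$env:USERPROFILE\DatabaseReports"   # assumed location
if (-not (Test-Path $ReportDirectory)) {
    New-Item $ReportDirectory -ItemType Directory | Out-Null
}
foreach-DatabaseInServers -Servers $DataSources -JobToDo { param($db)
    $file = Join-Path $ReportDirectory `
        ("$($db.Parent.Name -replace '\\','-')_$($db.Name)_Heaps.csv")
    $db.ExecuteWithResults($HeapSQL).Tables[0] |
        Export-Csv -Path $file -NoTypeInformation
}
```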

Popping plenty of databases into source control

Fine. Let’s do something slightly trickier. Here is a script that generates the source, one file per object, of the databases of your registered servers. Actually, as it stands, I’ve only included those databases that start with the word ‘Test’. I suspect you’ll want something slightly different. I’ve put each server in its own directory, within which each database has its own subdirectory. Each object type (e.g. TABLE) is in its own directory within this. From here, it is a mere bagatelle to pop these into source control, but we’re not scripting that for you, since I don’t know which Source Control Manager (SCM) you’re using.
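A sketch of the layout, scripting tables with the SMO Scripter (the source-directory variable is an assumption, and you would repeat the inner loop for Views, StoredProcedures and so on):

```powershell
# layout: server\database\TYPE\schema.object.sql
$SourceDirectory = "$env:USERPROFILE\DatabaseSource"    # assumed location
foreach-DatabaseInServers -Servers $DataSources `
    -DatabaseFilter { param($_) $_.Name -like 'Test*' } `
    -JobToDo { param($db)
        $scripter = New-Object Microsoft.SqlServer.Management.Smo.Scripter $db.Parent
        $scripter.Options.ToFileOnly = $true
        foreach ($table in ($db.Tables | Where-Object { -not $_.IsSystemObject })) {
            $dir = Join-Path $SourceDirectory `
                (Join-Path ($db.Parent.Name -replace '\\','-') `
                           (Join-Path $db.Name 'TABLE'))
            if (-not (Test-Path $dir)) {
                New-Item $dir -ItemType Directory -Force | Out-Null
            }
            $scripter.Options.FileName = Join-Path $dir "$($table.Schema).$($table.Name).sql"
            $scripter.Script(@($table))   # one file per object
        }
    }
```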

So there you have it: a simple pipeline for serving up whatever databases you specify, into which you can inject whatever functionality you need. Your requirements might be different. Your servers could be outside the Windows domain and therefore accessible only by SQL Server credentials. You may need to do slightly different things to different databases. You may need to specify a process to be done at the server level as well as the database level. In which case, hack it about and come up with something that fits your requirements. That’s the joy of a scripting language like PowerShell.

In case things don’t work

  • Unless you have servers called Dave, Dee, Dosy, Beaky, Mick and Tich, you’ll need to change a list like …
    $DataSources=@('Dave','Dee','Dosy','Beaky','Mick','Tich') # server name and instance
    into a list of your servers.
  • If your servers are outside your domain, then you will need to change the foreach-DatabaseInServers code suitably.
  • If you haven’t got the latest SQLPS module, get it from Chad Miller’s ‘Making A SQLPS Module’ blog.
  • Make sure that you have the SMO library installed!
  • In some cases I’ve just selected AdventureWorks, just for the demo. You won’t want that when you come to use it.
  • If you have only one item in the servers, blacklist or whitelist, still use the array syntax, e.g. @('Adventureworks').
