The PoSh DBA: Grown-Up PowerShell Functions

Laerte goes step-by-step through the process of tidying up and making more reusable an untidy collection of PowerShell routines, showing how pipelines and advanced functions can make PowerShell more effective in helping to automate many of the working DBA's chores.

There comes a time, after you've got used to using PowerShell to automate your DBA chores, that you look at all those PowerShell functions you've written and feel slightly dissatisfied. You realize that, although everything works, you could be using PowerShell in a way that is more easily enhanced, reused and maintained. With some fairly simple techniques, our PowerShell functions can become more reusable. They can be made to behave much more like the built-in cmdlets that are provided with PowerShell. You will, for example, want your functions to participate in pipelines, so that you can use filters, sort the order, group objects, write out the data and take advantage of other useful PowerShell facilities. You'd probably also benefit from other features of cmdlets, such as standard parameters, debug, -WhatIf, and command-line help. This article explains how these features can help with scripting.

Reusable Scripts

If you take a look at my old scripts on Simple-Talk, you will see some functions that I wrote on the assumption that I needed to output an object populated with all the information. Something like this:
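The original listing isn't reproduced here, but its shape was roughly this: one function that loads SMO, loops over a list of server names, and emits a fully populated custom object for each one (the property names are illustrative, not the exact original listing):

```powershell
# A sketch of the "do everything in one function" approach.
function Get-MSSQLServerInfo {
    param ([string[]] $Servers)

    # Load the SQL Server Management Objects (SMO) assembly
    [void][System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.Smo')

    foreach ($ServerName in $Servers) {
        $Server = New-Object Microsoft.SqlServer.Management.Smo.Server $ServerName
        # Build one custom object per server, all in one place
        New-Object PSObject -Property @{
            Name    = $Server.Name
            Edition = $Server.Information.Edition
            Version = $Server.Information.Version
        }
    }
}
```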

This code runs perfectly. However, it isn't a reusable function. I need to send the function a list of servers as a parameter but, if I need to get any information about my servers from within another function, I cannot use this function to do it; I would need to write it all again inside the other function.

What I need is a series of filters, the first of which outputs a stream of server objects. The next one would add the server information, the next would filter just the information I needed, and the next, maybe, would display it. I need to think in terms of pipelines.

How does a Pipeline Work?

I suspect that you are already familiar with the phrases "I am piping…" or "pipe to". Other shell languages use this 'pipe' concept, but PowerShell is different in that you are piping objects, not their text representations. The pipe is more like an assembly line, where a stream of objects is passed from the start to the end of the line, and objects may be changed, filtered out, or added along the way. The output of one filter becomes the input of the next. The pipe is represented by the conventional 'pipe' character, "|".

In this line …
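A likely form of the missing one-liner, reconstructed from the description that follows (the folder name comes from the text):

```powershell
# List all PowerShell script files in the c:\posh folder
Get-ChildItem -Path C:\posh -Filter *.ps1
```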

… I am looking for files with the PowerShell (.ps1) file type in the c:\posh folder, and my output is a directory listing like this:

Now, let's send the output to a text file:
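The missing listing was presumably the same command piped to Out-File; the output file path here is illustrative:

```powershell
# Send the FileInfo objects down the pipeline to Out-File
Get-ChildItem -Path C:\posh -Filter *.ps1 | Out-File -FilePath C:\posh\Files.txt
```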

PowerShell uses the pipeline to send a stream of objects, thereby encapsulating all the information and methods within each object in an easily accessible way. In this case, the Get-ChildItem cmdlet outputs System.IO.FileInfo objects with all their information (properties, methods and so on), and the pipeline sends these packaged objects to Out-File. The Out-File cmdlet can, for example, access the date and length properties without having to do any parsing of a text stream. It can determine the properties of the FileInfo object and access them directly.

This would suggest that, if I want to write a function that can be used in a pipeline, I would just need to populate an object. However, real life isn't that simple. I would need to understand how pipes work in order to write effective PowerShell scripts.

It was then that I was reminded of something I'd read on Microsoft TechNet:

“The question, however, is whether you’d want to write PowerShell scripts without using pipelines. You can order a banana split and ask them to hold the ice cream; that’s fine, but at that point you don’t really have a banana split, do you? The same is true of the PowerShell pipeline: you can write scripts without a pipeline. But, at that point, do you really have a PowerShell script? “

I needed to clear my head by doing an experiment. Let's use this example to illustrate what I mean:

… and run …
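The original listing is not shown; a minimal stand-in with the same shape is three plain functions, each written as a simple function body (no Process block), piped together:

```powershell
# With no begin/process/end blocks, a function's whole body runs as its
# End block, so each function sees its entire input at once, via $input.
function One   { $a = @($input); "One received: $a" }
function Two   { $a = @($input); "Two received: $a" }
function Three { $a = @($input); "Three received: $a" }

1, 2, 3 | One | Two | Three
# → Three received: Two received: One received: 1 2 3
```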

We've created three functions and piped an array of integers through them. What were you expecting? I'd assumed that the process waited until the first function had populated the entire object and then sent it to the next function. I thought I'd see something like this…

To my surprise, the output really was…

Doh! I'd forgotten that these objects are passed as an enumeration, and that I should have iterated over the $input variable with a foreach loop.
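Applying that fix to the stand-in functions, each one now iterates over $input and emits one value per input element:

```powershell
# Iterate the $input enumeration explicitly, emitting one item per element
function One   { foreach ($i in $input) { "One received: $i" } }
function Two   { foreach ($i in $input) { "Two received: $i" } }
function Three { foreach ($i in $input) { "Three received: $i" } }

1, 2, 3 | One | Two | Three
```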

and the output is this…

I could do a lot better than this, since I'm forcing PowerShell to batch up the input to every function until it is all processed and then pass it on as an enumeration. In fact, a function or advanced function can be written to accept an input element, process it, and pass the output down the pipeline before the next element in the sequence is processed. This gives you better performance and saves memory. I should have written code to handle the input stream this way. To do so, we'll need some other language constructs. These are:

  • Begin block: the code inside this block is executed first, once, before any input is processed.
  • Process block: the code inside this block executes once for each piped value.
  • End block: the code inside this block executes once, after all the input has been processed.

The syntax is something like this:
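A skeleton of the three blocks (Verb-Noun stands in for your function name):

```powershell
function Verb-Noun {
    begin {
        # Runs once, before the first piped value arrives
    }
    process {
        # Runs once per piped value; the current value is $_
    }
    end {
        # Runs once, after the last value has been processed
    }
}
```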

So, first, let's see what happens when our example is rewritten using just the Process block:
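Rewriting the stand-in functions with only a Process block, each function now handles one value at a time via $_:

```powershell
# The process block runs once per piped value; $_ is the current value
function One   { process { "One received: $_" } }
function Two   { process { "Two received: $_" } }
function Three { process { "Three received: $_" } }

1, 2, 3 | One | Two | Three
```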

And the output is:

Each value in the sequence is being streamed down the pipeline, one at a time.

Now we'll modify the functions to use all the blocks that I've listed, just to show you the sequence of events:
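A two-function sketch that makes the sequence visible with Write-Host while passing the values through: all Begin blocks run first (in pipeline order), then the Process blocks interleave once per value, and the End blocks run last.

```powershell
function One {
    begin   { Write-Host 'One: begin' }
    process { Write-Host "One: processing $_"; $_ }
    end     { Write-Host 'One: end' }
}
function Two {
    begin   { Write-Host 'Two: begin' }
    process { Write-Host "Two: processing $_"; $_ }
    end     { Write-Host 'Two: end' }
}

1, 2 | One | Two
```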

And the output is:

At this point I have discovered how I can write functions that use pipelines. I can put this to use immediately in my function, Get-MSSQLServerInfo, which gets information about servers. But, as I've already said, my function is still not reusable. How can I change this?

A function should do one thing well.

We can see from our code that there is functionality that can be generalized for a lot of other operations that I will need to perform in SQL Server. The task of making a connection is one of those. Why not write a function to do this?
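A sketch of such a connection function, assuming the SMO assembly is already loaded (the parameter layout follows what the article describes next):

```powershell
function Get-MSSQLServerConnection {
    param (
        [Parameter(Mandatory = $true, Position = 0)]
        [string] $SQLServer
    )
    # Return a live SMO Server object for the given instance name
    $Connection = New-Object Microsoft.SqlServer.Management.Smo.Server $SQLServer
    $Connection
}
```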

Then we could change the code in our Get-MSSQLServerInfo function to use our new Get-MSSQLServerConnection :
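A sketch of the reworked function; the property names are illustrative:

```powershell
function Get-MSSQLServerInfo {
    param (
        [Parameter(Mandatory = $true, Position = 0)]
        [string] $SQLServer
    )
    # Delegate the connection to the new, reusable function
    $Server = Get-MSSQLServerConnection $SQLServer
    New-Object PSObject -Property @{
        Name    = $Server.Name
        Edition = $Server.Information.Edition
        Version = $Server.Information.Version
    }
}
```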

As you can see, I am setting the parameter $SQLServer as Mandatory (required) and at Position 0, so it can also be passed as the first positional argument. (For more information about parameters, type help about_Parameters at the PowerShell prompt.)

Now I have a function that makes a connection to a server, and it can be used in all other functions that require a connection to SQL Server.

I can, of course, get information about several servers:
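For example, with a foreach loop (the server names here are placeholders):

```powershell
foreach ($ServerName in 'ServerA', 'ServerB', 'ServerC') {
    Get-MSSQLServerInfo $ServerName
}
```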

But why use foreach if I can receive the instance or server name from the pipeline? You'll probably be thinking that you just need to use ValueFromPipeline in the parameter options. (For more information, type help about_Pipelines.)

Let's see if it works. Type the same server name three times and pipe it to the function (in my case the name is 7-PC):
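The parameter gains ValueFromPipeline, and the call becomes a pipeline (a sketch; the function body is otherwise unchanged):

```powershell
function Get-MSSQLServerInfo {
    param (
        [Parameter(Mandatory = $true, ValueFromPipeline = $true, Position = 0)]
        [string] $SQLServer
    )
    $Server = Get-MSSQLServerConnection $SQLServer
    New-Object PSObject -Property @{
        Name    = $Server.Name
        Edition = $Server.Information.Edition
        Version = $Server.Information.Version
    }
}

'7-PC', '7-PC', '7-PC' | Get-MSSQLServerInfo
```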

Are you expecting three rows with the same information? Wrong. The output is:

Just one line. Why? Remember the Process block? That is what is missing. So:
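With the body moved into a Process block, the function runs once per piped name (a sketch, as before):

```powershell
function Get-MSSQLServerInfo {
    param (
        [Parameter(Mandatory = $true, ValueFromPipeline = $true, Position = 0)]
        [string] $SQLServer
    )
    process {
        # This block now runs once for each server name piped in
        $Server = Get-MSSQLServerConnection $SQLServer
        New-Object PSObject -Property @{
            Name    = $Server.Name
            Edition = $Server.Information.Edition
            Version = $Server.Information.Version
        }
    }
}
```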

Finally, we have the right output:

You may be wondering why I am putting the Get-MSSQLServerConnection call in the Process block rather than the Begin block. It is because I need a new connection for each server. In fact, I would only use the Begin block for initialization tasks such as creating variables or loading assemblies.

At this point I'm still not entirely happy with the code. I was trying to do several jobs in one function: for each server, I was making a connection, getting the information, and then outputting it. In PowerShell, it is easier to write functions that each do one discrete operation.

Don Jones has several articles about this, and I’ve provided links at the end of this article.

My function is now reusable, and can be used in any application that needs to connect to a server via SMO.

But something is still missing…..

Hey Function, Now Behave Like a Cmdlet

You can do more. If you add the [CmdletBinding()] attribute to your function, it will take on some of the characteristics of cmdlets, and we are then well on the road to making an advanced function. You can write functions that perform operations in the same way as a built-in cmdlet, but advanced functions are written in PowerShell rather than in a Microsoft .NET Framework language.

You can, for example:

  • use Write-Debug and Write-Verbose in your function, and control them using the -Debug and -Verbose parameters. These parameters are added automatically by [CmdletBinding()].
  • make the common parameters available to your function. For more information, type help about_CommonParameters.
  • implement -WhatIf. This allows the user of your function to find out what would happen if they executed it for real. In other words, it answers questions like "If I drop all these tables, what would happen?". By adding -WhatIf, you can get an idea of what would happen before you do it for real. (I will show how to do this in my next article.)

Finally, you can see more details of the power of [CmdletBinding()] in these help topics:

  • about_Functions_Advanced
  • about_Functions_Advanced_CmdletBindingAttribute
  • about_Functions_Advanced_Methods
  • about_Functions_Advanced_Parameters

Our (almost) final code is:
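A plausible assembly of the pieces so far (a sketch: [CmdletBinding()], a mandatory pipeline parameter, the assembly load in Begin, and the per-server work in Process):

```powershell
function Get-MSSQLServerInfo {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $true, ValueFromPipeline = $true, Position = 0)]
        [string] $SQLServer
    )
    begin {
        # Load SMO once, before any server name is processed
        [void][System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.Smo')
    }
    process {
        $Server = Get-MSSQLServerConnection $SQLServer
        New-Object PSObject -Property @{
            Name    = $Server.Name
            Edition = $Server.Information.Edition
            Version = $Server.Information.Version
        }
    }
}
```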

Handling Errors.

Basic error handling in PowerShell is simple: there is the try/catch/finally block. For details, see about_Try_Catch_Finally and this link from Steven Murawski: Error Handling.

One obvious source of problems in our function is the connection to SQL Server, perhaps because I pass the name incorrectly or the service is down. This is a serious candidate for placing in a try/catch block.
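A sketch of the connection function with error handling added; the Connect() call is an assumption here, included because SMO connects lazily and forcing the connection surfaces failures inside the try block:

```powershell
function Get-MSSQLServerConnection {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $true, Position = 0)]
        [string] $SQLServer
    )
    try {
        $Connection = New-Object Microsoft.SqlServer.Management.Smo.Server $SQLServer
        # Force the connection so a bad name or stopped service fails here
        $Connection.ConnectionContext.Connect()
        $Connection
    }
    catch {
        Write-Error "Could not connect to instance '$SQLServer': $_"
    }
}
```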

The Comment-based Help


The first thing I do when I want to know more about a cmdlet is to read its help. So, if my function is to act like a cmdlet, it must supply help in the same way, via comment-based help.

Comment-based help is written as a series of comments. You can type a comment symbol (#) before each line of the comments, or you can use the <# and #> symbols to create a comment block. All the lines within the comment block are interpreted as comments.


So, with a help comment-block …
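A sketch of what such a help block might look like on our function (the wording of the sections is illustrative):

```powershell
function Get-MSSQLServerInfo {
    <#
    .SYNOPSIS
        Gets information about a SQL Server instance.
    .DESCRIPTION
        Connects to the instance through SMO and outputs its information.
    .PARAMETER SQLServer
        The name of the SQL Server instance to query.
    .EXAMPLE
        Get-MSSQLServerInfo -SQLServer '7-PC'
    .EXAMPLE
        '7-PC', 'OtherServer' | Get-MSSQLServerInfo
    #>
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $true, ValueFromPipeline = $true, Position = 0)]
        [string] $SQLServer
    )
    process {
        $Server = Get-MSSQLServerConnection $SQLServer
        New-Object PSObject -Property @{
            Name    = $Server.Name
            Edition = $Server.Information.Edition
            Version = $Server.Information.Version
        }
    }
}
```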

…our function can provide help just like a cmdlet. If you type Get-Help Get-MSSQLServerInfo, you will see the magic!

For a complete list of the sections that you can use in your help, type:
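The missing command is presumably the relevant about topic:

```powershell
Get-Help about_Comment_Based_Help
```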

The output

I suspect you will have asked yourself, 'Why is Laerte using a custom object (PSObject) to output the server properties?' The answer is that I did so only for the first example. It is better practice to use the live object in this case, so that you pass all the information (properties, methods) from the $Server object into the pipeline, allowing you to then filter out whatever information you actually need for your purpose.

The process block would then be :
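A sketch of that process block, now simply emitting the live SMO Server object (shown here as a fragment of the full function):

```powershell
process {
    # Output the live SMO Server object; nothing is filtered out here
    Get-MSSQLServerConnection $SQLServer
}
```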

And then I could use it simply, in a neat pipeline:
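For example (the chosen properties are illustrative; any property of the live Server object is available downstream):

```powershell
'7-PC' | Get-MSSQLServerInfo |
    Select-Object -Property Name, Edition, Version |
    Format-Table -AutoSize
```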


However, custom objects are very useful when you need to combine more than one object into a single output. In this example, I need to join some information about the SQL Server and the configuration (the WMI managed computer) of the SQL instance, to provide a single object as output:
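A sketch of the idea; the function name, property names and the service filter are hypothetical, but the two source objects ($Server and $ManagedComputer) are the ones the article describes:

```powershell
function Get-MSSQLInstanceInfo {
    param (
        [Parameter(Mandatory = $true, ValueFromPipeline = $true, Position = 0)]
        [string] $SQLServer
    )
    process {
        $Server = Get-MSSQLServerConnection $SQLServer
        $ManagedComputer = New-Object Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer `
            $Server.ComputerNamePhysicalNetBIOS
        # Combine properties from both objects into one custom output object
        $Object = New-Object PSObject -Property @{
            Name           = $Server.Name
            Edition        = $Server.Information.Edition
            ServiceAccount = ($ManagedComputer.Services |
                Where-Object { $_.Name -eq 'MSSQLSERVER' }).ServiceAccount
        }
        $Object
    }
}
```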

You can see that, in order to combine the information from the two objects, $Server and $ManagedComputer, I am using a custom object ($Object).

Displaying Object Status Updates

When your function is performing operations, it helps to have a mechanism that lets you opt to display progress information on the screen. You can use it to debug the command processing as well. In times past, this was called 'printfing'. With advanced functions, we can use the Write-Verbose cmdlet instead of Write-Host.

This cmdlet writes text to the verbose message stream, and this can be switched in or out as we wish.

So we can write our code using this feature :
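A sketch of the function with verbose messages added (the message text is illustrative):

```powershell
function Get-MSSQLServerInfo {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $true, ValueFromPipeline = $true, Position = 0)]
        [string] $SQLServer
    )
    process {
        # These messages only appear when -Verbose is passed
        Write-Verbose "Connecting to $SQLServer"
        $Server = Get-MSSQLServerConnection $SQLServer
        Write-Verbose "Returning information from $SQLServer"
        $Server
    }
}
```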

Do you remember what I said above about the characteristics of an advanced function? One of them is that the -Verbose common parameter is enabled. By default, the message will not be displayed unless we pass -Verbose or change the value of the $VerbosePreference variable. Type help about_Preference_Variables for further information.


Working with Modules

Once you have your powerful set of functions, you'll want them to behave as if they were built in to your copy of PowerShell. To do this, you can use a script module. A script module is a PowerShell script file with a .psm1 extension. It is a useful way of sharing your functions and calling them in the same way as you would a native cmdlet. When you start a PowerShell session, your modules will be loaded, and you can then, for example, invoke Get-MSSQLServerInfo by typing it directly at the PowerShell command line.

One of the characteristics of a module is that you can choose which functions are exported, so that they, for example, show up in the output of Get-Module. You can also load your assemblies just once, so it is never necessary to do it within a function. And you can create aliases for your functions.

I can see that the Get-MSSQLServerConnection function is only useful for connecting to a SQL Server from inside another function, so I don't want it to show up in the list of cmdlets and functions that can be used.

First, we create an empty .psm1 file. Then we load the assemblies.

In our functions we use the SQL Server SMO assembly, so we just add this type:
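Presumably a single Add-Type line at the top of the module file, so the assembly loads once at import time (the module file name is from the article):

```powershell
# MSSQLServer.psm1: load SMO once, when the module is imported
Add-Type -AssemblyName 'Microsoft.SqlServer.Smo'
```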

We can now remove the Begin block from the Get-MSSQLServerInfo function, because we were only using it to load the assembly.

Now we add our functions:

How about creating an alias for Get-MSSQLServerInfo? We can use gsql instead:
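A one-line alias definition in the module file does it:

```powershell
New-Alias -Name gsql -Value Get-MSSQLServerInfo
```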

As I said before, I only want Get-MSSQLServerInfo to be exported, so we add:
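The export line, listing both the function and its alias (so Get-MSSQLServerConnection stays private to the module):

```powershell
Export-ModuleMember -Function Get-MSSQLServerInfo -Alias gsql
```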

As you can see I am exporting the alias as well.

Phew! Our module is done. The final code is:

Now we only have to put it in the module path and import it in our profile (see Jonathan's article) and, after that, type Get-Module:


Here it is: Name MSSQLServer. Note that only Get-MSSQLServerInfo is shown, because it is all we exported.

And we can type gsql, too, rather than the whole name. But let's see something interesting: if I type gsql - in a script editor, all the parameters are shown:


Wow! What is this? My function has only a server-name parameter. Why is it showing a bunch of parameters?

Ha! It is an advanced function, remember? [CmdletBinding()]? Now we can use all the common parameters as well.

That is it, guys. I've explained why I believe that reusable functions can make your life easier and help you avoid rework, introduced some of the features of advanced functions, and shown how to package your functions as a module. I've also described how the use of pipes can make scripts run faster and cut out some of the complexity.

In the next article, I will show you how some of the other features of advanced functions can add more power to your PowerShell scripting, studying some examples from the CODEPLEX SQLPSX library (SQL Server PowerShell Extensions). I hope that you have had as much fun reading this as I've had in writing it. Any comments or ideas are very welcome.


These are the resources for this article, not for PowerShell in general. I am not listing even 1% of the PowerShell Jedis that I like to read.