The PoSh DBA – Towards the Re-usable PowerShell Script

When starting out with PowerShell, it is hard to rise above the detail and work out the best strategy for creating scripts. Laerte explains how, when and why it pays to think in terms of versatile functions to meet varying demands.

How would you improve a PowerShell script in order to make it more versatile? How can you ensure that it is adaptable to the many similar tasks for which you are likely to need a script? In a nutshell, you turn into functions those common component tasks that you’ve written as scripts, and from these simple functions develop advanced functions. In doing so, you make the leap from PowerShell Zero to PowerShell Hero. In this article, I’ll walk you through a practical example and demonstrate the steps along the way.

Before we get too deep into the details, let’s start by providing a few general rules for developing in PowerShell.

The General Rules

These are the rules that I use in my day-to-day work in PowerShell. They’ve worked well for me over the years, but I’m not saying that they are carved in stone.

  • Learn the language

    You need to treat PowerShell as a serious .NET language. As with C#, F# or VB, it is impossible to know what the language can do, such as its features or commands, without understanding the language and its paradigms. You can cut and splice other people’s scripts, but you must have a good feel for the way that the language works before you can proceed any further.
  • Use The Help

    For PowerShell, the help system is the first thing you must reach for: it must be your best friend. The designers of the language intended it to be well-used.
  • Use the PowerShell Community

    The PowerShell community is unique, because it has people who have come from a wide range of IT backgrounds. They bring their experience and wisdom with them. They will know more than you. The combination of skills multiplies the speed at which the language develops. Read their posts, download their scripts and learn from them.
  • Keep It Simple
  • Do not use Aliases, except for deliberate obfuscation.
  • Write with consideration

    Do not try to cram all your scripted processing into one line. In a shared, corporate environment, other people will maintain your code and will not necessarily have the same PowerShell knowledge as you. Be kind in your code.

When not to use PowerShell

Once you’ve gained confidence in PowerShell, there is a temptation to use it for everything. It is definitely not the solution for everything in SQL Server, because we already have a great deal of power in Transact-SQL. One of the major skills for a database person who uses PowerShell is to know when not to use PowerShell.

PowerShell is the primary script language for Windows-based systems, providing a solid platform for automation, provisioning servers and so on. In SQL Server, it is no different. You can do practically everything with PowerShell, via SMO and DACFx, but is it worth the time and effort?

The answer to this question will hit you square in the face if you ever try to create a table using PowerShell and SMO. ‘Ha Laerte, but I can use T-SQL called by PowerShell for that.’ Yes, you can, but unless you need to automate, scale out, or repeat this operation, it is better to use SSMS and T-SQL.

Oh Boy! Script or Function?

All script files are actually functions. You can call them from another script by referring to their filepath, and also provide parameters to them.

Technically speaking, a function is similar to a script block, which is really an anonymous function or ‘closure’. What we refer to as a function is only a script to which you added the function keyword, its name, some parameters, some keywords and voilà! We have a function that can be called by name.

A working IT professional will, if writing a single-purpose solution, generally write a script. I did this in my latest article on automating SQL Server Best Practices Review in Word. This is because:

  • It’s a single solution
  • Whoever uses it should not be required to know anything about PowerShell, so if I had written a module with a lot of functions, that person would have had to perform an extra step to copy the module to its correct path.
  • It does not need to be fully-reusable

If, however, you will use your script for more than one purpose, with flexible outputs, and especially if you will use it in other scripts, then make it reusable. You’ll need to switch to using a function and add this function to your function module. If you are unsure how a set of scripts will develop, but it is likely that they will be extended and enhanced, you will need to use functions.

From Zero

Imagine this. Your devoted boss comes to you, with a cup of coffee for you, Brazilian of course, and with an ingratiating smile.

“Could you please get the Name and Recovery Model of all of our databases and instances and export them to a csv file for me, if it isn’t too much trouble? I need it for the Business Recovery plan.”

“Of course, boss. The sight of that coffee you’re offering me makes me putty in your hands.”

And then you script it:
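The listing itself is not reproduced here, but a minimal sketch of what such a script might look like, assuming SMO is installed and the instance names sit one per line in Instance Names.TXT, would be:

```powershell
# Load the SMO assembly (assumes the SQL Server management tools are installed)
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null

foreach ($ServerInstance in (Get-Content "Instance Names.TXT")) {
    # Connect to the instance and get its SMO Server object
    $SQLServer = New-Object Microsoft.SqlServer.Management.Smo.Server $ServerInstance

    # Export the name and recovery model of every database on the instance
    $SQLServer.Databases |
        Select-Object Name, RecoveryModel |
        Export-Csv "$($ServerInstance)_DatabaseConfigurations.csv" -NoTypeInformation
}
```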

This code will export the Name and Recovery Model for each database within all SQL Server instances whose names are contained in the file Instance Names.TXT and write the data to a file called InstanceName_DatabaseConfigurations.csv .

You will, at this stage, deliver the file and save the script for later use. If you use it just once, or even if you then adapt it for your best practices report, that is fine, both technically and conceptually; it would be a waste of time to develop it further. However, on reflection, you may suspect that you’ll need something very similar to determine more of the database configuration information. You could, of course, just re-work it for each occasion; it will keep working, technically, but you will regret it. Let us check why:

Success breeds requests

Your boss pops his head around the door. He has a glazed smile.

“Could I please trouble you to show me the same configuration information for the servers in the remote office in Sao Paulo? It turns out that these need to be in my report. I’ll need to know the owner as well, since this isn’t documented for that office.”

Ha. You will probably need to:

  • Make a copy of your original script
  • Re-code the copy of the script to support his conditions

Sometime later, he’s back:

“Excellent, excellent, and I just hope that it isn’t going to be too much effort to get the owners of all the databases, not just the ones in the remote office. The DR planners also need to know the size of the databases to get an estimate of recovery time. While you’re in the mood for this, the BI team need to know the collation for all the databases contributing to the warehouse. “

Holy cow! You will have at least four scripts to perform practically the same operation. It looks as if it is time to take a step back and provide a design that allows you to get any configuration item you want from any combination of servers and databases.

To 25%

Identifying the component tasks

Let’s study that script. It connects to each server, returns configuration information from SMO’s Server class, and stores it in a $SQLServer object.

This class contains a lot of information about the SQL Server Instance, all of which is relevant to the server level. Let’s just see what is in it by simply getting all the members of the class.
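Assuming $SQLServer already holds an SMO Server object (the instance name here is illustrative), getting all the members is a one-liner:

```powershell
# Connect to an instance and list the properties of the SMO Server class
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
$SQLServer = New-Object Microsoft.SqlServer.Management.Smo.Server "ObiWan"
$SQLServer | Get-Member -MemberType Property
```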

[2344-clip_image002.jpg: Get-Member output for the SMO Server class]

Hmm. There is a heck of a lot of useful information in there. Therefore, if our devoted boss by chance returns and wants some more information about it, I will know where to find it, and have access to it.

That script also reads the SMO Database class, returning all the database objects. Let’s see what is available from the database class.
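With the same $SQLServer object as before, the database class can be inspected the same way:

```powershell
# List the properties available on each SMO Database object
$SQLServer.Databases | Get-Member -MemberType Property
```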

[2344-clip_image004.jpg: Get-Member output for the SMO Database class]

Yes, there is plenty of useful information there. At this point, we can see the potential advantage of splitting that script into two functions, one to connect and return the SMO Server class and the other to return each SMO Database class. Let us do it.

Advancing to 50%

First we need to separate the code into two functions: one to connect and return the Server SMO class, and another to return the SMO Database collection.

Server SMO
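A first, bare-bones version might look like this (the function name anticipates the one the article settles on later):

```powershell
function Get-SQLServer ($ServerInstance) {
    # Load SMO and return the Server object for the given instance
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
    New-Object Microsoft.SqlServer.Management.Smo.Server $ServerInstance
}
```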

Database SMO
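And its database counterpart, as a sketch:

```powershell
function Get-SQLDatabase ($ServerInstance) {
    # Reuse the server function and return its database collection
    $SQLServer = Get-SQLServer $ServerInstance
    $SQLServer.Databases
}
```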

The Database SMO function is simply the Server SMO function plus the database collection.

What code is common to the two? The connection and the SMO Server: this points to the advantage of calling the Server SMO function inside the Database SMO function.

Why, at this point, am I not considering the task of exporting the data as CSV, displaying it on the screen or even choosing the database properties to display? This is because a function is best if it does just one thing very well. In this case, the SMO Server function just needs to deal with contacting the SMO Server, and the same applies to the Database SMO function. They will both need to do it robustly, with appropriate error handling. The idea is to use the features of the language such as the pipeline, the native output cmdlets (export-…, format-…), display cmdlets (write-…) and select property cmdlets (select-object). They are all there to help.

Before we start, we should decide …

  • The Function name:  It should make it obvious as to what the function does. Use Approved Verbs – Approved Verbs for Windows PowerShell Commands
  • The Parameter Keyword and Parameter Name

    Use a parameter name that says what it is for. For instance, if it is a SQL Server instance name, ServerInstance would be nice. Use common sense, please: don’t use names like ThisIsTheParameterToUseSQLServerInstance.
  • Datatype of the Parameter.

    Choose the type carefully and specify it explicitly. If it is an int, use int; if it is Boolean, use switch. Why switch? In PowerShell, we rarely use Boolean as a parameter type, because we have a special and more appropriate datatype for that: the switch. It works like an on/off light-switch: when a parameter with the switch datatype is typed, PowerShell treats it as ON; when it is not passed, PowerShell assumes OFF. Again, learn about the language.
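A quick, self-contained illustration of the switch behaviour (the function and parameter names here are made up for the example):

```powershell
function Test-Switch {
    param (
        [switch]$Detailed   # ON when typed, OFF when omitted
    )
    if ($Detailed) { "Detailed output" } else { "Summary output" }
}

Test-Switch            # returns "Summary output"
Test-Switch -Detailed  # returns "Detailed output"
```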

We have the functions Get-SQLServer and Get-SQLDatabase

Server SMO
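Applying those naming and typing decisions, a sketch of the server function might be:

```powershell
function Get-SQLServer {
    param (
        [Parameter(Mandatory=$true)]
        [string]$ServerInstance   # explicit name and datatype
    )
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
    New-Object Microsoft.SqlServer.Management.Smo.Server $ServerInstance
}
```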

Database SMO
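And the database function, reusing the server one, might be sketched as:

```powershell
function Get-SQLDatabase {
    param (
        [Parameter(Mandatory=$true)]
        [string]$ServerInstance
    )
    # Call the server function and return its database collection
    (Get-SQLServer -ServerInstance $ServerInstance).Databases
}
```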

Because Get-SQLServer is now a reusable function, I will reuse it.

And so we will test it.
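A simple test, against an illustrative instance name:

```powershell
# Both calls should return SMO objects for the instance
Get-SQLServer -ServerInstance "ObiWan"
Get-SQLDatabase -ServerInstance "ObiWan"
```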

Wait! Wait! Wait! Something is missing: Get-SQLDatabase needs a parameter to choose the database name, of course. If it is not passed, assume all databases.
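One way to sketch that optional parameter (the parameter name Name is an assumption):

```powershell
function Get-SQLDatabase {
    param (
        [Parameter(Mandatory=$true)]
        [string]$ServerInstance,
        [string]$Name   # optional: a single database name
    )
    $Databases = (Get-SQLServer -ServerInstance $ServerInstance).Databases
    # If a name was passed, return just that database; otherwise all of them
    if ($Name) { $Databases[$Name] } else { $Databases }
}
```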

Time to go back to our first script. Let’s rewrite it using our functions:
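With the two functions in place, the original script shrinks to something like this sketch:

```powershell
foreach ($ServerInstance in (Get-Content "Instance Names.TXT")) {
    Get-SQLDatabase -ServerInstance $ServerInstance |
        Select-Object Name, RecoveryModel |
        Export-Csv "$($ServerInstance)_DatabaseConfigurations.csv" -NoTypeInformation
}
```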

With these two functions, we can select whatever data we need, not just the Name and Recovery Model, using Select-Object. We can order the report as we like, and export the result in a variety of formats with the ‘ConvertTo-’ and ‘Export-’ cmdlets. Your boss can ask you to export to a text file, an XML document, whatever he wants; and, better, he can choose what properties he wants. You just need to change your final script, not the core of your solution. We can improve the core solution in just one place.

Getting to 75%

One of the most important native cmdlets is Get-Member. It shows you all the members – properties, methods, events and so on – of the object. So far, we’ve used it to check what members we can use in the output from our functions:

[2344-clip_image006.png: piping Get-SQLServer output to Get-Member]

The Pipeline

‘Hey Laerte, that pipeline is cool! I am piping the stream of objects produced by Get-SQLServer to the Get-Member cmdlet, so can I pipe the Instance Names to Get-SQLServer?’

Not yet. Let’s just go to 75% first.

What allows us to pipe information to cmdlets or functions is a pair of very cool advanced parameter arguments called ValueFromPipeline and ValueFromPipelineByPropertyName. In PowerShell 3.0 and later you can just specify the argument in the parameter keyword; in version 2.0 you need to assign $true to it. Here is an example of each technique.

PowerShell 2.0
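A sketch of the 2.0 syntax, where the argument must be assigned explicitly:

```powershell
function Get-SQLServer {
    param (
        # PowerShell 2.0: ValueFromPipeline must be set to $true explicitly
        [Parameter(Mandatory=$true, ValueFromPipeline=$true)]
        [string]$ServerInstance
    )
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
    New-Object Microsoft.SqlServer.Management.Smo.Server $ServerInstance
}
```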

PowerShell 3.0 and later
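And the 3.0+ syntax, where naming the argument alone implies $true:

```powershell
function Get-SQLServer {
    param (
        # PowerShell 3.0+: naming the argument is enough
        [Parameter(Mandatory, ValueFromPipeline)]
        [string]$ServerInstance
    )
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
    New-Object Microsoft.SqlServer.Management.Smo.Server $ServerInstance
}
```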

Advanced Functions

‘Is that all there is to it?’

Well, not yet. We know that PowerShell streams objects using the pipeline, but we may need to do some preliminary work at the start of the flow of objects, and maybe something else at the end to wrap up. If so, then we also need to add some block keywords to deal appropriately with the streaming information. For further information, check The PoSh DBA: Grown-Up PowerShell Functions.

Basically, as I said in that article:

  • Begin {<Block>}: The code inside this block will be executed first.
  • Process {<Block>}: The code inside this block will execute once FOR EACH value piped.
  • End {<Block>}: When everything is processed, this block executes the code inside it once.

Our function Get-SQLServer could have :

  • Begin Block to load the assemblies
  • Process Block to get the piped value
  • No End Block – We don’t need it

And we can pipe the SQL Server Instances to our function or take them from a list in a file:
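Putting the blocks together, a sketch of the pipeline-aware function and the two ways of feeding it (the instance names are illustrative):

```powershell
function Get-SQLServer {
    param (
        [Parameter(Mandatory=$true, ValueFromPipeline=$true)]
        [string]$ServerInstance
    )
    Begin {
        # Runs once, before the first piped value arrives
        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
    }
    Process {
        # Runs once for each piped instance name
        New-Object Microsoft.SqlServer.Management.Smo.Server $ServerInstance
    }
    # No End block: we have nothing to wrap up
}

# Pipe the instance names directly, or take them from a file:
"ObiWan", "Vader" | Get-SQLServer
Get-Content "Instance Names.TXT" | Get-SQLServer
```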

If you’ve been following the reasoning, you will realize that, if you use the intellisense from the PowerShell ISE, you will be able to see a list of all the possible parameters when you start to type, and you will see some other parameters that we haven’t mentioned.

[2344-clip_image008.png: PowerShell ISE intellisense showing the common parameters]

Those guys in the intellisense list (‘verbose’ downwards) are called common parameters. As the name says, they are common to every cmdlet – or function that now acts like a cmdlet.

“What ? What ? You mean a function that acts like a cmdlet ?”

Yes. They are called ‘advanced functions’ and you can create them in two different ways:

  • Specifying the cmdletbinding()
  • Just adding a parameter argument inside the param keyword, as we did; it will then assume cmdletbinding()

Again, there is a lot more to advanced functions. You can read more in The PoSh DBA: Grown-Up PowerShell Functions.

Ok. Our functions already accept pipeline input, but have you noticed that, in some cmdlets, there are parameters whose type is string[]? Not just string; they have the [] as well.

This means that this parameter accepts an array of strings.

We can do it in our code too. Let’s just change the parameter to string[] and, since it now accepts an array, use a foreach statement to support it in the code:
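A sketch of that change:

```powershell
function Get-SQLServer {
    param (
        [Parameter(Mandatory=$true, ValueFromPipeline=$true)]
        [string[]]$ServerInstance   # now an array of instance names
    )
    Begin {
        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
    }
    Process {
        # Walk the array: one SMO Server object per instance name
        foreach ($Instance in $ServerInstance) {
            New-Object Microsoft.SqlServer.Management.Smo.Server $Instance
        }
    }
}
```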

And we can now pass an array of Instances to the parameter serverinstance :
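For example (instance names illustrative):

```powershell
# Two instances in one call, via the array parameter
Get-SQLServer -ServerInstance "ObiWan", "Vader"
```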

Yes baby. Now we have pipeline input, parameters that accept arrays and common parameters. We haven’t got to 100% yet. To do that, we need a little more, but something very important: error handling.

Reaching 100%

Errors

We need to handle the errors inside the function. Let’s try and catch them :
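One way to sketch the error handling inside Get-SQLServer (the call that forces the connection is an assumption; SMO otherwise connects lazily, so a bad instance name would not fail until a property is read):

```powershell
function Get-SQLServer {
    param (
        [Parameter(Mandatory=$true, ValueFromPipeline=$true)]
        [string[]]$ServerInstance
    )
    Begin {
        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
    }
    Process {
        foreach ($Instance in $ServerInstance) {
            try {
                $SQLServer = New-Object Microsoft.SqlServer.Management.Smo.Server $Instance
                # Force the connection so that failures surface inside the try block
                $SQLServer.ConnectionContext.Connect()
                $SQLServer
            } catch {
                Write-Error "Could not connect to $($Instance): $_"
            }
        }
    }
}
```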

Why is this so important? Because we have the common parameter ErrorAction, and we can easily control the flow of the script with try/catch blocks and the ErrorAction parameter.

Perform the test:
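For instance, against an instance name that does not exist (illustrative):

```powershell
# ErrorAction Stop turns the Write-Error in the function into a terminating error
Get-SQLServer -ServerInstance "NoSuchServer" -ErrorAction Stop
```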

This code will stop the script rather than catch and recover; it will use the error handler inside the function. If you want the error to be logged but want to prevent the script from stopping, you may need to use the ErrorAction parameter:
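A sketch of the non-stopping variant (instance names illustrative):

```powershell
# Log the failure for the bad instance, but carry on with the good one
Get-SQLServer -ServerInstance "NoSuchServer", "ObiWan" -ErrorAction Continue
```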

Let’s apply the same approach to the Get-SQLDatabase function:
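Sketched along the same lines:

```powershell
function Get-SQLDatabase {
    param (
        [Parameter(Mandatory=$true, ValueFromPipeline=$true)]
        [string[]]$ServerInstance,
        [string]$Name   # optional: a single database name
    )
    Process {
        foreach ($Instance in $ServerInstance) {
            try {
                # Reuse Get-SQLServer; Stop makes its errors catchable here
                $Databases = (Get-SQLServer -ServerInstance $Instance -ErrorAction Stop).Databases
                if ($Name) { $Databases[$Name] } else { $Databases }
            } catch {
                Write-Error "Could not get the databases from $($Instance): $_"
            }
        }
    }
}
```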

Do you remember our first attempt at the code?

We can now rewrite it to:
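A sketch of the final form, now a single pipeline (collecting everything into one CSV; the extra properties are the ones the boss asked for):

```powershell
Get-Content "Instance Names.TXT" |
    Get-SQLDatabase |
    Select-Object Name, RecoveryModel, Owner, Size, Collation |
    Export-Csv "DatabaseConfigurations.csv" -NoTypeInformation
```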

Powercool! 

Conclusions

The next step could be to save the functions in a script module, so that you can reuse them in your PowerShell sessions. For the details of doing this, see, again, The PoSh DBA: Grown-Up PowerShell Functions – Working with Modules.

 We have walked through how you might create your first reusable solution using PowerShell. There is plenty more scope for refining what we’ve achieved but we have got to 100% in our aim of being able to create functions so as to allow us to reuse scripts for a variety of tasks. The more versatile we make our individual functions, the greater the scope for reuse and at the same time we can keep the maintenance overhead within bounds. This way, we don’t get swamped by a multitude of scripts all very slightly different.