PowerShell Functions for Reusability and Restartability in Azure


When working with PowerShell, there are two principles I always code by: reusability and restartability. This goes doubly so when interacting with Microsoft Azure. There are several components, such as resource groups and storage, that are needed for almost anything you do on the Azure platform. In this article, I’ll demonstrate two PowerShell code samples that illustrate these principles.

First, though, I’ll define these principles. Reusability is the ability to create a piece of code that you can use over and over. PowerShell provides a great mechanism for this through its native ability to create functions: you can build a core library of functions and call it from all of your scripts.

The second principle is the ability to easily restart your scripts. All kinds of things can cause a PowerShell script to crash: a bug in your code, a bad parameter, or something outside the code entirely, such as a server that goes down unexpectedly or a simple loss of internet connectivity. Often you may not be sure exactly where your script errored out. With these strikes against you, it’s important to be able to simply re-run your script and let it determine where it needs to pick back up.

Resource Group Creation Script for Reusability

Take a look at an example using the creation of resource groups. (NOTE: If you are new to PowerShell scripting in Azure, follow these instructions to install Azure PowerShell on your computer and then type Login-AzureRmAccount to connect.) In Azure, everything you create is tied to a resource group. Storage, databases, virtual networks, and virtual machines all belong to a resource group, which provides a convenient way of grouping together all the resources for your solution. This makes resource group creation a great candidate for reuse.

Here’s an example of a function to create a resource group.
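
The listing below is a minimal sketch of such a function, reconstructed from the walkthrough that follows; the function name, New-PsArmResourceGroup, is illustrative rather than the original listing's name.

function New-PsArmResourceGroup
{
    [CmdletBinding()]
    param
    (
        [Parameter(Mandatory = $true,
                   HelpMessage = 'Enter the name of the resource group to create')]
        [string]$ResourceGroupName,

        [Parameter(Mandatory = $true,
                   HelpMessage = 'Enter the geographic location, such as southcentralus')]
        [string]$Location
    )

    Write-Verbose "Checking for resource group $ResourceGroupName"

    # Try to get a reference to the resource group; if it does not exist,
    # suppress the error and let $rgExists hold a null
    $rgExists = Get-AzureRmResourceGroup -Name $ResourceGroupName `
                                         -ErrorAction SilentlyContinue

    # Only create the resource group if it was not found
    if ($rgExists -eq $null)
    {
        Write-Verbose "Creating resource group $ResourceGroupName"
        New-AzureRmResourceGroup -Name $ResourceGroupName -Location $Location
    }
}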

The script starts with a parameter block consisting of two parameters: the name of the resource group you wish to create and the geographic location where the resource group will be stored. The Mandatory attributes tell PowerShell these are required parameters, and the help messages prompt the user if they call the function without passing them in.

Next, a simple Write-Verbose command provides helpful messages to the user executing the function, should they call it using the Verbose switch. You will also note lines in the code that begin with the # character (called a pound sign or hash). This is PowerShell’s comment character; it simply means ‘ignore everything after this character, it’s information for the developer and not an actual command’. Comments are a great way to help document your code, and you should use them frequently.

Following that is a cmdlet which will try to get a reference to the resource group you wish to create, Get-AzureRmResourceGroup. After the name parameter, you’ll see a ` mark. No, that’s not a smudge on your monitor, the ` (backtick) is the line continuation character in PowerShell. I encourage you to use them frequently as they can help improve code readability.

The next line continues the cmdlet’s parameter list with the ErrorAction parameter. Normally, if you attempt to get a reference to a resource group which does not exist, PowerShell stops running your script and produces a big scary error message. What you want is for PowerShell to suppress the message and keep running, hence the script passes in SilentlyContinue as the value.

If the resource group does not exist, the Get-AzureRmResourceGroup cmdlet will return a NULL and place it in the $rgExists variable. Should the resource group already exist, the $rgExists variable will hold a reference to the existing resource group.

The script then checks the value of the $rgExists variable in the if statement. If it is null, it displays a helpful verbose message and calls the New-AzureRmResourceGroup cmdlet which will create your new resource group.

It is this logic that provides for the restartability principle. By simply checking to see if the resource group exists first, the code only calls the creation command if it is needed, thus avoiding any errors.

In addition to providing the ability to restart the script, regardless of whether it crashed before or after the creation of the resource group, this approach provides another benefit: the ability to run multiple scripts that include the function, in any order.

Say you have two scripts: the first creates an Azure SQL database; the second creates some Azure virtual machines. Both will exist in the same resource group. With this restartable function, you can simply include this line in both scripts:
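
Using the illustrative function name from the sketch above (the resource group name and location here are placeholders):

New-PsArmResourceGroup -ResourceGroupName 'MyResourceGroup' -Location 'southcentralus'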

With this logic, it doesn’t matter which of the two scripts runs first. The first to execute will create the resource group, the second will see the resource group already exists and simply skip over the attempt to create it.

Upload to Azure Storage Containers with Restartability

Take a look at a second example that further illustrates these concepts. One thing I do frequently when working with Azure is upload files to Azure storage containers, so I created a reusable function to handle it for me.
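
The listing below is a minimal sketch of that function, pieced together from the walkthrough that follows. The function name Send-FileToContainer, the default TimeOut value, and the mapping of TimeOut onto the cmdlet's timeout parameters are assumptions rather than the original code.

function Send-FileToContainer
{
    [CmdletBinding()]
    param
    (
        [Parameter(Mandatory = $true,
                   HelpMessage = 'Enter the name of the file to upload, including its path')]
        [string]$FilePathName,

        [Parameter(Mandatory = $true,
                   HelpMessage = 'Enter the name of the resource group')]
        [string]$ResourceGroupName,

        [Parameter(Mandatory = $true,
                   HelpMessage = 'Enter the name of the storage account')]
        [string]$StorageAccountName,

        [Parameter(Mandatory = $true,
                   HelpMessage = 'Enter the name of the storage container')]
        [string]$ContainerName,

        [int]$TimeOut = 500000,

        [switch]$Force
    )

    # Get the storage account key; the cmdlet returns an array of keys
    # and only the first one is needed here
    $storageAccountKey = $(Get-AzureRmStorageAccountKey `
                             -ResourceGroupName $ResourceGroupName `
                             -Name $StorageAccountName).Value[0]

    # Generate a context token in memory from the account name and key
    $context = New-AzureStorageContext -StorageAccountName $StorageAccountName `
                                       -StorageAccountKey $storageAccountKey

    # Get a reference to the local file to upload
    $localFile = Get-ChildItem $FilePathName

    # Assume the file needs to be uploaded until proven otherwise
    $upload = $true

    # Look for the file in the Azure storage container
    $azureFile = Get-AzureStorageBlob -Container $ContainerName -Context $context |
                     Where-Object { $_.Name -eq $localFile.Name }

    # If the file is already on Azure and is the same size as the local
    # copy, assume it is the same file and skip the upload
    if ($azureFile -ne $null)
    {
        if ($azureFile.Length -eq $localFile.Length)
        {
            $upload = $false
        }
    }

    # The Force switch overrides the checks and always uploads
    if ($Force)
    {
        $upload = $true
    }

    if ($upload)
    {
        try
        {
            # -Force on the cmdlet overwrites an existing blob without prompting
            Set-AzureStorageBlobContent -File $FilePathName `
                                        -Container $ContainerName `
                                        -Blob $localFile.Name `
                                        -Context $context `
                                        -ServerTimeoutPerRequest $TimeOut `
                                        -ClientTimeoutPerRequest $TimeOut `
                                        -Force
        }
        catch
        {
            # Re-throw the error, then halt script execution
            throw $_
            break
        }
    }
}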

As before, the function starts with the parameter block. The first four parameters are mandatory and include help messages. The first, FilePathName, is simply the file name to be uploaded including the path to it. The next three are relatively obvious: the resource group, storage account, and container within the storage account where the file will go. (NOTE: If you are new to Azure storage containers, read this article to get started.) The next parameter, TimeOut, is assigned a value. That makes it an optional parameter. If the user provides a value, that value will be used, otherwise it will use the value assigned as the default.

The final parameter is labeled as a switch. A switch is a simple way to pass a Boolean value to the function. If the user includes the switch, in this example Force, in the parameter list, the $Force variable will have a value of true, otherwise it will be false. I’ll cover the use of these parameters while reviewing the rest of the function.

The first task is to get the storage account key. This is a cryptographic key that validates access to the storage account. The cmdlet that returns it is Get-AzureRmStorageAccountKey. You may think the syntax for this is a bit odd: Get-AzureRmStorageAccountKey returns an array of keys, and for this purpose, you only need the first one. Wrapping the cmdlet in the dollar sign and parentheses allows you to access the array, and .Value[0] gets the first value off it. The result is then saved into the $storageAccountKey variable.

With the key in hand, the code then gets a context token using the New-AzureStorageContext cmdlet. The context is a combination of the storage account key and the storage account name, and is used to validate the current user to the other storage cmdlets. Be aware this doesn’t actually create anything on Azure; it simply generates a token in memory.

Now the script needs to get a reference to the local file you want to upload. The code uses Get-ChildItem to do so. This will allow you to get more information about the file, as you’ll see momentarily.

The next thing to do is set a variable, $upload, to true. This is the first step toward the goal of restartability. Setting it to true up front assumes that, yes, the file does indeed need to be uploaded. Then the code steps through a series of checks to see if uploading is actually necessary.

In order to do those checks, a reference to the file, if it already exists on Azure, is needed. Here, the code uses Get-AzureStorageBlob to get a list of all files in the Azure storage container. Note that the context variable created a moment ago is passed in as a way of identifying the current user to Azure. The results are then piped through Where-Object, filtering for only the file name you want to upload. To get the file name, it uses the $localFile variable and accesses its Name property, which returns just the name and excludes the full path to the file.

Take a moment to contrast this with the way the previous function handled getting a list of resource groups. That function used the ErrorAction SilentlyContinue method. It has the benefit of letting Azure filter the list of resource groups, returning only the one you want, meaning Azure does all the heavy work. The downside is that any errors that occur are ignored, under the default assumption that, if there is an error, it must be because the resource group didn’t exist.

Using the Where-Object method, a list of all files in our container is returned and then filtered on the client (local) side. This means there is a bit more work to do locally, but it has the benefit of not ignoring any errors. As with anything, there are multiple ways to achieve your goals when programming. If you feel the risk of errors is slight, the ErrorAction method is slightly more efficient; on the other hand, it can overlook an error such as a connectivity issue. If you feel the risk of errors is high, then the Where-Object method may be the better choice for you.

Whichever method you select, if the file is found on Azure, then the variable $azureFile will contain a reference to it. The next step is to check if the variable is not null. If it isn’t, then compare the length of the file on Azure to the length of the file locally. If they are the same length, this function assumes they are the same file and there is no need to upload, setting the $upload variable to false. For this function, the check was kept simple; there are many other checks you could make, such as file dates, to further validate these are the same file.

At this point, the code has checked to see if the file is already on Azure, and, if so, what size it is. If it exists and is the same size as the local file, the function should skip the upload. This helps support the restartability goal by not uploading a file that is already present. What if, though, you want to force the file to reupload regardless of the result of the checks? That’s where the Force switch discussed in the parameters section comes into play.

The next line of code checks to see if the $Force variable is true. If so, the value of $upload is overwritten, setting it to true to force the upload. This could have also been written as if ($Force -eq $true). When working with switches, though, it can be a little more readable to abbreviate it as was done in the function.
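
For comparison, here is a minimal sketch of the two equivalent forms:

# The abbreviated form used in the function
if ($Force) { $upload = $true }

# The equivalent, more explicit form
if ($Force -eq $true) { $upload = $true }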

Finally, it’s time to upload, that is, if it’s needed. The value of the $upload variable is checked, and, if it is true, the code kicks off the upload using Set-AzureStorageBlobContent. In this section, the cmdlet has been wrapped in a try/catch block. Timeouts are the biggest issue with uploading files to Azure, especially large files over slow internet connections. Thus, for this cmdlet, you should always take the extra precaution of wrapping it in a try/catch block.

In this function, if there was an error, it’s thrown, and then break is used to halt script execution. There are other options you could choose to implement, such as automatically restarting the upload. Additionally, there may be other parts of the code you may wish to wrap in try/catch such as checking to see if the container or storage account exists. These were omitted from this example for clarity. I did want to make sure to include at least one as an example.

Reusing Functions in Multiple Scripts

While PowerShell enables reuse of code via functions, how do you take it to the next level and reuse those same functions across multiple scripts? Well, it turns out to be quite simple. First, take the functions and save them in their own PowerShell script. For this example, put them in a file named MyAzureFunctions.ps1 and save it in a folder called C:\PSScripts.

Say you have a second script; just call it DoSomeAzureStuff.ps1 and save it in the same folder. Inside this script, simply insert a command to run MyAzureFunctions.ps1.
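
Using the folder and file names above, that command looks like this:

. C:\PSScripts\MyAzureFunctions.ps1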

The single period at the front indicates this is a command to execute the code inside another ps1 file. You must provide the path to the file, followed by the name of the script to execute, in this case MyAzureFunctions.ps1. Once you do so, the functions will be in memory and ready to use.

It is generally considered a best practice to place the execution of other ps1 files toward the top of your script. Not necessarily on the first line, but just toward the top so the functions will be available anywhere inside your code. However, this isn’t a requirement; the only requirement is that it must be executed prior to calling any of the functions inside it.

Thus, you might have something like:
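
The layout below is a sketch of that structure; the function calls reuse the illustrative names and placeholder values from the earlier sketches.

# DoSomeAzureStuff.ps1
# Creates the resources for my solution using the shared function library

# Load the shared functions into memory
. C:\PSScripts\MyAzureFunctions.ps1

# (other code can go here)

# Call the shared functions
New-PsArmResourceGroup -ResourceGroupName 'MyResourceGroup' -Location 'southcentralus'

Send-FileToContainer -FilePathName 'C:\Temp\MyData.csv' `
                     -ResourceGroupName 'MyResourceGroup' `
                     -StorageAccountName 'mystorageacct' `
                     -ContainerName 'mycontainer'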

The code has some comments, followed by the call to execute the script with your shared functions. Then (optionally) some other code, followed by calls to your functions.

There is a shortcut you can make in regard to the path. If the script with the shared functions, MyAzureFunctions.ps1, is in the same folder as the script you are calling it from, DoSomeAzureStuff.ps1, and you are executing your script from that folder, you can shorten the path to use .\, as in this example:
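
. .\MyAzureFunctions.ps1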

As before, the first period indicates you wish to execute a script. Next, the combination of a period with a backslash is a shortcut for ‘the current directory’. This can be a great way to share your scripts with others, without having to hard code the path. Your fellow PowerShell coders can place both files in any folder on their computer and the scripts will work without modification.

PowerShell has further facilities for code reuse in the form of modules. Modules, though, are a bit of an advanced concept, and beyond the scope of this article.

Conclusion

As these two functions illustrate, it is not difficult to write your scripts keeping both reusability and restartability in mind. Creating functions to include these features may take more time up front, but they will save you much more time as you use them again and again over months or years. While Azure is used for these examples, the same techniques could be used when writing PowerShell code with other targets such as SQL Server or virtual machines.


About the author

Robert Cain


Robert C. Cain (http://arcanecode.me/) is a Microsoft MVP, MCTS Certified in BI, and is the owner of Arcane Training and Consulting LLC. He is also a course author for Pluralsight, Simple Talk author, and co-author of 5 books. A popular speaker, Robert has presented at events such as the SQL PASS Summit, IT/Dev Connections, TechEd, CodeStock, and numerous SQL Saturdays. Robert has over 25 years’ experience in the IT industry, working in a variety of fields including manufacturing, insurance, telecommunications and nuclear power.