Practical PowerShell Unit-Testing: Checking program flow

Pester offers a relatively small number of commands to unit-test PowerShell scripts, but these commands have tremendous capabilities. Pester even gives you the means to validate data and test program flow. It uses 'mocks' to provide hooks to validate program flow, so you can be more confident that a function is doing things the way you intended.


Part one of this series explained how to install and use Pester, and the basic structure of unit tests in PowerShell. Part two covered mocking and parameterized test cases. This final installment focuses on validation, both of data and of program flow.

Validating in a Test

I have touched on actual validation steps only implicitly in some of the examples thus far. This section describes the important Should command (see the Pester reference page at https://github.com/pester/Pester/wiki/Should).  The canonical verification assertion is:
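Something like this, where $actualValue and $expectedValue are illustrative placeholders:

```powershell
$actualValue | Should Be $expectedValue
```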

Should offers a rich selection of operators, as detailed in the following two tables.

| Element | Case Insensitive | Case Sensitive |
| --- | --- | --- |
| Test that objects are the same | Be | BeExactly |
| Test that objects match by regex | Match | MatchExactly |
| Test that a file contains an item | Contain | ContainExactly |

| Element | Operator |
| --- | --- |
| Test for null or empty string | BeNullOrEmpty |
| Test for object existence (akin to Test-Path) | Exist |
| Test whether an exception was thrown | Throw |
| Negate any of the above operators | Not |

Note that earlier Pester versions (and earlier articles written about it) showed this syntax, which is no longer valid:
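The obsolete form chained method-style calls onto the value itself; reconstructed approximately, it looked like this:

```powershell
$actualValue.should.be($expectedValue)
```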

Tip: Use the Should command to validate scalar data.

Call History

Besides validating that you get back an expected value, the next most common thing to validate in a unit test is that a function g is called as a result of your calling a function f. Just as with a standard .NET mocking framework such as Moq (or others of its ilk), you can verify whether a function or cmdlet was called in PowerShell by using Pester to mock it, running your action, then checking whether the mock was called. Here is the syntax for asserting that a mock was called:
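Its general shape, showing only the parameters discussed in this article, is:

```powershell
Assert-MockCalled <CommandName> [-Times <n>] [-Exactly] [-ParameterFilter { <predicate> }] [-Scope <It|Context|Describe>]
```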

The simplest case is checking whether a mock was called at all, regardless of the arguments passed to the function and regardless of how many times it was called. For that, use this simple form:
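For example, with a hypothetical mocked Select-String:

```powershell
Assert-MockCalled Select-String
```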

Contrariwise, if you wish to assert that a mock was never called, use the -Times parameter, specifying that it should be called zero times, hence not at all:
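```powershell
Assert-MockCalled Select-String -Times 0
```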

It is important to note that the count you supply to -Times indicates that the mock must be called at least that many times.  Thus, to assert that a mock was called but only once, add the -Exactly parameter:
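```powershell
Assert-MockCalled Select-String -Exactly -Times 1
```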

Besides examining how often a mock was called, you can also check how a mock was called, i.e. with what parameters. Say, for example, you wanted to confirm that Select-String was invoked with the SimpleMatch switch parameter:
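```powershell
Assert-MockCalled Select-String -ParameterFilter { $SimpleMatch -eq $true }
```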

When validating that functions or cmdlets were called, it is important to consider scope. Say, for example, that you have multiple test (It) blocks inside a Context block, as shown. You also declare a mock inside the Context scope but outside of either It scope:
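A minimal sketch of that arrangement; Search-Logs is a hypothetical function under test that calls the mocked Select-String once per invocation:

```powershell
# Hypothetical function under test; it calls Select-String exactly once.
function Search-Logs ([switch] $Simple) {
    Select-String -Path 'app.log' -Pattern 'error' -SimpleMatch:$Simple
}

Describe 'Search-Logs' {
    Context 'with the mock declared at Context scope' {
        Mock Select-String { }

        It 'searches once for simple matches' {
            Search-Logs -Simple
            Assert-MockCalled Select-String -Exactly -Times 1   # passes: one call so far
        }

        It 'searches once for regex matches' {
            Search-Logs
            Assert-MockCalled Select-String -Exactly -Times 1   # fails: the shared history now has two calls!
        }
    }
}
```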

You could (correctly) surmise that checking call history with Assert-MockCalled would fail in one or both tests because they would both be adding to the same mock’s history.

But now consider moving the mock inside the test scope (the It block), as shown below.  Notice that each It block specifies the same mock.
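Here is the same sketch with the mock moved inside each It block, reusing the hypothetical Search-Logs from above:

```powershell
Describe 'Search-Logs' {
    Context 'with the mock declared inside each It block' {
        It 'searches once for simple matches' {
            Mock Select-String { }
            Search-Logs -Simple
            Assert-MockCalled Select-String -Exactly -Times 1
        }

        It 'searches once for regex matches' {
            Mock Select-String { }
            Search-Logs
            Assert-MockCalled Select-String -Exactly -Times 1   # still fails: call history remains Context-scoped!
        }
    }
}
```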

You would think the tests would now be completely independent, and correctly validate the mock calls, but not so! The mocks are still scoped to the Context block! See my forum post on this important point. It turns out that, as designed, Assert-MockCalled is scoped to its containing Context block (or, if no Context block is present, its containing Describe block), which is why the above tests will not work correctly. I could see that behavior being useful for some tests, but for others (like the example above) it would really be useful to be able to scope a mock to an It block. As Dave Wyatt responded in my post referenced above, the new 3.0 release provides just this flexibility with the -Scope parameter. Thus, we need only change each Assert-MockCalled above to specify scoping to the It block…
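```powershell
Assert-MockCalled Select-String -Exactly -Times 1 -Scope It
```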

… for the above two tests in the same Context block to correctly validate calls on the mock. I recommend explicitly using the -Scope parameter (which may be set to It, Context, or Describe) on all Assert-MockCalled instances, for clarity.

Pester 3.0 note: version 3.0 is brand new as I write this, so I have only experimented with it for a short time. According to the documentation, a defined mock is still supposed to default to the closest Context or Describe block, but my cursory testing seems to show that a mock actually defaults to the block in which it is defined, be it It, Context, or Describe. I found this out because tests of mine that had failed with 2.0 started working with 3.0 before I even added -Scope It.

There is one other command available for verifying call history: Assert-VerifiableMocks. With this command, you flag each mock you wish to verify as you create it. Then you invoke Assert-VerifiableMocks at the end of your test to validate, in one go, all the mocks you have flagged. See the official Pester documentation for more.
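A minimal sketch of that workflow, reusing the hypothetical Search-Logs and mocked Select-String from earlier:

```powershell
Describe 'Search-Logs' {
    It 'uses Select-String' {
        Mock Select-String { } -Verifiable   # flag this mock for verification
        Search-Logs -Simple
        Assert-VerifiableMocks               # fails if any -Verifiable mock was never called
    }
}
```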

Tip: Use Assert-MockCalled or Assert-VerifiableMocks to validate program flow.

Validating Array Data

Earlier you saw how to validate results with the Should command. Unfortunately, Should does not always work for arrays. Consider the following tests:
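The original test code is not preserved here, so this is a hedged reconstruction; the values are illustrative, but each scenario exhibits the behavior described below:

```powershell
# Assumes ArrayHelper.ps1 (discussed below) has been dot-sourced.
Describe 'validating arrays' {
    Context 'Scenario 1' {
        $expectedResult = 'alpha', 'beta', 'gamma'
        $actualResult   = 'alpha', 'delta'     # only one expected value, plus one surprise

        It 'validates with native Should Be' {
            $actualResult | Should Be $expectedResult
        }

        It 'validates with ArrayDifferences' {
            ArrayDifferences $actualResult $expectedResult | Should BeNullOrEmpty
        }
    }

    Context 'Scenario 2' {
        $expectedResult = 'alpha', 'beta', 'gamma'
        $actualResult   = @()                  # nothing came back at all

        It 'validates with native Should Be' {
            # An empty array pipes nothing into Should, so the assertion never runs!
            $actualResult | Should Be $expectedResult
        }

        It 'validates with ArrayDifferences' {
            ArrayDifferences $actualResult $expectedResult | Should BeNullOrEmpty
        }
    }
}
```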

Scenario 1 is expecting three values but the actual comes back with only one of those, plus one that was not expected. Within the Context block for scenario 1, I have one test using the native Should Be and a second using a helper function I created for validating arrays. Here is what happens with the two tests in scenario 1.

The native Pester approach does report some part of the problem but not the whole problem. Using the ArrayDifferences function, on the other hand, clearly spells out what was unexpected in the actual result, both missing and surplus items. Scenario 2 highlights an even more serious flaw.

There, the native Pester test reports a passing result when it should be a failing result, as the ArrayDifferences version correctly identifies.

ArrayHelper actually supplies two different functions for validating arrays, shown in the two statements below. Both are, in fact, testing the same thing: that the two arrays ($actualResult and $expectedResult) are equal:
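```powershell
ArrayDifferences $actualResult $expectedResult | Should BeNullOrEmpty
AreArraysEqual $actualResult $expectedResult | Should Be $true
```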

Both array helper functions compare arrays in an order-independent fashion. So if the actual and expected differ only by order, as shown here, then both of these test statements will pass:
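For instance, with illustrative values:

```powershell
$expectedResult = 'alpha', 'beta', 'gamma'
$actualResult   = 'gamma', 'alpha', 'beta'   # same members, different order

ArrayDifferences $actualResult $expectedResult | Should BeNullOrEmpty   # passes
AreArraysEqual $actualResult $expectedResult | Should Be $true          # passes
```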

Where these two functions differ is when a failure occurs, i.e. when the two arrays do not contain the same set of members. For this next example, assume we ended up with actual and expected arrays as follows:
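Illustrative values again (the original article's values are not preserved):

```powershell
$expectedResult = 'alpha', 'beta', 'gamma'
$actualResult   = 'alpha', 'delta'
```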

Obviously these arrays differ, so attempting to validate that they are equal will report failure with both helper functions. Here is what you would get:
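Illustrative failure output only: the Should wrapper text approximates Pester's wording, and the difference report format is hypothetical, though it names both missing and surplus items as described below:

```
Expected value to be empty but it was {missing from actual: beta, gamma; surplus in actual: delta}

Expected: {True}
But was:  {False}
```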

Clearly, ArrayDifferences gives better-quality output upon failure. When the Should BeNullOrEmpty predicate receives anything other than null or an empty string, it reports 'Expected value to be empty but it was <something>'. ArrayDifferences supplies that <something>: it returns an empty string when the two arrays contain the same members; otherwise, it enumerates the differences, reporting which elements appeared in the actual list but not in the expected list, and vice versa. Thus, in the case of failure, the test output directly reveals all the differences, obviating the need to fire up the debugger to uncover those details.

AreArraysEqual, on the other hand, is quite bad from that perspective. It merely returns one bit of information: whether the two arrays contain the same members. But AreArraysEqual has another use; you will see it used liberally in one of the sample files like this:
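```powershell
Assert-MockCalled ReportMissing -ParameterFilter { AreArraysEqual $items $missingList }
```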

You can read this statement as: confirm that ReportMissing was called at least once where its $items parameter contained the values in our expected list, $missingList.

As you may have noticed, Pester uses script-block parameters frequently, making it quite flexible in what you can do. In the Call History section earlier, you saw a more conventional value supplied to ParameterFilter, specifying that a parameter had a given value ({ $SimpleMatch -eq $true }). With arrays, though, you cannot write a simple Boolean predicate to test an array for equality to some other array. But looking at ParameterFilter from a broader perspective, it is just a script block that evaluates to a Boolean value, so you can use any arbitrary logic you like, as long as it returns a Boolean. In the current example, AreArraysEqual serves that purpose.

Finally, let us return to the example from part 2, wherein we tested the Get-TextFileNames function, and correct a hidden problem lurking in the shadows. Here is the latest version we used:
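The original listing is not reproduced here, so this is a hedged sketch: the mock and file names are illustrative, but the essentials match the discussion that follows, namely the Should Be validation line and the $null expectation in the fourth test:

```powershell
Describe 'Get-TextFileNames' {
    It 'returns text file names when text files are present' {
        Mock Get-ChildItem { [pscustomobject]@{ Name = 'a.txt' }, [pscustomobject]@{ Name = 'b.doc' } }
        $expectedResult = 'a.txt'
        (Get-TextFileNames) | Should Be $expectedResult    # the highlighted validation line
    }

    # ...two more tests along the same lines...

    It 'returns nothing when no text files are present' {
        Mock Get-ChildItem { ,[pscustomobject]@{ Name = 'b.doc' } }
        $expectedResult = $null                            # the highlighted $null in the fourth test
        (Get-TextFileNames) | Should Be $expectedResult
    }
}
```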

With fresh insight you can now quickly surmise that Should Be on the highlighted line is trying to validate arrays. All four tests were passing, but now you know you cannot necessarily trust that result! To make this more robust we need two changes. First, change the validation line to this:
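```powershell
ArrayDifferences @(Get-TextFileNames) $expectedResult | Should BeNullOrEmpty
```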

ArrayDifferences passes its two arguments to the built-in Compare-Object cmdlet, whose arguments must be arrays; a null value does not qualify! Thus, instead of (Get-TextFileNames) we use @(Get-TextFileNames) to force an empty result to be an empty array rather than null.

The second required change addresses the same point on the other argument, $expectedResult. Notice the highlighted $null in the fourth test: change that to an empty array, @( ), completing the changes needed to make this a robust set of unit tests.

Tip: You cannot use Should to validate array data.

Project Files

This series has described a variety of tweaks and techniques to streamline your work with Pester. Attached to this article is a file archive that contains the helper functions discussed herein, along with some sample source and test files so you can see how I have used Pester for some real-world code. Within the archive are two main folders:

  1. Code that needs to be installed within the Pester installation itself (ContextHelper.ps1).
  2. A simple but real-world PowerShell module (PesterArticleSample) comprising the module source code (GetSafeProperty.ps1 and ProjectFileConsistency.ps1), the module infrastructure (PesterArticleSample.psm1 and PesterArticleSample.psd1), helper functions (the TestHelpers folder), and unit tests for all the code.

You might want to review the preamble to each of the test files (*.Tests.ps1) to see a couple of different use cases for loading the corresponding source. Here is the content of the file archive, followed by details on the 'helper' files.

[Figure: contents of the file archive]

ContextHelper.ps1

This file extends Pester itself by providing the ContextUsing command as described in Parameterized Test Cases in part two of this series. (Note that native support for parameterized tests in Pester is imminent, which will provide a slightly different syntax for accomplishing the same thing. Keep an eye on the Pester releases for this in the very near future.)

  1. Copy this file into the Functions subdirectory of your Pester installation.
  2. Edit the Pester manifest (Pester.psd1 in the root of your Pester installation) to add ContextUsing to the list of function names in FunctionsToExport.

Then go ahead and load the Pester module. (If you have already loaded it, you will need to reload it with the -Force parameter, i.e. Import-Module -Force Pester, to have it recognize the update.)

ArrayHelper.ps1

This file provides the ArrayDifferences and AreArraysEqual functions discussed in Validating Array Data above. It is included in the tests folder for the module. For your own projects, copy the file to a location of your choice. I typically put its containing folder (TestHelpers) in my tests directory to keep it separate from the tests yet near at hand so it is easy to reference, as described in Making your Test File Aware of Helper Files in part one.

Running the Module Tests

Go to the directory containing the sample module (or some ancestor) and run Pester with these two commands:
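Presumably along these lines (Invoke-Pester finds and runs every *.Tests.ps1 file beneath the current directory):

```powershell
Import-Module Pester
Invoke-Pester
```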

If you compare the output of the full test run to the test files themselves, you can easily discern the Describe commands and Context commands. The individual It commands are not labeled explicitly in the output, but they are all the lines in green. Notice there is no visual demarcation of file boundaries; you only have a list of the functions at the top level (presuming you use the Pester convention of specifying function names as the description in the Describe commands).

Conclusion

Though Pester is a rather small package and offers a relatively small number of commands, it has tremendous capabilities for unit testing PowerShell code. Now that you have the tips and techniques provided in this series of articles added to your arsenal, go out there and conquer your code!