
The Poster of the Plethora of PowerShell Pitfalls

One of the downsides of learning a new computer language is that transfer of training doesn't always work to your advantage. In fact, the habits you picked up in the past may now cause confusion. In this poster or wall-chart for long walls, Michael Sorens selects the thirty-six most common causes of confusion for anyone getting to grips with PowerShell. Forewarned is forearmed.

Any programming language has elements of syntax or semantics that can lead to confusion, misunderstanding, or misdirection; and that leads to extra hours of head-scratching when trying to debug the inevitable issues that will crop up as a result. PowerShell is not immune to such issues, of course. Given that it is a relatively young language, though, its particular set of problem areas is perhaps less well known. Web searching can help you track down an issue, but sometimes it is not at all obvious what search term to use.

This wallchart brings together most of the common pitfalls you are likely to encounter when tackling PowerShell. Most items provide a reference to get more information. You will find several major references oft quoted, along with a handful of individual blog posts mentioned here and there. Thanks to all those who have blogged about various PowerShell issues, and particular thanks to Roman Kuzmin for his compendium of PowerShell Traps and the folks who compiled the Big Book of PowerShell Gotchas!


Selecting a string from an object is not a string


Here we grab the C:\temp directory and attempt to collect its name, but instead of showing “name is temp” you get this:

PS> $myName = Get-ChildItem C:\ -Filter temp | Select-Object -Property name
PS> "name is $myName"
name is @{Name=temp}

And you can confirm that you do not have a string:

PS> $myName.GetType().FullName
System.Management.Automation.PSCustomObject


PowerShell cmdlets return objects most of the time, rather than simple types (integers or strings). Select-Object is often used to do final selection, but it is designed to return objects, even when you ask for a single property. You have to tell it to just return the value of the property:

PS> $myName = Get-ChildItem C:\ -Filter temp | Select-Object -ExpandProperty name
PS> "name is $myName"
name is temp

Some cmdlets provide this as a built-in convenience so the above could be shortened:

PS> $myName = Get-ChildItem C:\ -Filter temp -Name

Reference: PowerShell Gotchas, Properties vs. Values

Interpolating object properties within a string does not work


This example shows that instead of the value of the DisplayName property of the $svc variable you get the type of the $svc variable and the property name “DisplayName” (I have truncated the output here for brevity):

PS> $svc = Get-Service -Name winrm
PS> "svc is $svc.DisplayName"
svc is ServiceController.DisplayName


PowerShell interpolates from a dollar sign ($) until the first character that is not a valid variable name character; in this case, a period. So, everything after the period is just text and is copied verbatim. The variable $svc is an object, and interpolating it within a string yields its type name rather than its contents. But PowerShell can also interpolate expressions within a string, using the form $(expression), and that is the fix here:

PS> $svc = Get-Service -Name winrm
PS> "svc is $($svc.DisplayName)"
svc is Windows Remote Management

Reference: Jeffrey Snover’s post Variable expansion in strings and here-strings

Parameters are passed incorrectly to a function


Calling a function with commas between the arguments, or with parentheses around the argument list, does not work as you might expect; all of these attempts either fail or pass the wrong arguments:

PS> func a,b,c
PS> func(a,b,c)
PS> func(a b c)


Use only spaces between arguments: no parentheses and no commas:

PS> func a b c

Reference: PowerShell Pitfalls, Part 1

Comparisons are not always commutative


These two expressions are not the same:

PS> $false -eq ''
PS> '' -eq $false

Reason: the right-hand operand is coerced to the type of the left-hand operand when comparing dissimilar types. So $false -eq '' is $true because the empty string is false when coerced to a Boolean, but '' -eq $false is $false because $false coerced to a string is “false”, which is obviously not an empty string.


Order your operands in a comparison with care so that you are actually testing what you intend to.
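If you cannot control which operand ends up on the left, another option (a small sketch, not from the original article) is to cast both operands explicitly so the comparison happens in the type you actually intend:

```powershell
# cast explicitly so the comparison is done in the type you care about:
[bool]'' -eq $false      # True:  as Booleans, both are false
[string]$false -eq ''    # False: as strings, 'False' is not ''
```

The unary cast binds more tightly than -eq, so no extra parentheses are needed.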

Reference: PowerShell Pitfalls, Part 1

Cmdlets return inconsistent results


Yes, a cmdlet’s return type varies depending on whether it returns none, one, or more than one value from a function. That is inherent in the way cmdlets work: a cmdlet returns some number of objects nominally to be piped to something else. If it turns out it sent multiple objects, the type of the bunch taken together is an object array. But if it only returned one, it is just that object type.
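A minimal sketch of this behavior, using a made-up helper function Get-Items:

```powershell
# emits $n strings down the pipeline
function Get-Items($n) { 1..$n | ForEach-Object { "item$_" } }

(Get-Items 3).GetType().Name   # Object[] - several objects become an array
(Get-Items 1).GetType().Name   # String   - a single object keeps its own type
@(Get-Items 1).GetType().Name  # Object[] - @() forces an array either way
```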


If a cmdlet potentially returns a stream of objects (an array), force it to always be an array for ease of handling, e.g.

PS> $result = @(Some-Cmdlet x y z)

You also have to force it on the receiving end. This…

PS> function foo() { return @( 5 ) }
PS> $result = foo

 … does not return a list with one integer; it simply returns one integer. Instead do this:

PS> function foo() { return 5 }
PS> $result = @( foo )

Reference: PowerShell Pitfalls, Part 1

Unable to pass arguments to external commands


PowerShell interferes with some arguments being passed to an external command, e.g.

PS> echoargs c:\tmp;c:\other

Certain characters are special to PowerShell; here the semicolon is causing the problem.


Use quotes to suppress parsing of an argument, e.g.

PS> echoargs 'c:\tmp;c:\other'

or use the verbatim parameter (--%) to suppress all further processing on a line, e.g.

PS> echoargs --% c:\tmp;c:\other

Piping a hash table does not work


Piping a hash table sends it as a single object rather than as a stream of Key/Value pairs. In this example, attempting to select the first key/value pair actually returns the entire hash:

PS> $hash = @{"a"=1;"b"=2;"c"='xyz' }
PS> $hash | Select -first 1


Use the hash table’s GetEnumerator() method to convert it to a pipeable collection.

PS> $hash.GetEnumerator() | Select -First 1

Reference: PowerShell Pitfalls, Part 2

After filtering a hash table you no longer have a hash table


Building upon the last issue, you can filter a hash table’s entries like this:

PS> $intHash = $hash.GetEnumerator() | where Value -is [int]

But you no longer have a hash table!


To filter a hash table and retain its type, use its GetEnumerator() to feed to the pipeline, then reconvert to a hash table (with e.g. ConvertTo-HashTable provided in the reference).

PS> $intHash = $hash.GetEnumerator() | where Value -is [int] | ConvertTo-HashTable

Reference: PowerShell Pitfalls, Part 2

Functions return too much


A function returns all uncaptured output, not just what you pass to a return statement. This includes your own output from Write-Output calls as well as outputs from cmdlets you call.

So this function to square a number will produce unexpected results:

function square($x) {
    Write-Output "Squaring $x..."
    return $x * $x
}


When writing a function to return a value use Write-Verbose or Write-Warning, rather than Write-Output, for console output. Also, ensure that you capture any output from each cmdlet you call.
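For example, the square function above could be reworked like this (a sketch using Write-Verbose, as suggested), so only the product ends up in the output stream:

```powershell
function square($x) {
    Write-Verbose "Squaring $x..."   # verbose stream, not the output stream
    return $x * $x
}

$result = square 7   # $result is just 49
```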

Reference: PowerShell Pitfalls, Part 1

Arithmetic operators produce unexpected results with arrays (precedence issue)


Simple arithmetic operations with an array seem wrong. For example, 1,2 * 3 does not return a 2-element list containing 1 and 6. It is really (1,2) * 3, so it returns a 6-element list: 1,2,1,2,1,2.


Comma has higher precedence than arithmetic operators! Use parentheses to override operator precedence.

PS> 1, (2 * 3)

Reference: PowerShell Pitfalls, Part 1

Arithmetic operators produce unexpected results with arrays (operation issue)


Applying arithmetic operators to a list operates on the list as a whole, not on individual elements of the list, so e.g. (1,2) * 3 turns a 2-element list into a 6-element list, rather than multiplying each element by 3.


To map an arithmetic (or other) operation to each element of a list do it with a loop:

PS> 1, 2 | ForEach { $_ * 3 }

Reference: PowerShell Pitfalls, Part 1

Zero is not always zero when using enums


Here we use the familiar enum ConsoleColor, whose Black element happens to map to 0. The equality comparison is commutative: enum coerced to int is 0; int coerced to enum is Black.

PS> $object = [ConsoleColor]::Black
PS> if (0 -eq $object) { 'Yes!' }
PS> if ($object -eq 0) { 'Yes!' }

But a conditional that evaluates to $false, like 0, should be skipped!

PS> if (0) { 'this will not appear' }
PS> if ($object) { 'this appears!' }


Be wary when testing the truth-value of an object (as opposed to a Boolean predicate). Zero is $false, an empty string is $false, and an empty array is $false, but any other object, even when effectively empty, is $true.

PS> $object = [ConsoleColor]::Black
PS> if ($object) { 'yes!' }
PS> $object = @{} # empty hash table
PS> if ($object) { 'yes!' }
PS> $object = [PSCustomObject]@{} # empty custom object
PS> if ($object) { 'yes!' }

Reference: PowerShell Traps, Enums-are-always-evaluated-to-true

Equality operators behave strangely with a list


This makes it look like the object is both equal and unequal to 1 at the same time!

PS> $obj = 1, $null, 2, 'abc', 3
PS> if ($obj -eq 1) { 'yes!' }
PS> if ($obj -ne 1) { 'yes!' }

This is particularly pernicious when looking for $null:

PS> if ($obj -eq $null) { 'got here!' }
got here!


In reality, the equality operators act as filters, a rather obscure feature; there is one item that is '1' and four items that are not '1':

PS> $obj = 1, $null, 2, 'abc', 3
PS> $eqResult = $obj -eq 1
PS> $neResult = $obj -ne 1
PS> $eqResult.Count
1
PS> $neResult.Count
4

This is again a variation on the type coercion between dissimilarly typed operands, as evinced by reversing them. Here you get intuitive results: the list is not equal to the integer 1:

PS> if (1 -eq $obj) { 'yes!' } else {'no'}
PS> if (1 -ne $obj) { 'yes!' } else {'no'}

Reference: PowerShell Traps, looks-like-object-is-null

An argument to a switch parameter is ignored


Typically you pass a value to a parameter with -name value. But with a switch parameter that does not work. Consider this function:

function f([switch]$negate)
{ if ($negate) { -42 } else { 42 } }

PowerShell ignores the seeming assignment of $negate to $false:

PS> f -negate $false


Switch parameters do not take separate arguments; you either include the switch or attach a value to it with a colon, as in any of these variations:

PS> f -negate
PS> f -negate:$false
PS> f -negate:$true

rather than, e.g.

PS> f -negate $false

Reference: PowerShell Pitfalls, Part 1

An empty string passed to an external application disappears


This command, for example, passes 2 arguments, not 3, to the PowerShell Community Extensions utility program echoargs:

PS> echoargs "a" "" "b"


Use the verbatim parameter (--%) or use an escaped quoted value (`"x`") or (`"$var`"), e.g.

PS> echoargs --% "a" "" "b"

Reference: PowerShell Pitfalls, Part 2

Omitting a $ on a variable reference might produce unexpected results rather than just an error


In this example, 'a' without a preceding dollar sign is happily accepted as just a string constant, so there is no error and you get a different result.

PS> $a = "b"
PS> $list = "a","b","c"
PS> $list | Select-String a
PS> $list | Select-String $a


None really; use code reviews and unit tests.

Reference: PowerShell Pitfalls, Part 2

Re-importing a changed module does not work


After modifying code in a module, just doing Import-Module x again does not find your code changes if you have previously loaded the module in your current session.


Import-Module silently ignores your request to reload unless you force it: use Import-Module -Force to load your modified code. Alternatively, you could unload the module with Remove-Module and then import it again if you prefer.
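A self-contained sketch of the behavior (the module name MyDemo and the temp-file path are illustrative):

```powershell
# create a throwaway module to demonstrate
$modPath = Join-Path ([IO.Path]::GetTempPath()) 'MyDemo.psm1'
'function Get-Greeting { "hello" }' | Set-Content $modPath
Import-Module $modPath
Get-Greeting                    # hello

# "edit" the module on disk...
'function Get-Greeting { "goodbye" }' | Set-Content $modPath
Import-Module $modPath          # silently ignored: already loaded
Get-Greeting                    # still hello

Import-Module $modPath -Force   # actually reloads the changed code
Get-Greeting                    # goodbye
```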

Reference: PowerShell Pitfalls, Part 2

Piping Format-Table into ConvertTo-xyz fails


Say you created some useful output…

PS> ps | Select -first 10 | Format-Table

And you want to pipe that nicely formatted result into ConvertTo-Html or ConvertTo-Csv (or Export-Csv), etc. Space precludes sample output, but try this and you will see that the output is quite different from above:

PS> ps | Select -first 10 | Format-Table | ConvertTo-Csv


Avoid using Format-Table except at the end of a pipe. Format-Table outputs formatting codes for display, which makes its output unsuitable for piping to other cmdlets in general. (Though you can safely pipe Format-Table to a few select cmdlets like Out-File and Out-String.)
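You can see what Format-Table really emits by capturing its output (a sketch; the exact internal type name may vary by PowerShell version):

```powershell
$data = Get-Process | Select-Object -First 10
$formatted = $data | Format-Table

$data[0].GetType().FullName        # a real Process object
$formatted[0].GetType().FullName   # an internal ...Format.FormatStartData object
```

Those internal formatting objects are what ConvertTo-Csv ends up serializing.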

Reference: PowerShell Gotchas, Format Right

Mixing Format-Table output amongst other output creates unexpected results


Consider this series of four commands (aliases used for brevity) output to the console:

PS> gps idle; gps system; @{a=0}; gsv winrm; gps idle

PowerShell begins outputting objects from Get-Process (alias gps) in table format, then encounters two commands that generate other kinds of output so just outputs those in list format, and resumes outputting process objects with the last command.

Now, just add a Format-Table (alias ft) in the middle. (This is quite different from the prior ft issue; this ft is correctly at the end of a pipe.)

PS> gps idle; gps system; @{a=0} | ft; gsv winrm; gps idle

The output is different not just for the third command that we changed, but for the remaining commands as well!

(With thanks to “Mike Z” in this StackOverflow post.)


Format-Table (alias ft) apparently resets the object type that PowerShell thinks is current. So the Get-Service (alias gsv) output sets a new current object type, and it is output as a table. When gps runs again, its objects no longer match the current type, so they are output in list format.

The workaround is simple: Format-Table should only ever be used as a terminal command; do not sandwich it in the middle of a sequence of like objects.

Programs prompting for input hang in ISE


A C# program (or other external application) asking for input runs fine in a “plain” PowerShell window but hangs in PowerShell ISE.


Likely the C# code implementing the PS cmdlet uses Console.ReadLine(), which is not supported in PowerShell ISE. Revise the program to use Host.UI.ReadLine() instead, which works in either PowerShell ISE or the regular PowerShell window.
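For reference, the same distinction is visible from the PowerShell side (a sketch; the prompt text is illustrative):

```powershell
# hangs in the ISE: there is no attached console to read from
# $answer = [Console]::ReadLine()

# works in any host, because it goes through the host abstraction:
$answer = $Host.UI.ReadLine()

# or, simplest of all in a script:
$answer = Read-Host 'Enter a value'
```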

The pipeline is not pipelining


The PowerShell pipeline is supposed to (more or less) pump each object down the pipeline from one function to the next as soon as it is ready. Some homespun functions, though, do not send anything along the pipeline until they have processed all of their inputs. For example, this code looks fine in isolation, but if you pipe it to something else, its data shows up all at once at the end:

function f ([string[]] $data) {
  $out = @()
  foreach ($s in $data) {
    Start-Sleep 2; $out += $s }
  Write-Output $out
}


You need to adapt your style for writing pipelineable code: rather than accumulate output internally, just output it as it goes. In other words, let PowerShell accumulate it for you.

function f ([string[]] $data) {
  foreach ($s in $data) {
    Start-Sleep 2; Write-Output $s }
}

Reference: PowerShell Gotchas, Accumulating Output in a Function

-contains seems broken; it only works on whole strings


With notepad running, this finds it with a whole name:

PS> Get-Process | Where Name -contains 'notepad'

But attempting to use a substring like this fails:

PS> Get-Process | Where Name -contains 'note'


The -contains operator does not do operations against a string; it only operates on a list. That is, it answers the question “does this list contain a given element?” To check for a whole or partial string match use -like or -match.

PS> Get-Process | Where Name -match 'note'
PS> Get-Process | Where Name -like '*note*'

Reference: PowerShell Gotchas, -Contains isn’t -Like

PowerShell does not find known properties


Here you select some properties about services and want to display just the running services, but it either returns nothing (strict mode off) or causes an error (strict mode on).

PS> Get-Service | Select Name, DisplayName | Where Status -eq Running


The order of pipeline statements matters. After the Select, only two properties exist; there is no Status property anymore! Do the Where before the Select:

PS> Get-Service |
Where Status -eq Running | Select Name,DisplayName

Reference: PowerShell Gotchas, You Can’t Have What You Don’t Have

-Filter use is inconsistent across different cmdlets


Consider these examples, showing that -filter needs a glob-pattern, a Boolean predicate with '=', and a Boolean predicate wrapped in a block using '-eq', respectively:

PS> Get-ChildItem -Filter *.html
PS> Get-WmiObject -Class Win32_LogicalDisk -Filter "DriveType=3"
PS> Get-ADUser -Filter { title -eq 'CTO' }


That’s not PowerShell’s fault; it just passes along the parameter value to the underlying provider (the file system, WMI, or ActiveDirectory). And those technologies have long ago created their varying syntaxes for a filter. Sadly, the workaround is to read the documentation to see what you need.

Reference: PowerShell Gotchas, -Filter Values Diversity

Code split across multiple lines does not always work


If you copy/paste this (don't type it in!), it will fail:

PS> Get-Service -name win* ` 
    -exclude winrm `

All whitespace is not created equal in PowerShell. Line breaks can be added with impunity after certain characters ( | , . :: ; ( [ { = ). But you can forcibly add a line break anywhere else by preceding it with a backtick, as above. The problem is that if there is a space after any of those backticks (like there is in the first line), the code breaks!


Use different patterns to avoid backticks whenever possible. Here, create a hash table of the cmdlet arguments (note that the switch parameter accepts either a $true or $false value) where you can add line breaks between expressions without backticks:

PS> $params = @{
    name = 'win*'
    exclude = 'winrm'
    DependentServices = $false
}

then splat that hash table into arguments for the cmdlet, all with no backticks:

PS> Get-Service @params

Reference: PowerShell Gotchas, Backtick, Grave Accent, Escape


Objects lose their types when you modify them


In PowerShell it is easy to create a strongly-typed array:

PS> $array = [Int[]](1,2,3,4,5)
PS> $array.GetType().FullName
System.Int32[]

But, like Heisenberg’s principle states, touching that data causes the type to change! Here we just add a new element to the array and now they are all vanilla Objects:

PS> $array += 6
PS> $array.GetType().FullName
System.Object[]


The fix, though unexpected, is to strongly type the variable directly rather than the data assigned to it, and it will retain its typing:

PS> [Int[]]$array = 1,2,3,4,5
PS> $array.GetType().FullName
System.Int32[]
PS> $array += 6
PS> $array.GetType().FullName
System.Int32[]

Reference: Strongly Typed Arrays

Variables in a script are inaccessible in the shell


Say, for example, you have trial.ps1 containing just this:

$someVariable = 123.45
"value is $someVariable"

Run the script then try to display the variable again, but either it does not exist (strict mode off) or it will cause an error (strict mode on):

PS> C:\temp\trial.ps1
value is 123.45
PS> "now value is $someVariable"
now value is


By just invoking a script you run everything in a child scope of your current scope. Once the script finishes, that child scope goes away; it does not affect the current scope at all. To invoke a script in the current scope, you must dot-source it, which is simply a dot followed by a space and the name of your source file.

PS> . C:\temp\trial.ps1
value is 123.45
PS> "now value is $someVariable"
now value is 123.45

Serializing/deserializing PS objects to XML loses some data types


Create some data, export it, then re-import it:

$t1 = @{ myData = 11, 22, 33 }
$t1 | Export-Clixml temp.clixml
$t2 = Import-Clixml temp.clixml

Now examine the data types of the myData property:

$t1.myData.GetType().Name # Object[]
$t2.myData.GetType().Name # ArrayList

A second example:

$t1 = [ordered]@{ p1=11;p2=22;p3=33 }
$t1 | Export-Clixml temp.clixml
$t2 = Import-Clixml temp.clixml
$t1.GetType().Name #OrderedDictionary
$t2.GetType().Name #Hashtable

Certain types are preserved, i.e. those integers above remain real integers. Likewise with other simple types, but not with complex ones.

For example add another property inside $t1:

$t1 = @{ 
    myData = 11, 22, 33
    svcs = Get-Service
}
And the type of the elements within svcs changes from ServiceController to PSObject.


No simple fix. If datatype matters, you will need to explicitly correct the objects in your code.

Reference: PowerShell Traps, Array-becomes-ArrayList

Reference: PowerShell Traps, OrderedDictionary-becomes-Hashtable

Imported CSV data are always strings


This is easy to forget, because when working with data within PowerShell you get used to strong typing. This will sort by the number of seconds of CPU time as a decimal value:

PS> Get-Process | Select ProcessName,CPU | Sort CPU

But if you read the values from a CSV file, for example, you only have strings. This just sorts lexicographically:

PS> Get-Process | Select ProcessName,CPU | Export-Csv data.csv
PS> Import-Csv data.csv | Sort CPU


To import data with strong typing, generate a series of PSCustomObjects with explicit type casting on appropriate elements:

PS> Get-Process | Select ProcessName,CPU |
Export-Csv data.csv
PS> Import-Csv data.csv | 
ForEach {
    [PSCustomObject] @{
        Time = [double]$_.CPU;
        Name = $_.ProcessName;
    }} |
Sort Time

Switch statements do strange things


Like many languages, PowerShell offers switch as an alternative to a series of if statements. But in PowerShell, switch is also a looping construct; it can operate on a collection.


* break and continue inside a switch block apply to the switch, not to any containing loop, as is more common in other languages.

* a switch inside a script block introduces its own context for $_, hiding any such value from an outer block.

* break works like you would expect when using switch on a single value, but when used on a collection it stops the whole loop.

Space prohibits detailed examples of all these; see the reference.
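As one small illustration of the looping behavior (the other cases are described in the reference): switch iterates over a collection, and break ends the whole loop, not just the current case:

```powershell
switch (1, 2, 3, 4) {
    2       { 'two' }
    3       { 'three'; break }   # break stops the whole switch loop
    default { "other: $_" }
}
# prints: other: 1, two, three -- the 4 is never processed
```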

Reference: PowerShell Traps, Switch-is-a-looping-construct

PowerShell does not always return a Count or Length property with strict mode


As a convenience, PowerShell V3 introduced the Count or Length pseudo-property for scalar objects the way it has always been available for arrays. This lets you check the cardinality of a returned collection for 1 as easily as you could for more than 1 without requiring extra code. This pseudo-property works fine with strict mode disabled, but when enabled it causes an error:

PS> Set-StrictMode -Version Latest
PS> $array = 1..5; $array.Count
PS> $item = 25; $item.Count
The property 'Count' cannot be found on this object.


PowerShell tries to prevent scripting errors by allowing you to query the Count or Length property even on a single object. And this works fine if strict mode is disabled. But, alas, it causes an error with strict mode enabled. You must still apply extra code for special casing under such circumstances:

PS> $item = 25
PS> $returnCount = if ($item -is [array]) { $item.Count } elseif ($item) { 1 } else { 0 }

Reference: about_Properties (Properties of Scalar Objects and Collections)


Invoking PowerShell.exe without specifying a version may invoke a newer PowerShell than that of the current host


Assuming you are in a version 3 or greater PowerShell, switch to version 2 PowerShell as a starting point:

PS> powershell -version 2 -noprofile

Now invoke powershell.exe without a version and just print its version:

PS> powershell -noprofile '$PSVersionTable.PSVersion.ToString()'

That will print 3.0 (or whatever your latest version is) rather than 2.0.


If you are in a given version and want a subshell of the same version, specify that version explicitly when invoking powershell.exe:

PS> powershell -version 2 -noprofile '$PSVersionTable.PSVersion.ToString()'

Reference: PowerShell Traps, Unwanted-version

Invoking PowerShell.exe will launch the wrong version if the version parameter is not first


Nothing prevents attempting to launch a subshell with a specific version of PowerShell using either of these:

PS> PowerShell -Version 2 -NoProfile 
PS> PowerShell -NoProfile -Version 2 

However, they produce different results!

Assuming you are in a version 3 shell or later, the first command will correctly run a version 2 shell, while the second will ignore that parameter, running the same version you started in.


Normally named parameters are order-independent but for some reason that is not the case here. The workaround is straightforward: make sure the -Version parameter is first.

Reference: PowerShell Traps, Version-must-be-first

Relative script references in a script often fail


Dot-sourcing a file inside a script using a relative path fetches the file relative to your current directory, not the directory where the script resides. For example, put this into script1.ps1 in directory A:

. .\script2.ps1

And put this into script2.ps1 in the same directory:

function Get-MyMessage() { 'hello' }

The first script will run fine only if you are in the same directory when you run it. Set your location to a different directory, directory B, and attempt to run the first script:

PS> A\script1.ps1

It fails, saying it can find neither script2.ps1 nor Get-MyMessage.


To make a script use a path relative to itself rather than your current directory, use $PSScriptRoot. The script1.ps1 file should thus contain:

. $PSScriptRoot\script2.ps1

Then the script should run successfully.

Dot notation for XML seems flaky


PowerShell’s dynamic object support for XML is powerful but can sometimes lie to you. Consider this XML:

PS> $xml = [xml]'<Root><Child /><Child /></Root>'
PS> @($xml.SelectNodes("/Root/Child")).Count
PS> @($xml.Root.Child).Count

Using both an XPath accessor and a dynamic object accessor (popularly called “dot notation”) you get back an accurate count of <Child> elements. But not so here:

PS> $xml = [xml]'<Root></Root>'
PS> @($xml.SelectNodes("/Root/Child")).Count
PS> @($xml.Root.Child).Count

XPath says correctly that there are no such children; dot notation says there is one child! (That assumes you have strict mode disabled; with strict mode enabled the last line will instead generate an error.) (With thanks to Nick in this StackOverflow post.)


The issue here has nothing to do with XML per se. Rather, it is a peculiarity of PowerShell’s dynamic object support, whatever the source: you get a $null whenever a property in the chain is missing. Here we get $xml.Root.Child being $null, but you would also get $null for, e.g. $xml.Root.Child.Other.Nonexistent.Node. That sounds eminently reasonable and, in fact, was a specific design decision (according to Bruce Payette in a response to this post). Bruce goes on to suggest that in scripts you should always enable strict mode in which case you would get an error, rather than a $null.

But with strict mode disabled, you do have a $null, so let’s see what happens to it. Before applying the Count property, we wrap it with @() to force the results to an array. That way, presumably whether you got back none, one, or many elements, it would always be an array (as described in an earlier pitfall). But here you are not getting “none”-you are getting a $null. And @($null) is in fact, an array of one element, that element being a $null.

The workaround is to turn $null into none when appropriate:

@($xml.Root.Child | Where { $_ }).Count

$null is a $null is a $null… not?


Consider these functions foo and bar which both apparently return $null:

PS> function foo() { $null }
PS> function bar() { }
PS> (foo) -eq $null
PS> (bar) -eq $null

But now compare what happens when you feed each of those into a pipe:

PS> foo | %{ "this prints" }
this prints
PS> bar | %{ "this does not print!" }

(With thanks to "alex2k8" in this StackOverflow post.)


This is another manifestation of the $null vs none conundrum mentioned above. The function foo() returned something, a single object, that happens to be $null. You can certainly feed any single object into a pipe, even a null. The bar() function, on the other hand, returned nothing, not even a $null, so there is nothing to feed into the pipeline!

There is no specific workaround here; just be cognizant that when you expect a return value from a function, you must make sure that function returns something. Your functions will not be as blatantly obvious as the bar() function here!
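One way to make the difference visible is to force each result into an array, as an earlier pitfall suggested:

```powershell
function foo() { $null }   # returns one object, which happens to be $null
function bar() { }         # returns nothing at all

@(foo).Count   # 1 - one object in the array, a $null
@(bar).Count   # 0 - no objects at all
```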

PowerShell intermittently fails to enforce strong typing


You can specify types for function parameters to demand that PowerShell check those parameters:

PS> function f([int]$a, [bool]$b, [string]$c) { "$a, $b, <$c>" }

Then f -a 25 will work fine while f -a xyz will report “Cannot convert value ‘xyz’ to type ‘System.Int32’.” Great; it does strong type checking! Well, it would more accurately be called strong-unless-a-suitable-conversion-exists type checking. So f -b xyz is rejected as not a Boolean, but f -b 1 succeeds, converting 1 to $true. (Ironically, f -b true is rejected, since that is just text like xyz; you must use f -b $true to pass a Boolean.) Worse yet, f -a 5.42 succeeds, but $a is just 5 after being converted to an integer! And any object has a ToString() method, so anything can be passed into parameter $c, e.g. f -c (get-process svchost) runs without complaint but gives ugly output. (With thanks to Richard Berg in this StackOverflow post.)


To avoid unexpected arguments being auto-converted, you have to do your type-checking manually. You might get by with something like this, which omits the type specifier on each parameter completely and does type checking in the body:

function f($a, $b, $c) {
    if ($a -and $a -isnot [int]) { throw 'not an int' }
    if ($b -and $b -isnot [bool]) { throw 'not a bool' }
    if ($c -and $c -isnot [string]) { throw 'not a string' }
    "$a, $b, <$c>"
}
Then f -b 1, for example, would be rejected, as would f -a 5.42 and f -c (get-process svchost). Alas, that would still falter on certain degenerate cases. f -b "", for example, arguably should be rejected but it is not.

Yes, there are other pitfalls out there that did not make it on this list! There are two possible reasons why:

  1. Deliberate omission because the pitfall is too convoluted, arcane, or peculiar. Perhaps an issue might affect just five people out there. Sorry if you are one of those five! But I needed to focus on what would affect, and thus interest, the most people.

  2. Inadvertent omission because I never got snared by it and/or I missed it in my research for this article.

If you know of a pitfall that might affect many people, then by all means share it in a comment below!
