Imagine a lit candle that takes 1 hour to burn out. Now imagine one hundred candles. How long will they take? That depends: if they are all lit simultaneously, it will still take only 1 hour.
That is the basic idea of running work in the background, or asynchronously. Of course, 100 candles burn completely independently of one another, which is not quite what happens when you try to run 100 processes on a computer with only 2 cores.
PowerShell has a couple of ways to manage this: PowerShell jobs, which we will see in this article, and runspaces, which require more programming against the .NET API.
PowerShell Jobs
PowerShell Jobs are a valuable tool that allows you to run commands or scripts in the background without interrupting the current session. Jobs provide a way to run processes asynchronously: you can start a job and then move on to other tasks while it completes in the background. You can manage and control these jobs within your PowerShell session.
PowerShell can concurrently execute scripts and commands using jobs. There are three job types that you can use to support concurrency.
- RemoteJob – Commands and scripts run in a remote session.
- BackgroundJob – Commands and scripts run in a separate process on the local machine.
- PSTaskJob or ThreadJob – Commands and scripts executed in a separate thread within the same process on the local machine.
In this article, I will focus on using BackgroundJobs, but the three types behave similarly other than the context in which they are executed.
Starting a job on the local computer
To create and start a job on the local computer, use the Start-Job cmdlet. It takes a script block as a parameter.
What is a scriptblock?
In PowerShell, a script block is a collection of statements or expressions that can be used as a single unit. The collection of instructions can be enclosed in braces ({}), defined as a function, or saved in a script file. A script block can return values and accept parameters and arguments.
The following command starts a background job that runs the Get-Service cmdlet on the local computer.
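A minimal sketch of such a command (the exact call shown in the original is not preserved here, so treat this as an illustration):

```powershell
# Run Get-Service in a background job; control returns to the session immediately
Start-Job -ScriptBlock { Get-Service }
```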
Because the job runs in the background, Start-Job immediately returns information about the job that was created, and you can continue your work in the session without interruption while the job runs. The output includes the job's Id, its Name – if you don't name your job, PowerShell generates one of the form JobX – and other details about the job.
Getting the result of a job
The result does not appear immediately when you run a background job, because the job runs in another context. To get the result, you need to use the Receive-Job cmdlet. As we saw in the Start-Job output, the job Id is 7, so we can use this Id.
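A sketch, assuming the job Id 7 from the example above:

```powershell
# Retrieve the output produced so far by job 7
Receive-Job -Id 7
```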
Working with Receive-Job
If we try to rerun Receive-Job, it will not show anything. That happens because, by default, Receive-Job deletes the results from the cache as it returns them. When you run Receive-Job again, you get only the new results produced after the first run.
To prevent that from happening, use the -Keep parameter.
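For instance (again assuming job Id 7 from the earlier example):

```powershell
# -Keep returns the results but leaves them in the job's cache
Receive-Job -Id 7 -Keep
```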
This way, you can use Receive-Job multiple times without deleting the cached data.
You can also use the -Wait parameter of the Receive-Job cmdlet. With this parameter, the cmdlet does not return control to the prompt until the job is completed and all results are available.
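A sketch, with the same illustrative job Id:

```powershell
# -Wait blocks the prompt until the job finishes, then returns all results
Receive-Job -Id 7 -Wait
```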
Getting a list of jobs
You may have jobs that take a long time to finish. Using the Get-Job cmdlet, you can get a list of the jobs that are running or completed.
First, let's run a job that will take a long time to finish. Then let's get the job list and its status.
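A sketch of both steps; the job name and the sleep duration are arbitrary choices for illustration:

```powershell
# A deliberately slow job: sleeps for 5 minutes
Start-Job -Name LongJob -ScriptBlock { Start-Sleep -Seconds 300 }

# List all jobs in the session with their State (Running, Completed, Failed, ...)
Get-Job
```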
You can also use the Wait-Job cmdlet to wait for any or all of the job results. Wait-Job lets you wait for one or more specific jobs or for all jobs. As a result, the PowerShell prompt is suppressed until the job is completed.
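A sketch, assuming a job named LongJob (a hypothetical name) is running:

```powershell
# Block the prompt until the named job completes
Wait-Job -Name LongJob

# Or wait for every job in the session
Get-Job | Wait-Job
```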
Stopping a Job
To stop a background job, use the Stop-Job cmdlet. Let's stop the job that is running.
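A sketch, again assuming a running job with the hypothetical name LongJob:

```powershell
# Stop the running job; its State changes to Stopped
Stop-Job -Name LongJob
```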
Deleting a Job
To delete a background job, use the Remove-Job cmdlet.
Let's remove all jobs. I also used the -Force parameter so that any job still running is stopped and removed as well.
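A minimal sketch of removing every job in the session:

```powershell
# Remove all jobs; -Force also stops and removes jobs that are still running
Get-Job | Remove-Job -Force
```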
Why did my job fail?
Jobs can fail for many reasons. The job object contains a Reason property that provides information about the cause of the failure. First, let's store the result of Start-Job in a variable, $job, using a call that fails because the value passed to Start-Sleep is larger than an Int32 can hold.
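A sketch of that failure, assuming Windows PowerShell, where the -Seconds parameter of Start-Sleep is an Int32; the failure reason lives on the child job's JobStateInfo:

```powershell
# Pass a value larger than Int32 can hold; parameter binding fails inside the job
$job = Start-Job -ScriptBlock { param($seconds) Start-Sleep -Seconds $seconds } -ArgumentList ([int64]::MaxValue)
Wait-Job $job | Out-Null

# Inspect why the job failed
$job.ChildJobs[0].JobStateInfo.Reason
```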
Sending values to a job
Because a background job runs in another context, the variables in your session are not visible inside the job. You need to pass values through the -ArgumentList parameter and add a param($variable) block to the script block.
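A minimal sketch of passing a session variable into a job:

```powershell
$seconds = 10

# The job cannot see $seconds directly; pass it via -ArgumentList
# and receive it through param() inside the script block
$job = Start-Job -ScriptBlock { param($s) Start-Sleep -Seconds $s } -ArgumentList $seconds
```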
Being notified when a job finishes
We can do that, but we need another cmdlet, Register-ObjectEvent. We will watch for the job's state change.
```powershell
$job = Start-Job -ScriptBlock { param($seconds) Start-Sleep $seconds } -ArgumentList 10

Register-ObjectEvent -InputObject $job -EventName StateChanged -MessageData $job.Id -SourceIdentifier Job.Monitor -Action {
    $Global:t = $event
    Write-Host ("Job ID {0} has changed from {1} to {2}" -f $t.Sender.Id, $t.SourceEventArgs.PreviousJobStateInfo.State, $t.SourceEventArgs.JobStateInfo.State) -ForegroundColor Green -BackgroundColor Black
}
```
Using Split-Job
Split-Job is a community function that works with runspaces, and in my opinion it is the best way to run your PowerShell code asynchronously with runspaces. The original can be found here, but I made some changes and customized it. You can download that code in a .zip file here from the Simple Talk site.
I would use Split-Job instead of PowerShell jobs because, unlike PowerShell jobs, it lets me set the number of concurrent runspaces without bloating the queue of jobs. With PowerShell jobs, I would need extra code to control how many jobs run in parallel and to keep the queue from growing. With Split-Job, I just pass the -MaxPipelines parameter, which defines how many pipelines run concurrently at a time. Another point: like PowerShell jobs, it uses runspaces, but it does so explicitly in the code rather than as a black box like PowerShell jobs.
You call the Split-Job function around the main ForEach-Object that runs the process. For instance:
```powershell
1..200 | Split-Job { ForEach-Object { $_ } }
```
When you run through Split-Job, each pipeline runs in its own runspace, and variables and functions are visible only within that runspace. Because the code runs in another runspace, any variable you want to use inside the main ForEach-Object must be passed in as a parameter.
```powershell
$var = 10
1..200 | Split-Job { ForEach-Object { "$($_) and $var" } } -Variable var
```
You can also control the number of pipelines sent to the system by using the -MaxPipelines parameter. In the following example, up to 50 runspaces will be opened asynchronously.
```powershell
$var = 10
1..200 | Split-Job { ForEach-Object { "$($_) and $var" } } -Variable var -MaxPipelines 50
```
If you need to pass functions, you can use the -Function parameter.
```powershell
function Test { Write-Host 'I am a function outside of the runspace' }
```
Then you can use it in the following code.
```powershell
$var = 10
1..200 | Split-Job { ForEach-Object {
    "$($_) and $var"
    Test
} } -Variable var -MaxPipelines 50 -Function Test
```
I created an example: a function that gets the computers from a specific AD group and uses Split-Job to test whether certain services and processes are running.
It is just one example of how powerful running asynchronously can be. If you have hundreds of servers to test, as I do, you will want to run them this way. You can find that code in the zip file, together with the Split-Job code, here from the Simple Talk site.