Asynchronous Processing in .NET Part 2



In Part 1, we looked at the fundamentals of asynchronous processing in .NET, and we saw that subtle bugs known as race conditions can occur if threads are not correctly synchronized. We have no way of knowing at what point the operating system will switch execution between threads: not only could this occur between any two lines of code in our program, but also – because even the simplest line of .NET code will generally compile to multiple native instructions – while a single line is in the process of executing. Synchronization is the means by which we protect ourselves from any unpredicted effects resulting from this sudden switching of threads. In this article, we'll take a more in-depth look at synchronization and the classes provided by .NET for synchronizing processing on multiple threads.

.NET Synchronization Objects

Synchronization objects allow us a degree of control over the order in which the operating system executes specific blocks of code on different threads. In general terms, they fulfil two fundamental functions:

  1. To prevent multiple threads accessing the same piece of data or running the same block of code simultaneously
  2. To control the sequence in which code is executed by different threads.

Typically, they are used to ensure the integrity of data, as this could be compromised if two threads attempt to modify the same piece of information at the same time, and the accuracy of the information, as we may not get a repeatable read if we try to read data while another thread is in the process of setting it. However, they can also be used to choreograph threads to a much finer degree. As an illustration of this, the second half of this article will be dedicated to an example that displays cars travelling over a crossroads, which uses synchronization objects to ensure that cars don’t crash into each other at the junction.

.NET provides an impressive array of synchronization objects that we can use to achieve these tasks, shown in the table below:




Monitor
A lightweight object typically used to lock access to resources shared across threads to prevent race conditions. This is the most commonly used synchronization object in C#, through the lock() statement.


Mutex
A more heavyweight kernel object that allows cross-process synchronization, so we can use it to protect resources that are shared not just by threads within a process, but by multiple processes. It could be used, for example, to protect a single data source shared by multiple instances of the same process.


Interlocked
Provides methods that perform operations on a value in a way that is guaranteed to be atomic, and so avoids the possibility of race conditions without the need for locks, including specialist methods for incrementing and decrementing integer values.


ReaderWriterLock
Used to protect access to a resource that can safely be read by a number of threads at the same time, but must be protected from being accessed by other threads when one thread is modifying it. This will cause substantially less blocking for data structures that are frequently read, but relatively rarely modified.


AutoResetEvent and ManualResetEvent
Used to signal to other threads when one thread has completed a certain action. Whereas Monitor and Mutex (and the writer lock of a ReaderWriterLock) block execution of protected code while one thread owns the lock, reset events use signalling to tell other threads whether they can proceed, so if the event is signalled, all threads can proceed, and if not, none can. We use manual reset events in the example to signal whether the junction is clear for cars to enter it horizontally or vertically.

Each of these classes provides specific functionality for different synchronization scenarios, and while you’ll use the Monitor class most frequently, it’s important to be aware of what the other objects offer.


Monitor

We met the Monitor class in Part 1, when we looked at the C# lock() statement, as this is the object that underpins it. The Monitor class is used to mark a block of code as a critical section, which cannot be executed simultaneously by multiple threads locking on the same object. This object must be a reference type, but that is the only restriction. Typically, the lock is obtained on the current object (the class in which the code resides), which will prevent multiple threads executing that code on the same instance of the class.

For value types, one possibility is to obtain the lock on the corresponding Type object, though there is a danger with this approach of hurting performance by blocking threads needlessly, if the same Type object is used to lock sections that can be executed simultaneously. For example, if we were to lock access to the properties of a struct called MyStruct using typeof(MyStruct), then not only would this block other threads accessing those properties in the same instance of the struct, it would also block access to any instance. For this reason, you should be very careful about locking on a Type object. Alternative approaches to consider include:

  • Using the Interlocked class (discussed later)
  • Locking on a reference-type field of the struct instead
  • Creating a new object instance to lock on.
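As an illustration of the last option, here is a minimal sketch (all names are invented for the example) that guards a struct-typed field with a dedicated lock object, rather than with typeof():

```csharp
struct Point
{
    public int X;
    public int Y;

    public Point(int x, int y) { X = x; Y = y; }
}

class PositionHolder
{
    // A dedicated reference-type object created purely to lock on
    private readonly object positionLock = new object();
    private Point position;   // the value-type state we want to protect

    public void Move(int dx, int dy)
    {
        // Locks only this instance's state; typeof(Point) would block
        // every PositionHolder (and anything else locking on that Type)
        lock (positionLock)
        {
            position = new Point(position.X + dx, position.Y + dy);
        }
    }
}
```

Because positionLock is private and used for nothing else, no unrelated code can contend for it.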

Enter, TryEnter and Exit

In addition to the lock() syntax we saw in Part 1, we can obtain a Monitor lock by explicitly calling one of the following static methods:

  • Enter() – obtains the lock on the specified object, blocking until the lock is available.
  • TryEnter() – as well as the object to lock on, TryEnter() can take as a parameter either an integer or a TimeSpan object, and will block until it can obtain the lock, or until the stipulated number of milliseconds (or time span) has elapsed. If the lock is obtained, it returns true; otherwise, it returns false. If this additional parameter isn't specified, the method returns immediately, whether or not the lock was obtained.

To release a lock once it’s been obtained, call the Exit() static method. Since it’s important to ensure that the lock is always released, even if an exception occurs between calling Enter() or TryEnter() and Exit(), you should include the entire critical section within a try block, placing the call to Exit() in the associated finally block, as happens implicitly with a lock() statement:
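The listing itself isn't reproduced here, but the pattern looks like this (the class and its field are invented for the sketch):

```csharp
using System.Threading;

class Account
{
    private readonly object balanceLock = new object();
    private decimal balance;

    public void Deposit(decimal amount)
    {
        Monitor.Enter(balanceLock);
        try
        {
            // Critical section: only one thread at a time can get here
            balance += amount;
        }
        finally
        {
            // Always runs, so the lock is released even if an exception occurs
            Monitor.Exit(balanceLock);
        }
    }
}
```

This is essentially what a lock (balanceLock) { … } block expands to.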

This use of the Monitor is pretty straightforward, but the Monitor class is also capable of more sophisticated synchronization.

The Wait() method

The Wait() method allows us to pause execution on the thread that owns the lock while it waits for other threads to complete their tasks. The Wait() method releases the lock, and blocks the thread either until it reacquires the lock, or until a specified timeout period elapses. When Wait() is called, the calling thread is placed in a waiting queue, where it will remain until Pulse() or PulseAll() is called for the same object. Pulse() moves the thread at the head of the waiting queue onto the ready queue, while PulseAll() moves all waiting threads. If the owner of the lock doesn't call Pulse() or PulseAll(), any waiting threads won't be moved to the ready queue, and therefore will never resume execution.

The following short console example uses the Wait() and Pulse() methods to count to 10 on two separate threads, ensuring that each successive increment of the count occurs on a different thread. The example creates two threads (which are given names so that we can identify them), and starts them running the same method, called Count(). This method obtains the Monitor lock for the current class instance, calls Pulse(), and then enters an infinite loop. For each iteration of this loop, we call Wait() to allow the other thread to execute, increment an integer field called count, and return from the method if its value is greater than 10. We then print out the new value of this field, together with the name of the thread on which the method is currently executing. At the end of each iteration, we call Pulse() to move the other thread to the ready queue, so that it will be able to execute when Wait() is called on the next iteration of the loop. This call to Pulse() is included in a finally block to ensure it’s always executed.
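A sketch of the example as described (class, field and thread names are assumptions; the original listing is in the code download):

```csharp
using System;
using System.Threading;

class PulseCounter
{
    private int count;

    public void Count()
    {
        lock (this)
        {
            // Wake the other thread if it's already waiting, then join the queue
            Monitor.Pulse(this);
            while (true)
            {
                try
                {
                    // Release the lock and wait until the other thread pulses us
                    Monitor.Wait(this);
                    count++;
                    if (count > 10)
                        return;
                    Console.WriteLine("{0}: {1}", Thread.CurrentThread.Name, count);
                }
                finally
                {
                    // Runs even when we return, so the other thread is always woken
                    Monitor.Pulse(this);
                }
            }
        }
    }

    static void Main()
    {
        PulseCounter pc = new PulseCounter();
        Thread t1 = new Thread(new ThreadStart(pc.Count));
        t1.Name = "Thread 1";
        Thread t2 = new Thread(new ThreadStart(pc.Count));
        t2.Name = "Thread 2";
        t1.Start();
        t2.Start();
    }
}
```

Because each thread waits before incrementing and pulses after, successive increments alternate between the two threads.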


Mutex

The Mutex is similar to the Monitor, in that it too is used to mark a critical section, but unlike the Monitor, the Mutex doesn't lock on another object. Instead, the Mutex itself is used as the lock, so multiple threads can't enter a section of code which is locked by the same Mutex. Moreover, because mutexes are identifiable by name, they can be used to perform synchronization across process and application domain boundaries. This means that you should name mutexes very carefully to avoid unintentionally blocking or being blocked by another, unrelated process.

We can specify, when we instantiate the mutex, whether we want to attempt to acquire ownership of it, and, if so, the constructor can also populate an output variable that indicates whether the thread does in fact have ownership of the mutex. For example:
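For instance (the mutex name is an arbitrary example):

```csharp
bool ownsMutex;
// Request initial ownership (first parameter true) of a named mutex
Mutex mutex = new Mutex(true, "SimpleTalk.ExampleMutex", out ownsMutex);
```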

Here the first parameter is set to true to indicate that we want initial ownership of the mutex, the second is the name of the mutex to create or open, and the third is an output bool parameter that is set to true if a new mutex was created (in which case, since we requested ownership, the calling thread owns it), or false if a mutex with that name already exists and is owned elsewhere. If the first parameter is set to false, so initial ownership isn't requested, and the third parameter is still included, it will be set to true if the mutex was newly created, even though the current thread doesn't own it.

Once the mutex has been instantiated, we can call the WaitOne() method to acquire ownership of the mutex. This can be called with no parameters, in which case it will block until ownership can be obtained; alternatively, we can pass in either a TimeSpan object or an integer representing the number of milliseconds to block for, together with a bool that specifies whether or not we should exit and reenter the synchronization domain (a synchronization domain is the synchronization context for a context-bound object; code in the same instance of a class in a synchronization domain isn’t permitted to execute simultaneously on multiple threads). The method returns a bool value indicating whether or not we did actually acquire ownership of the mutex.

The following simple console example attempts to get ownership of a named mutex. If it succeeds, it retains ownership until the user presses the return key; if it fails, it calls WaitOne() to block until it can acquire ownership. It then waits for the user to press return and releases the mutex. This program can be run from two (or more) separate command windows to demonstrate that the named mutex can be used across process boundaries: if one instance of the program already has ownership of the mutex, another instance won’t be able to acquire it until the user has pressed return in the first window.
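A condensed sketch of the program described (the mutex name and messages are illustrative):

```csharp
using System;
using System.Threading;

class MutexExample
{
    static void Main()
    {
        bool ownsMutex;
        Mutex mutex = new Mutex(true, "SimpleTalk.MutexExample", out ownsMutex);
        if (!ownsMutex)
        {
            Console.WriteLine("Waiting for the mutex...");
            // Block until the owning process releases the mutex
            mutex.WaitOne();
            Console.WriteLine("Acquired the mutex.");
        }
        Console.WriteLine("Press return to release the mutex.");
        Console.ReadLine();
        mutex.ReleaseMutex();
    }
}
```

Run two copies from separate command windows: the second will block at WaitOne() until you press return in the first.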

Mutex derives from the WaitHandle abstract class, and therefore also exposes the WaitAny() and WaitAll() static methods of this class. These methods block until respectively any one or all of an array of wait handles is signalled; we’ll look at them when we discuss AutoResetEvent and ManualResetEvent, which also derive from WaitHandle.


ReaderWriterLock

The Monitor and the Mutex both share one potential disadvantage: they block any access to a resource, regardless of whether it is being written to or merely read. However, it's very rare (if it happens at all) that you want to prevent multiple threads reading data simultaneously. Because the data isn't being modified, both threads will receive the same value. For example, suppose you want to retrieve an item from a collection. So long as the collection itself isn't being modified, it's safe to allow any number of threads to retrieve items from it. However, if you want to add or delete an element, you need to ensure that no other thread can access the collection while the operation is in progress.

It’s for just such tasks that the ReaderWriterLock is intended. When we obtain a ReaderWriterLock, we need to specify whether we want to read or write to the protected resource. Multiple readers will be allowed to access the resource simultaneously, but no other locks can be held when a thread has the writer lock.

To acquire a lock, we call either the AcquireReaderLock() or the AcquireWriterLock() method, as appropriate, on the ReaderWriterLock instance. Both of these methods require us to pass in a timeout period, either as a TimeSpan object or as the number of milliseconds. If the lock cannot be acquired within this period, an ApplicationException will be thrown.
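A minimal sketch of this usage (the class and field names are invented):

```csharp
using System;
using System.Threading;

class SharedValue
{
    private readonly ReaderWriterLock rwLock = new ReaderWriterLock();
    private int value;

    public int Read()
    {
        // Any number of threads may hold the reader lock simultaneously
        rwLock.AcquireReaderLock(TimeSpan.FromSeconds(5));
        try { return value; }
        finally { rwLock.ReleaseReaderLock(); }
    }

    public void Write(int newValue)
    {
        // The writer lock is exclusive: no readers or other writers may proceed
        rwLock.AcquireWriterLock(5000);   // timeout in milliseconds
        try { value = newValue; }
        finally { rwLock.ReleaseWriterLock(); }
    }
}
```

If either Acquire call times out, an ApplicationException is thrown and the lock is not held.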

We’ll see the ReaderWriterLock in action in the example that forms the second part of this article.

Reset Events

AutoResetEvent and ManualResetEvent are two closely related classes that allow us to signal when a thread has completed a particular task and another thread can proceed. Reset events exist in either the signalled or the non-signalled state. The signalled state indicates that any threads waiting for the reset event can proceed, while they will block if the event is in the non-signalled state.

To set a reset event to the signalled state, we call its Set() method, and to reset it to non-signalled, we call Reset(); both of these methods take no parameters, and return a bool value indicating whether or not the action succeeded.

The difference between the two classes is that an AutoResetEvent will automatically be reset to the non-signalled state as soon as a single waiting thread has been released (that is, as soon as a call to WaitOne(), WaitAny() or WaitAll() returns), whereas a ManualResetEvent will remain signalled until it is manually reset.

As both AutoResetEvent and ManualResetEvent derive from the same WaitHandle abstract class as Mutex, they expose the same WaitOne() instance method (which blocks until the instance it is called on is set to the signalled state), and the WaitAll() and WaitAny() static methods, which block until all or any single one respectively of the array of wait handles passed in is set to the signalled state. The console application below demonstrates the use of WaitAll() and WaitAny(). It starts off 10 threads, each of which is associated with an AutoResetEvent. The threads just sleep for a random period, before setting their associated reset event. Meanwhile, the main thread calls WaitAny() to block until the first of the reset events is signalled. After writing a message to the console window, we then call WaitAll(), which will block until all of the reset events have been signalled.
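A sketch along these lines is shown below. One caveat: because WaitAny() consumes (resets) the signal of the AutoResetEvent that satisfied it, the subsequent WaitAll() could then block on that event forever, so this sketch uses ManualResetEvents instead; the structure is otherwise as described:

```csharp
using System;
using System.Threading;

class ResetEventExample
{
    static void Main()
    {
        Random random = new Random();
        ManualResetEvent[] events = new ManualResetEvent[10];
        for (int i = 0; i < 10; i++)
        {
            ManualResetEvent mre = new ManualResetEvent(false);
            events[i] = mre;
            int delay = random.Next(1000, 5000);   // chosen on the main thread
            Thread t = new Thread(delegate()
            {
                // Sleep for a random period, then signal our associated event
                Thread.Sleep(delay);
                mre.Set();
            });
            t.Start();
        }

        // Block until the first of the events is signalled
        int index = WaitHandle.WaitAny(events);
        Console.WriteLine("Event {0} was signalled first.", index);

        // Block until all of the events have been signalled
        WaitHandle.WaitAll(events);
        Console.WriteLine("All events have been signalled.");
    }
}
```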


Interlocked

Quite frequently, we want to perform simple arithmetic operations in a thread-safe way. Generally speaking, even a simple increment will compile to more than one native instruction, so race conditions are possible if the operation is not performed atomically. These operations can be done easily, albeit somewhat heavy-handedly, using a Mutex. However, they can't be performed efficiently with the more lightweight Monitor class, because we can't lock on a value type. Instead, we would have to lock on either the parent class instance or on the Type object for the value's type (such as typeof(int)). In either case, the same object could be used to protect other resources, so unnecessary blocking could occur.

The Interlocked class goes some way to solving these problems by providing atomic methods for incrementing and decrementing int and long values, for setting a variable to a new value, and for comparing the value of a variable with another value, and replacing it with a third value if they are equal. All these methods are static, and in each case the variable that will potentially be modified is passed in as a reference parameter. Because these methods guarantee atomicity without locking, they are efficient and have low overhead.

The following table shows the method syntax and the equivalent non-atomic operation. In each case, the parameters are all of the same type, so the supported types for each overload are also shown:


Method Syntax                                      Overload Types       Non-atomic Equivalent

Increment(ref location)                            int, long            location++;

Decrement(ref location)                            int, long            location--;

Exchange(ref location, value)                      int, object, float   location = value;

CompareExchange(ref location, value, comparand)    int, object, float   if (location == comparand)
                                                                            location = value;
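As a brief illustration of these methods in use (the class is invented for the sketch):

```csharp
using System.Threading;

class Counter
{
    private int count;

    public void Increment()
    {
        // Atomic increment: no lock required, no race condition
        Interlocked.Increment(ref count);
    }

    public int Reset()
    {
        // Atomically store 0 and return the previous value
        return Interlocked.Exchange(ref count, 0);
    }

    public void SetIfZero(int newValue)
    {
        // Atomically: if (count == 0) count = newValue;
        Interlocked.CompareExchange(ref count, newValue, 0);
    }
}
```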

Synchronization Bugs

In Part 1, we saw how inadequate synchronization can result in bugs called race conditions that can compromise the data integrity of your application. Unfortunately, adding synchronization code isn't a cure-all: it can in fact introduce additional bugs of its own.


Deadlocks

The most common type of bug arising specifically from incorrect (as opposed to non-existent) synchronization is the deadlock. A deadlock occurs, in the simplest scenario, when one thread, which already owns a lock on a particular object, tries to obtain a lock on another object, which has meanwhile been obtained by another thread, which is now attempting to get a lock on the first object. Neither thread can proceed, because both are waiting for the locks to be released, but the locks can't be released until one or other thread ceases to block. Result – deadlock; and if this doesn't make your program hang completely, it will certainly hang at least two of the threads in it.

Of course, the chain can be a lot more complex than this, involving many more threads locking on different objects: for example, thread 1 obtains a lock on object A and attempts to acquire a lock on object B; meanwhile thread 2 obtains a lock on object B and then attempts to lock on object C, while thread 3 locks on C and then attempts to lock on A. The more complex the chain, the less frequently the thread switching will occur at just the wrong time to create a deadlock, and the harder the deadlock will be to track down and fix.

The following example demonstrates a deadlock. To be fair, a short code sample isn't going to produce a deadlock unless you've gone out of your way to create one, so the example can't help but be contrived. The example consists of two classes, imaginatively called ClassA and ClassB. In its constructor, ClassA creates a reference to a ClassB object, and this in turn retains a reference to its parent ClassA instance. ClassA has two public methods: PrintClassBValue(), which increments an internal integer field of ClassB and writes it to the console, and SetName(), which sets the value of a private string field. Both of these methods lock on the current class instance. We've also added a call to Thread.Sleep(0) to ensure that thread-switching occurs at the (in)opportune moment to produce a deadlock:

ClassB, meanwhile, contains, in addition to a reference to its parent ClassA instance, a private integer field set to an arbitrary (but quick-to-type) value. It, too, has two public methods: IncrementValue(), which increments the value of the integer field; and SetParentName(), which calls the SetName() method of its parent ClassA instance. Again, both methods lock on the class instance:

The Main() method for the program instantiates a ClassA object, and then spawns two new threads to call its PrintClassBValue() method and the SetParentName() method of its child ClassB instance:
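Pieced together from the description, the whole program might look like the sketch below (the ChildB property and the value 111 are additions for the sketch; ChildB simply lets Main() reach the ClassB instance):

```csharp
using System;
using System.Threading;

class ClassA
{
    private ClassB b;
    private string name;

    public ClassA()
    {
        b = new ClassB(this);
    }

    public ClassB ChildB { get { return b; } }

    public void PrintClassBValue()
    {
        lock (this)   // first lock: the ClassA instance
        {
            // Yield so the other thread can lock the ClassB instance meanwhile
            Thread.Sleep(0);
            Console.WriteLine(b.IncrementValue());   // needs the ClassB lock
        }
    }

    public void SetName(string newName)
    {
        lock (this)   // blocks: thread 1 already owns the ClassA lock
        {
            name = newName;
        }
    }
}

class ClassB
{
    private ClassA parent;
    private int value = 111;   // arbitrary, quick-to-type value

    public ClassB(ClassA parent)
    {
        this.parent = parent;
    }

    public int IncrementValue()
    {
        lock (this)   // blocks: thread 2 already owns the ClassB lock
        {
            return ++value;
        }
    }

    public void SetParentName()
    {
        lock (this)   // second lock: the ClassB instance
        {
            parent.SetName("Deadlock");   // needs the ClassA lock
        }
    }
}

class DeadlockExample
{
    static void Main()
    {
        ClassA a = new ClassA();
        new Thread(new ThreadStart(a.PrintClassBValue)).Start();
        new Thread(new ThreadStart(a.ChildB.SetParentName)).Start();
        // Each thread now blocks on the lock the other owns: deadlock
    }
}
```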

What happens, of course, is that when PrintClassBValue() is called, the first thread obtains a lock on the ClassA instance. Meanwhile, thanks to the call to Thread.Sleep(), the second thread starts executing the SetParentName() method and obtains a lock on the ClassB instance, so when the first thread comes to the IncrementValue() method of ClassB, it’s unable to obtain the lock. Similarly, when the second thread reaches the SetName() method of ClassA, it finds that the lock on our ClassA instance is already owned by the first thread, and so blocks. We therefore achieve our deadlock, and the program hangs.

Obviously, no one would write code quite like that in real life. The danger occurs when you’re calling into methods in a class written by someone else, and don’t realize that locking is occurring on that object, and this has been used as an argument against locking on this at all. However, this seems a little extreme: locking on this is often the only solution without the overhead of creating an object purely for the lock, and the risk only occurs where two objects directly or indirectly hold references to each other.

Deadlocks may seem much more obvious than race conditions, as they can result directly in the entire program hanging, rather than “mere” data corruption, but they can be just as difficult to track down and replicate: the deadlock will only occur if thread switching occurs at just the wrong moments (hence the need for the calls to Thread.Sleep(0) in the above example). One possible way to avoid them is always to lock on objects in the same order (for example, if you’re locking on objects A and B in multiple places in your code, always lock on A first and then B). This will help if the deadlock is due to your own locking strategy, but may not if the deadlock is caused by a lock in a third-party library.


Thread Starvation

When we create a thread, we can set its Priority property to one of the ThreadPriority values (Lowest, BelowNormal, Normal, AboveNormal or Highest). The operating system will give precedence to higher-priority threads and allow them more execution time. If multiple threads of different priorities are competing for the same protected resource, this can result in the lower-priority threads being starved of access and unable to progress.

The code sample below demonstrates the dangers of allowing threads of unequal priority to compete for the same protected resource. We start off two threads, one with the highest priority and one with the lowest, to execute two methods called CountFast() and CountSlow(). These methods simply iterate through a loop, increasing a counter each time, until they have reached a certain number (5000 for the fast thread, 100 for the slow thread). On each iteration, the threads wait for ownership of a mutex, increment and print out the value of the appropriate counter, and then release the mutex.
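A sketch of the program described (method and field names are assumptions):

```csharp
using System;
using System.Threading;

class StarvationExample
{
    static Mutex mutex = new Mutex();
    static int fastCount, slowCount;

    static void CountFast()
    {
        while (fastCount < 5000)
        {
            mutex.WaitOne();                 // high-priority thread nearly always wins this race
            Console.WriteLine("Fast: {0}", ++fastCount);
            mutex.ReleaseMutex();
        }
    }

    static void CountSlow()
    {
        while (slowCount < 100)
        {
            mutex.WaitOne();                 // starved until the fast thread finishes
            Console.WriteLine("Slow: {0}", ++slowCount);
            mutex.ReleaseMutex();
        }
    }

    static void Main()
    {
        Thread fast = new Thread(new ThreadStart(CountFast));
        fast.Priority = ThreadPriority.Highest;
        Thread slow = new Thread(new ThreadStart(CountSlow));
        slow.Priority = ThreadPriority.Lowest;
        fast.Start();
        slow.Start();
    }
}
```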

When you run this, you should find that the high-priority thread manages to get access to the mutex every time, and the low-priority thread doesn’t execute at all until the higher one has got to 5000 and finished. In this case, the situation is exacerbated by the fact that we’re doing so little processing on the threads, so the fast thread is always ready when the scheduler allocates execution time, and therefore the higher-priority thread is given the nod over the lower priority one. For this reason, it’s advisable to avoid creating threads with different priorities that are continually contesting for ownership of the same resource. If this can’t be avoided, you can call Thread.Sleep(0) at occasional intervals to allow other threads execution time.

Priority Inversion

Starvation is not the only bug that can arise when threads of different priorities attempt to access a shared resource. It can also happen that a lower priority thread can block a higher priority thread, and thus effectively invert the priorities of the two threads. In the simplest case, this is just a matter of a low priority thread gaining access to a protected resource before a high priority thread, obtaining ownership of the lock, and thus blocking the high priority thread when it attempts to obtain the lock. There isn’t any way of ensuring this won’t happen, but this type of situation may not be too serious if (as it should) the lower priority thread releases the lock as soon as possible.

A more serious situation can occur when the inversion is indirect. Suppose, in addition to the two threads in the scenario above, there is a third thread of normal priority. This starts just after the low priority thread has gained ownership of the lock, and because it has a higher priority than that thread, the operating system schedules it to run. When the high priority thread then starts, it is scheduled to run, as it has a higher priority still. However, when it attempts to acquire ownership of the lock, it will block, because the lock is still owned by the low priority thread. The operating system will therefore allow the thread with the next highest priority to run – the normal priority thread – so the low priority thread is unable to resume execution and release the lock. Thus in effect the normal priority thread is blocking the high priority thread.

As with direct priority inversion, there isn’t really much we can do to prevent this, beyond the obvious good practice of releasing locks as quickly as possible, although the operating system makes some compensation for it. If you found you were having serious performance problems due to priority inversion, you could use other synchronization objects such as reset events to ensure either that the high priority thread takes ownership of the lock before the low priority thread, or that the normal priority thread waits for the low priority thread to release the lock before continuing. This approach, of course, would bring its own problems, as it would create additional overhead and complicate the synchronization code, which could potentially introduce other synchronization bugs.

Synchronization Example

To see synchronization in action, we’ll look at a Windows Forms application that controls cars, and prevents them from colliding as they progress through a crossroads within a panel control (note that graphic design isn’t really my forte ;)).

Each car is controlled by a separate thread. The user can control the traffic density by specifying the frequency with which new cars will arrive, as an integer between 1 and 7 (at density 7 cars arrive more frequently than they can leave, so tailbacks will continue to build up until the density is lowered). To see it in hypnotic action, simply double-click the SynchronizationExample.exe that you can find in the Release folder of the Code Download for this article.

The bulk of the GDI+ code used to draw the cars and background has been omitted from the description below, to focus on the code that’s more relevant to the task in hand. However, the full code can be viewed in the code download. The Code Download link can be found in the box to the right of the article title.

The interface is shown in the following screenshot (note that cars are added off-screen, so the total number of cars can be greater than the number visible on screen):


Within the junction, we need to make sure that cars can be travelling only in one axis (either horizontally or vertically), to avoid collisions. To do this, we use a pair of ManualResetEvents, which will signal when the junction is empty, so that the threads controlling cars on the other axis can resume. We will also use a System.Threading.Timer to cause the roads to be repainted periodically, rather than refreshing the display every time a car moves (this is the approach we took in the example in Part 1, but is less efficient if there are many objects, each causing the control to refresh). Finally, we’ll also have the inevitable lock() statements for data that’s shared across threads, and a ReaderWriterLock to ensure the integrity of the ArrayList that keeps track of all the cars we create.

The first thing we need to do is define an enumeration to identify which direction a given car is travelling in:
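The enumeration might look like this (the member names are assumptions based on the description):

```csharp
// The direction in which a car is travelling across the control
public enum Direction
{
    Up,
    Down,
    Left,
    Right
}
```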

Each car is represented by a Car instance. The Car class is nested within the main form class for the application (called SyncExampleForm), and has private fields that represent the colour of the car, the direction in which it’s travelling, its position on the road, and whether or not it has been disposed of (this will happen when the car disappears off the far edge of the control). We also need references to the GDI+ Brush objects that are used to draw the car, to the parent form, and to the car ahead (so we can avoid a crash if the car in front stops at the junction). The Car class also has two static fields referencing the ManualResetEvent objects that are used to signal when the junction is clear for cars to enter horizontally or vertically:

When a Car instance is created, we pass in the colour for the car, the direction it’s moving in, and references to the parent form and the car ahead, and store these in private fields. We then set its initial position, just off the screen, depending on which direction it’s travelling in:

To avoid running into the car in front, we’ll need to expose its position and whether or not it’s been disposed of as public properties:

The Dispose() method is called when the car exits the screen, and simply disposes of the two brushes used to draw the car and sets the disposed field to true, so that the following car knows it doesn’t have to worry about crashing into this one:

The task of moving the car across the control is performed by the Start() method, which is called on a newly created thread. This method executes an infinite loop; on each iteration, we first call Thread.Sleep() to stop the cars racing invisibly across the screen. Then we check the position of the car in front to ensure that we don’t run into it. We read this value within a lock() statement to ensure that the value we get is consistent, but we don’t need to worry about it changing after we’ve read it, because the cars only go forwards, not backwards. If the car ahead is within 35 pixels of the current car, we call continue to exit the current iteration of the loop and send the thread to sleep for a bit longer:

Otherwise we increment or decrement the position of the car to move it forwards, and then check the new position of the car against certain values, to work out whether the car is about to enter or leave the junction, or has disappeared off the control. These values vary, depending on the direction in which the car is travelling, so these conditions are wrapped up in a switch statement. If the car is about to enter or leave the junction, we call the appropriate method; if it has gone off the edge, we call the Stop() method to dispose of the car, and then return from the method. This will cause the thread to terminate:

The parent form keeps track of all the cars that have been created using an ArrayList called cars. Within the Stop() method, we therefore need to remove the car from this. An ArrayList supports multiple readers, but will throw an exception if there’s an attempt to access it while it’s being modified. This is a job for ReaderWriterLock! Since we’re modifying the ArrayList, we need to acquire the writer lock so that no other thread can read or write to the array till we’re done. Note that we release the lock within a finally block to ensure that the lock is released even if an exception is thrown. When this is done, we dispose of the car.
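A sketch of the relevant part of Stop() (field names such as rwLock and cars are assumptions; the full method is in the code download):

```csharp
public void Stop()
{
    // Exclusive access: no thread can read or modify the list while we remove
    parent.rwLock.AcquireWriterLock(5000);
    try
    {
        parent.cars.Remove(this);
    }
    finally
    {
        // Release the writer lock even if Remove() throws
        parent.rwLock.ReleaseWriterLock();
    }
    Dispose();
}
```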

When a car is about to enter the junction, it must first wait until the relevant ManualResetEvent is signalled: vertMre for cars moving up or down, or horizMre for those travelling to the left or right. This indicates that there are no cars in the junction travelling perpendicularly to the current car. Then we immediately reset the other ManualResetEvent so that no cars coming at right angles will be able to enter the junction. Finally, we increment the carsInJunction field of the parent form within a lock() statement (to ensure that we don’t get a false value if it’s read at the same time):

When a car exits the junction, we need to reduce this carsInJunction field by one, and if the new value is zero, set the appropriate ManualResetEvent, so that cars coming in the other axis can now enter the junction. All this code is placed within a lock() statement on the parent form object, so that the number of cars in the junction can’t be altered by another thread between the value being decremented and the ManualResetEvent being set:
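The two junction methods might be sketched as follows (method and field names are assumptions; the real code is in the download):

```csharp
private void EnterJunction()
{
    bool vertical = (direction == Direction.Up || direction == Direction.Down);

    // Wait until the junction is clear of cars on the other axis...
    ManualResetEvent ourMre = vertical ? vertMre : horizMre;
    ourMre.WaitOne();

    // ...then stop cars on the other axis from entering
    ManualResetEvent otherMre = vertical ? horizMre : vertMre;
    otherMre.Reset();

    lock (parent)
    {
        parent.carsInJunction++;
    }
}

private void ExitJunction()
{
    // Lock so the count can't change between the decrement and the Set()
    lock (parent)
    {
        parent.carsInJunction--;
        if (parent.carsInJunction == 0)
        {
            bool vertical = (direction == Direction.Up || direction == Direction.Down);
            // Junction is empty: let cars on the other axis enter
            ManualResetEvent otherMre = vertical ? horizMre : vertMre;
            otherMre.Set();
        }
    }
}
```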

The final method in the Car class is called Draw(), and it’s used to draw the car in the panel control. This is called on the main UI thread from the Paint event handler method of the control, and takes the Graphics object which will be used to draw the car. As it just contains GDI+ code, we won’t show it here, but if you’re interested, you can examine it in the code download.

In addition to the Windows Forms controls, the main form has 22 private and internal fields. Over half of these are used for drawing the roads, cars and background, including: GDI+ Pen and Brush objects; a Region object representing the road area that will need to be refreshed constantly; four Rectangle objects to paint the green background; and a bool variable called firstPaint that will be used to determine whether or not we need to repaint the background (e.g. if the form is minimized and then restored). There are also two constant int values, which mark the position of the roads on the X and Y axes, and which we use to place all the objects on the control, as well as to determine when cars are entering or leaving the junction:
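A sketch of some of these drawing-related fields; firstPaint is named in the text, but the other identifiers and the constant values are purely illustrative:

```csharp
// GDI+ objects used to draw the roads, cars and background.
private Pen whitePen = new Pen(Color.White, 2);
private Brush greenBrush = new SolidBrush(Color.Green);
private Region roadRegion;                          // grey road area, refreshed constantly
private Rectangle[] greenRects = new Rectangle[4];  // the green background areas
private bool firstPaint = true;                     // repaint the background?

// Positions of the roads on the X and Y axes (illustrative values).
private const int roadX = 280;
private const int roadY = 200;
```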

In addition to these, we need a class-level reference to a System.Threading.Timer, which provides a way to execute a method (on a thread-pool thread) at specific intervals, and which we use to refresh the control periodically. Notice that we don’t need to call a start method on a Timer – it starts immediately upon instantiation, and will continue until it is disposed of or garbage collected. We need to keep an active reference to it, however, or it will be collected by the garbage collector, so you shouldn’t store a Timer in a local variable, unless you ensure that the method won’t return while the Timer is still needed. This is why we store our reference to it in a class field.

There are also class-level fields for an array of Car objects that stores the last car in each direction; an ArrayList containing all undisposed cars, and its synchronized wrapper; and the ReaderWriterLock that we use to control access to it. We also have two class-level integer fields, which store respectively a number used to determine the delay between each new car being created, and the number of cars currently in the junction.

The constructor for the Windows Forms class starts with the usual call to InitializeComponent(), and then goes on to initialize our GDI+ brushes and pens. We also build up a GraphicsPath object containing the grey surface of the road, which will need to be refreshed as the cars move along it (but omitting the white centre markings, which will cause a performance hit if they are continually being redrawn). From the GraphicsPath object, we are able to instantiate a GDI+ Region object. The background colour of the form is that of the roads, and not the green of the remaining area. This is because the green area doesn’t change, and therefore won’t need to be redrawn unless it’s invalidated by being covered by another window. Since the background of the control is grey, we don’t need to fill in the roads themselves – a very expensive operation, which would result in an unacceptable flicker. Because of this, we need to define four rectangles for the “non-road” area, which we’ll use both to paint the background green, and to determine whether this background has been invalidated:
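The shape of this initialization code might be something like the following; all the names and dimensions here are illustrative, building on the hypothetical roadX/roadY constants:

```csharp
// Build the grey road surface as a GraphicsPath, omitting the white
// centre markings, then create a Region from it for invalidation.
GraphicsPath roadPath = new GraphicsPath();
roadPath.AddRectangle(new Rectangle(roadX, 0, 80, ClientSize.Height)); // vertical road
roadPath.AddRectangle(new Rectangle(0, roadY, ClientSize.Width, 80));  // horizontal road
roadRegion = new Region(roadPath);

// Four rectangles cover the green "non-road" corners of the form.
greenRects[0] = new Rectangle(0, 0, roadX, roadY);                      // top-left
greenRects[1] = new Rectangle(roadX + 80, 0, ClientSize.Width, roadY);  // top-right
greenRects[2] = new Rectangle(0, roadY + 80, roadX, ClientSize.Height); // bottom-left
greenRects[3] = new Rectangle(roadX + 80, roadY + 80,
                              ClientSize.Width, ClientSize.Height);     // bottom-right
```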

Next, we initialize the timer to repaint the control; this will call the RefreshPanel method every 100 milliseconds, starting immediately. When we instantiate the Timer, we pass in a TimerCallback delegate instance representing the method that will be executed periodically, an object instance that will be passed into the method as a parameter, the length of time before the method will be executed for the first time, and the interval between subsequent calls to the method. The method to be executed must return void, and take a single parameter of type object.
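Given the method name from the text, the timer initialization would look something like this (the field name timer is an assumption):

```csharp
// Timer(callback, state, dueTime, period): call RefreshPanel with a null
// state argument, starting immediately (dueTime 0) and every 100ms thereafter.
timer = new System.Threading.Timer(new TimerCallback(RefreshPanel), null, 0, 100);
```

The TimerCallback delegate dictates the callback's shape: RefreshPanel must be declared as `void RefreshPanel(object state)`.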

We then get a reference to the synchronized wrapper for our ArrayList of cars. It is this wrapper that is exposed with internal access, and thus accessible from our Car class. Finally, we start up a new background thread that will keep adding cars to the control. We need to spawn a new thread to do this, because we need the main UI thread to carry on updating the UI and responding to user input – if we called this method synchronously, we’d never get to see the form at all!
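A sketch of these last two steps of the constructor; the field names (cars, unsyncCars) are assumptions, while StartCars is named in the text:

```csharp
// Keep a reference to the thread-safe wrapper; this is what the
// internal member exposes to the Car class.
cars = ArrayList.Synchronized(unsyncCars);

// Add cars on a background thread so the UI thread stays responsive.
Thread carsThread = new Thread(new ThreadStart(StartCars));
carsThread.IsBackground = true;
carsThread.Start();
```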

The StartCars() method provides an infinite succession of cars. We start off by creating a new Random object that we use to determine the colour and direction of the car, and also the exact delay between each car (this is based on the delay integer value, but is partly random). We then go into an infinite loop. In each iteration, we begin by instantiating a new Car object with a random colour and direction, passing in a reference to the Car in the lastCars array for this direction:
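The start of the method might be sketched as follows; the Car constructor signature and the RandomColor helper are assumptions, while TravelDirection and lastCars come from the text:

```csharp
Random rand = new Random();
while (true)
{
    // Pick a random direction and colour, and hand the new car a
    // reference to the last car created in the same direction.
    TravelDirection dir = (TravelDirection)rand.Next(4);
    Car car = new Car(this, RandomColor(rand), dir, lastCars[(int)dir]);

    // ... (remaining steps of the loop body omitted)
}
```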

Now we need to add this to the cars ArrayList. Because this can be read and modified from other threads, we need to lock it with our ReaderWriterLock by acquiring the writer lock. As usual, this is performed within a try block, and then released within the corresponding finally block:
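Sketched out (with rwLock again an assumed field name):

```csharp
rwLock.AcquireWriterLock(Timeout.Infinite);
try
{
    cars.Add(car);
}
finally
{
    // Release even if Add() throws, so other threads aren't blocked forever.
    rwLock.ReleaseWriterLock();
}
```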

Now we set the element in the lastCars array for this direction to the current car. The lastCars array has four elements, one for each direction, so we use the car’s TravelDirection enumeration value, converted to an integer, to get and set the correct element. Thus, when we create the next car going in the same direction, we will be able to pass in a reference to this car, so it will know where it is and be able to avoid crashing into the back of it. We then start the car moving by calling its Start() method on a new background thread:
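For example (dir being the car's randomly chosen direction from earlier in the loop):

```csharp
// Remember this car as the last one created in its direction, so the
// next car travelling the same way can follow it without crashing.
lastCars[(int)dir] = car;

// Drive the car on its own background thread.
Thread carThread = new Thread(new ThreadStart(car.Start));
carThread.IsBackground = true;
carThread.Start();
```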

Finally we just need to wait a bit with a call to Thread.Sleep() before the next car is added with the next iteration of our infinite loop. We’re allowing the user to change the value that determines this delay, so this value could be altered from the main UI thread. We therefore need to lock the value to ensure we get a consistent reading. However, we don’t want to hold the lock for the duration of the call to Thread.Sleep(), as this could cause the main UI thread to block while it’s waiting to set the value. To get round this, we read the value into a local variable within a lock() statement, and then use this variable to determine how long to sleep for. The exact value is a random value between delay and 2 * delay. The object used in the lock() statement is the Type object for the int type, as we can’t lock on a value type. We don’t use the form itself, as that’s been used to lock sections that access the carsInJunction field. Locking on a Type object won’t block unnecessarily here, because we only use it to lock access to the delay field, and this belongs to the main form, of which there is of course only one instance.
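Put together, the end of the loop body might read (the local variable name tmpDelay is an assumption):

```csharp
int tmpDelay;
lock (typeof(int))
{
    // Take a consistent snapshot of the shared delay value...
    tmpDelay = delay;
}
// ...but sleep without holding the lock, so the UI thread can update
// delay while we wait. Sleep between delay and 2 * delay milliseconds.
Thread.Sleep(rand.Next(tmpDelay, 2 * tmpDelay));
```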

The RefreshPanel() method is called every 100 milliseconds on a thread-pool thread by the timer. Here, we invalidate the labels indicating how many cars there are in total and in the junction and the GDI+ Region object that contains the grey areas of the roads, forcing them to be redrawn:
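A sketch of the method; label4 is named later in the text, but the panel and total-count label names are assumptions:

```csharp
private void RefreshPanel(object state)
{
    // Invalidate the grey road area and the two count labels; the
    // actual repainting happens later via their Paint events.
    panel1.Invalidate(roadRegion);   // the grey roads
    totalLabel.Invalidate();         // total number of cars (name assumed)
    label4.Invalidate();             // cars currently in the junction
}
```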

When the labels are invalidated, their Paint events will be fired. In the Paint event handler for the total number of cars label, we acquire the reader lock before accessing the Count property of the cars ArrayList, because this ArrayList could be modified on another thread:
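For example (the handler and label names here are assumptions):

```csharp
private void totalLabel_Paint(object sender, PaintEventArgs e)
{
    rwLock.AcquireReaderLock(Timeout.Infinite);
    try
    {
        // Under the reader lock, Count can't change beneath us.
        e.Graphics.DrawString("Total cars: " + cars.Count,
                              Font, Brushes.Black, 0, 0);
    }
    finally
    {
        rwLock.ReleaseReaderLock();
    }
}
```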

In the Paint event handler for label4, we also place the code to read the carsInJunction field in a lock() statement, to ensure we read the correct value:
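Something like the following, locking on the form itself just as the code that updates carsInJunction does:

```csharp
private void label4_Paint(object sender, PaintEventArgs e)
{
    lock (this)
    {
        e.Graphics.DrawString("Cars in junction: " + carsInJunction,
                              Font, Brushes.Black, 0, 0);
    }
}
```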

The Dispose() method simply disposes of the GDI+ brushes, the timer and the active cars, before the usual Windows Forms code to dispose of any components:
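In outline (the field names are the hypothetical ones used above; components is the standard designer-generated container):

```csharp
protected override void Dispose(bool disposing)
{
    if (disposing)
    {
        greenBrush.Dispose();
        whitePen.Dispose();
        timer.Dispose();
        foreach (Car car in cars)
        {
            car.Dispose();
        }
        // The usual Windows Forms cleanup.
        if (components != null)
        {
            components.Dispose();
        }
    }
    base.Dispose(disposing);
}
```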

The Paint event handler of the panel control uses the ClipRectangle property of the PaintEventArgs passed into it to see whether any of the green background has been invalidated. If it has, we set the firstPaint bool field to true, to indicate that we need to redraw the background and the centre markings on the road. This field will also be true the first time the method is called. When we’ve drawn the background and markings, we set firstPaint to false so that next time round, we’ll conserve resources by not painting them. We will always redraw the yellow junction markings, as these are included in the road area that is always invalidated. The GDI+ code for this method has been removed to keep the article down to a (more) manageable size; again, please see the code download for the full source.

Finally, we draw the cars. To avoid a car being added to the ArrayList while we’re iterating through it, we need to acquire the reader lock of our ReaderWriterLock object:
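Sketched out, within the panel's Paint event handler:

```csharp
rwLock.AcquireReaderLock(Timeout.Infinite);
try
{
    // Multiple readers can iterate concurrently; a writer (adding or
    // removing a car) must wait until all readers have released.
    foreach (Car car in cars)
    {
        car.Draw(e.Graphics);
    }
}
finally
{
    rwLock.ReleaseReaderLock();
}
```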

The last method in our code is the Click event handler for the button that the user presses to change the traffic density. We use the figure in the updown box on the form to calculate a new value for the delay field. As before when we read this value, we lock this section with typeof(int):
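A sketch of the handler; the control names and the density-to-delay formula are assumptions:

```csharp
private void button1_Click(object sender, EventArgs e)
{
    lock (typeof(int))
    {
        // Higher density means a shorter delay between cars
        // (hypothetical formula; see the code download).
        delay = 10000 / (int)densityUpDown.Value;
    }
}
```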


In this article, Part 2 of a three-part series, we drilled down from the previous instalment to look at one specific aspect of asynchronous processing: synchronizing the interaction between the threads in a multi-threaded application. This is itself a huge topic, and it hasn’t been possible to look in detail at everything. .NET provides a large number of different objects to help us synchronize our threads, each with a different role, and we spent the first part of the article looking at these. We then looked at some of the bugs that can afflict multi-threaded programs, before looking at a sample Windows Forms application that uses a number of different synchronization objects. Now that you are hopefully fully au fait with the general issues of asynchronous programming in .NET, we’ll dedicate the final article in the series to the new features introduced in .NET 2.0, with special emphasis on the new background worker component.

About the author

Julian Skinner


Julian Skinner is a freelance C# programmer and technical author. He studied Germanic etymology to PhD level before learning computer programming while working first for Wrox Press and then for Apress. He is a co-author of The Programmer's Guide to SQL, Pro SQL Server 2005 and Pro SQL Server 2005 Assemblies. You can contact him through his website.
