Ricky Leeks presents:

Top 5 .NET memory management fundamentals

1. What happens to small objects?

Small .NET objects are allocated onto the Small Object Heap (SOH), which is divided into three generations: Generation 0, Generation 1, and Generation 2. Objects move up through these generations as they survive garbage collections.

New objects are placed in Gen 0. When Gen 0 becomes full, the .NET Garbage Collector (GC) runs, reclaiming objects which are no longer referenced and promoting everything else to Gen 1. If Gen 1 becomes full, the GC runs again, this time also promoting surviving Gen 1 objects to Gen 2.

A full GC run happens when Gen 2 becomes full. This reclaims unreferenced Gen 2 objects, promotes surviving Gen 1 objects to Gen 2, and promotes surviving Gen 0 objects to Gen 1; anything which is no longer referenced is cleared. After each GC run, the affected heaps are compacted to keep memory which is still in use together.

This generational approach keeps things running efficiently – the time-consuming compacting process only occurs when absolutely necessary.
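
If you want to watch this promotion happening, GC.GetGeneration reports the generation an object currently lives in. A minimal sketch (forcing collections purely for illustration – you wouldn't call GC.Collect like this in production code):

    using System;

    class GenerationDemo
    {
        static void Main()
        {
            var data = new byte[1024];                  // small object, allocated in Gen 0
            Console.WriteLine(GC.GetGeneration(data));  // typically prints 0

            GC.Collect();                               // survivors of a collection are promoted
            Console.WriteLine(GC.GetGeneration(data));  // typically prints 1

            GC.Collect();
            Console.WriteLine(GC.GetGeneration(data));  // typically prints 2
        }
    }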

Note: if you see a high proportion of memory in Gen 2, it's an indicator that memory is being held onto for a long time, and that you may have a memory problem. This is where a memory profiling tool, such as ANTS Memory Profiler, can come in handy.

2. What happens to larger objects?

Objects larger than 85 KB (more precisely, 85,000 bytes) are allocated onto the Large Object Heap (LOH). They aren't compacted, because of the overhead of copying large chunks of memory. Instead, when a full GC takes place, the address ranges of LOH objects no longer in use are recorded in a free space allocation table.

When a new object is allocated, this free space table is checked for an address range large enough to hold the object. If one exists, the object is allocated there; if not, it's allocated at the next free space.

Because objects are unlikely to be the exact size of an empty address range, small chunks of memory will almost always be left between objects, resulting in fragmentation. If these chunks are smaller than 85 KB, there's no possibility of reuse at all, because only objects larger than that are ever allocated on the LOH. Consequently, as allocation demand increases, new segments are reserved even though fragmented space is still available.

Furthermore, when a large object needs to be allocated, .NET tends to append it to the end of the heap anyway, rather than run an expensive full (Gen 2) GC. This is good for performance, but it is a significant cause of memory fragmentation.
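
The cut-off that decides whether an object goes onto the LOH is easy to see with GC.GetGeneration, because LOH objects are reported as belonging to Gen 2 from the moment they're allocated. A minimal sketch:

    using System;

    class LohDemo
    {
        static void Main()
        {
            var small = new byte[8 * 1024];              // well under the threshold: allocated on the SOH
            var large = new byte[100 * 1024];            // over the threshold: allocated on the LOH

            Console.WriteLine(GC.GetGeneration(small));  // typically prints 0 (Gen 0 of the SOH)
            Console.WriteLine(GC.GetGeneration(large));  // prints 2 – LOH objects are treated as Gen 2
        }
    }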

3. The Garbage Collector can be run in different modes to optimize performance

.NET addresses the trade-off between performance and heap efficiency by providing multiple modes for the GC.

Workstation mode gives maximum responsiveness to the user and cuts down pauses due to GC. It can run as 'concurrent' or 'non-concurrent', referring to the thread the GC runs on. The default is concurrent, which uses a separate thread for the GC so the application can continue execution while GC runs.

Server mode gives maximum throughput, scalability, and performance for server environments. Segment sizes and generation thresholds are typically much larger in Server mode than Workstation mode, reflecting the higher demands placed on servers.

Server mode runs garbage collection in parallel on multiple threads, allocating a separate SOH and LOH to each logical processor to prevent the threads from interfering with each other.

The .NET framework provides a cross-referencing mechanism so objects can still reference each other across the heaps. However, as application responsiveness isn't a direct goal of Server mode, all application threads are suspended for the duration of the GC.
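
You can check which mode a running process is actually using via the System.Runtime.GCSettings class. A minimal sketch (the mode itself is normally chosen in the application's configuration, not in code):

    using System;
    using System.Runtime;

    class GcModeCheck
    {
        static void Main()
        {
            // True when the process is running under Server mode GC.
            Console.WriteLine("Server GC:    " + GCSettings.IsServerGC);

            // Interactive corresponds to concurrent Workstation GC,
            // Batch to non-concurrent GC.
            Console.WriteLine("Latency mode: " + GCSettings.LatencyMode);
        }
    }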

4. Weak references offer a compromise between performance and memory efficiency

Weak object references are an alternative source of GC roots, letting you keep hold of objects while still allowing the GC to collect them if it needs to. They're a compromise between code performance and memory efficiency: creating an object takes CPU time, but keeping it loaded takes memory.

Weak references are particularly suitable for large data structures. For example, imagine an application that lets users browse through large data structures, some of which they might return to. You could convert the strong references to structures the user has finished browsing into weak references. If the user returns to one of those structures it's still available, but if not, the GC can reclaim the memory when it needs to.
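
Here's a minimal sketch of that browsing scenario. LargeDataStructure and LoadFromDisk are hypothetical placeholders for whatever your application actually caches and however it rebuilds it; the generic WeakReference<T> shown needs .NET 4.5, though the non-generic WeakReference class works similarly on older versions.

    using System;

    class StructureBrowser
    {
        // Once the user navigates away, hold the structure only weakly.
        private WeakReference<LargeDataStructure> _recent;

        public void OnNavigatedAway(LargeDataStructure current)
        {
            // Swap the strong reference for a weak one; the GC may now reclaim it under memory pressure.
            _recent = new WeakReference<LargeDataStructure>(current);
        }

        public LargeDataStructure GetStructure()
        {
            LargeDataStructure value;
            if (_recent != null && _recent.TryGetTarget(out value))
            {
                return value;          // still in memory: reuse it for free
            }
            return LoadFromDisk();     // already collected: pay the cost of rebuilding it
        }

        private LargeDataStructure LoadFromDisk()
        {
            return new LargeDataStructure();   // hypothetical reload
        }
    }

    class LargeDataStructure { }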

5. Object pinning can create references for passing between managed and unmanaged code

.NET uses a structure called GCHandle to keep track of heap objects. GCHandle can be used to pass object references between managed and unmanaged domains, and .NET maintains a table of GCHandles to achieve this. There are four types of GCHandle – Weak, WeakTrackResurrection, Normal, and Pinned – with Pinned used to fix an object at a specific address in memory.

The main problem with object pinning is that it can cause SOH fragmentation. If an object is pinned during a GC then, by definition, it can't be relocated. Depending on how you use pinning, it can reduce the efficiency of compaction, leaving gaps in the heap. The best policy to avoid this is to pin for a very short time and then release.
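
A minimal sketch of that policy – pinning a buffer just long enough to hand its address to unmanaged code, then releasing the handle:

    using System;
    using System.Runtime.InteropServices;

    class PinningDemo
    {
        static void Main()
        {
            byte[] buffer = new byte[1024];

            // Pin the array so the GC can't relocate it while unmanaged code holds its address.
            GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
            try
            {
                IntPtr address = handle.AddrOfPinnedObject();
                // ... pass 'address' to the unmanaged call here ...
            }
            finally
            {
                // Release the pin as soon as possible so compaction can work around the object again.
                handle.Free();
            }
        }
    }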

Bonus tip: Don't guess what's eating up your memory – use a profiler

A memory profiler will show you which code is hogging memory, and where you have memory leaks. This makes it much quicker to pinpoint the code that contributes most to the memory problem, so you can start fixing it. Here's an article, with screenshots, that explains how to use ANTS Memory Profiler to do this for a WinForms application, but ANTS Memory Profiler also works with ASP.NET web applications, Windows services, Silverlight, and SharePoint applications. You can find out more about it, and grab a free trial, here.