Ricky Leeks presents:

5 Tips for understanding .NET interoperability

Interop. The term alone can inspire fear, confusion, and uncertainty in many software developers. But there's really nothing to fear from interoperability between managed and unmanaged code, especially if you learn a bit about what's going on under the hood when you write interop code.

Indeed, Windows 8 introduces a new interface to the operating system called the Windows Runtime (WinRT), which, at its core, is a friendlier, better, .NET-like version of COM. Regardless, interop code of either sort lets you access functionality in existing native and COM libraries that is not exposed by the .NET Framework. With it, you can take advantage of the speed and ease of .NET without needing to rewrite existing code that your projects depend on.

1. Understand the difference between references and pointers

References are both more and less than pointers. While they both primarily hold memory addresses, they differ in the information they provide and the operations they enable.

A pointer is a variable that holds a memory address, but it has no concept of what you might find at that memory address. You can use pointer arithmetic to directly modify the memory address, but should it ever start pointing to somewhere that you didn't intend, bad things will occur.

In .NET, a reference is also a variable that holds a memory address, but that memory address is not directly accessible from your code and can't be fiddled with. Because .NET code is type safe and garbage collected, the CLR always knows what type of data is at a given memory location.
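To make the distinction concrete, here's a minimal C# sketch (the class and variable names are just illustrative):

```csharp
using System;

class ReferenceDemo
{
    static void Main()
    {
        var first = new[] { 1, 2, 3 };
        var second = first;     // copies the reference, not the array

        second[0] = 42;         // both references point at the same object
        Console.WriteLine(first[0]);        // prints 42

        // The CLR always knows the type behind a reference...
        Console.WriteLine(first.GetType()); // prints System.Int32[]

        // ...but the address itself is hidden: there is no arithmetic
        // you can perform on 'first' to move it somewhere it shouldn't be.
    }
}
```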

From a memory management perspective, interop comes down to three main things: handling the fact that data can move around in .NET; dealing with the different ways that .NET and native code represent common data types, such as strings; and accounting for the reference-counted memory management used by some native code (particularly COM).

2. Know what the .NET marshaler does

When data needs to pass from managed code to unmanaged code, or vice versa, the CLR's marshaling service takes over and makes sure that the data is transferred correctly. Normally, the marshaler makes a copy of the data that it sends across the divide between .NET and native code; if you are operating on very large amounts of data, this can quickly become memory intensive. For some data types, though, where the circumstances are appropriate and no conversion is required, the marshaler will instead pin the data (telling the GC not to move it in the event of a compaction) and pass a pointer to the pinned address. This even covers passing .NET strings to native code that expects Unicode strings, despite a small difference in format between the two.
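You can see pinning at work by doing it yourself with GCHandle. This sketch (the class name is illustrative) pins a buffer and asks for the kind of stable address the marshaler would hand to native code:

```csharp
using System;
using System.Runtime.InteropServices;

class PinningDemo
{
    static void Main()
    {
        var buffer = new byte[16];

        // Pin the array so the GC cannot move it during a compaction.
        GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        try
        {
            // This is the kind of stable pointer the marshaler passes
            // to native code when it chooses pinning over copying.
            IntPtr address = handle.AddrOfPinnedObject();
            Console.WriteLine(address != IntPtr.Zero);   // prints True
        }
        finally
        {
            handle.Free();   // always unpin, or that part of the heap can't compact
        }
    }
}
```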

The rules around this are fairly technical and easier to look up than memorize, but it's useful to know that this behavior exists for the circumstances in which it matters.

3. Be aware of the attributes that govern marshaling, and the default type equivalents used in converting between native and managed data types

Many Win32 API function calls already have P/Invoke wrapper code available for them. Should you need to write your own P/Invoke methods (e.g. for a third-party library), you'll likely find yourself needing to create .NET versions of native data structures.

When you create such a data structure, you'll need to apply a System.Runtime.InteropServices.StructLayoutAttribute to it, specifying a LayoutKind of either LayoutKind.Sequential or (more rarely) LayoutKind.Explicit. By default, the CLR controls how the data members of classes and structs are stored in memory, and may reorder them for efficiency. Applying a StructLayoutAttribute with one of those two layout kinds tells the CLR that the order you specified is important.
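As a sketch, here's a managed mirror of the Win32 SYSTEMTIME structure (the C# type name is my own); Marshal.SizeOf confirms that the fields occupy the 16 bytes native code expects:

```csharp
using System;
using System.Runtime.InteropServices;

// Sequential layout keeps the fields in their declared order, so the
// native side sees exactly the layout it expects.
[StructLayout(LayoutKind.Sequential)]
struct SystemTime
{
    public ushort Year, Month, DayOfWeek, Day;
    public ushort Hour, Minute, Second, Milliseconds;
}

class LayoutDemo
{
    static void Main()
    {
        // 8 ushort fields * 2 bytes each = 16 bytes, as native code expects.
        Console.WriteLine(Marshal.SizeOf<SystemTime>());   // prints 16
    }
}
```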

In COM interfaces, you can use Interface Definition Language (IDL) attributes to specify various conditions about what the interface methods expect from each of the parameters they receive. Amongst other things, these exist to tell the marshaler to ignore its default behavior and marshal function parameters based on the attributes you've applied, where that kind of fine-grained control is needed.
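The .NET counterparts of those IDL attributes show up in hand-written P/Invoke declarations. In this sketch (the wrapper class name is illustrative; GetWindowText is a real Win32 function), the [Out] and MarshalAs attributes spell out how the marshaler should treat the buffer parameter:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Text;

static class NativeMethods
{
    // [Out] and MarshalAs make the marshaling of the buffer explicit:
    // it is copied back from native code as a pointer to a wide (UTF-16)
    // string, rather than relying purely on the marshaler's defaults.
    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    public static extern int GetWindowText(
        IntPtr hWnd,
        [Out, MarshalAs(UnmanagedType.LPWStr)] StringBuilder text,
        int maxLength);
}

class AttributesDemo
{
    static void Main()
    {
        // We don't call GetWindowText here - that needs a real window
        // handle on Windows - but the declaration compiles anywhere.
        Console.WriteLine(typeof(NativeMethods).GetMethod("GetWindowText") != null);   // prints True
    }
}
```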

4. Know how the CLR makes it possible for COM and .NET to work with each other

In Visual Studio, you can include COM components in your project's references, just as you would include .NET assemblies. If you need finer control over the resulting interop assembly, you can use tlbimp.exe to control how a COM type library is converted. And if a few small bits of the conversion didn't work quite right and you're feeling a bit brave, you can disassemble the assembly that tlbimp.exe creates with ildasm.exe, make changes to it, and recompile it with ilasm.exe.

Working with raw CIL is perhaps a scary concept, but using a tool like .NET Reflector can make those changes much easier to apply, since you can see the decompiled code in your language of choice and find the area that needs to be changed much more quickly.

Whether you choose Visual Studio or tlbimp.exe, both create an assembly that contains something called a Runtime Callable Wrapper (RCW), which looks like an ordinary .NET class, but actually serves as an intermediary between your .NET code and the COM component you want to use. If, for some reason, the COM component needs to call one of your .NET methods, then the runtime can also create a COM Callable Wrapper (CCW) to pass to the COM object. No matter which wrapper is being used, the CLR tends to handle the memory aspects just fine.

5. Understand finalizers, IDisposable, and the 'using' statement

All .NET objects have a Finalize method. The default finalizer does nothing, but you can override it and make it do something more useful. The GC has no idea about any unmanaged resources you might be working with, so creating a finalizer gives you a way to release those resources.

But don't go writing finalizers for every class. The GC has a special mechanism for keeping track of objects with a custom finalizer: once they become eligible for collection, they are added to a finalization queue, and the CLR itself keeps them alive until their finalizers have run, often much longer than you'd expect. Write a finalizer if you need one, but if you can redesign your code to avoid needing one, consider doing that instead.
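Here's a minimal sketch of a custom finalizer (all names are illustrative; the static flag exists only so we can observe the finalizer running):

```csharp
using System;
using System.Runtime.CompilerServices;

class NativeResourceHolder
{
    public static bool Finalized;   // only here so we can watch the finalizer run

    // C#'s finalizer syntax; the compiler turns this into an override of
    // Finalize. It runs on the finalizer thread some time after the object
    // becomes unreachable, not at a predictable moment.
    ~NativeResourceHolder()
    {
        // release unmanaged resources here
        Finalized = true;
    }
}

class FinalizerDemo
{
    // NoInlining makes sure the temporary object is truly unreachable
    // by the time this method returns.
    [MethodImpl(MethodImplOptions.NoInlining)]
    public static void CreateAndDrop() => new NativeResourceHolder();

    static void Main()
    {
        CreateAndDrop();
        GC.Collect();
        GC.WaitForPendingFinalizers();
        Console.WriteLine(NativeResourceHolder.Finalized);   // prints True
    }
}
```

Forcing a collection like this is only for demonstration; in real code you'd let the GC decide when to run.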

There's also a special interface in .NET called System.IDisposable which you should know about. When a class implements it, that normally signals that the class holds unmanaged resources that need releasing in a deterministic manner. An easy example is a file stream: when you ask .NET to open a file for you, it calls into the operating system and tries to open the file. But let's imagine that there's some problem and your code throws an exception. Your application has exception handling code, and so proceeds on its way. But you never closed that file, so as far as the OS is concerned, you still have it open.

You now try to examine the file in another program, but when that program asks the OS to open the file, it refuses! It's possible that the GC will take care of this when it next runs, but who knows when that will be? Calling GC.Collect all the time is a bad idea (in terms of performance) and isn't guaranteed to fix your problem anyway. This is where IDisposable comes to the rescue: the FileStream class implements IDisposable, so it has a Dispose method that closes these sorts of resources, and the 'using' statement will call Dispose for you automatically, even if an exception is thrown.
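A small sketch of deterministic cleanup with 'using' (the class name is illustrative):

```csharp
using System;
using System.IO;

class UsingDemo
{
    static void Main()
    {
        string path = Path.GetTempFileName();

        // 'using' guarantees Dispose is called when the block exits,
        // even if an exception is thrown, so the OS file handle is
        // released deterministically rather than whenever the GC runs.
        using (var stream = new FileStream(path, FileMode.Open))
        {
            stream.WriteByte(42);
        }   // stream.Dispose() runs here, closing the file

        // Because the handle was closed, reopening succeeds immediately.
        using (var reopened = new FileStream(path, FileMode.Open))
        {
            Console.WriteLine(reopened.ReadByte());   // prints 42
        }

        File.Delete(path);
    }
}
```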


Interop is a fascinating world. .NET itself is, in many ways, one giant mass of interop. Whenever you use anything that implements IDisposable, there's a good chance that there's some interop going on behind the scenes. Of course, if you're working with .NET, analysis tools like ANTS Memory Profiler can make your life a lot easier, whereas there are fewer shortcuts available if you're working with unmanaged code or interop. That said, with Windows 8 and WinRT looming in the distance, interop is going to become both more important and much easier.