.NET Security Part 1

Ever since the first version of .NET, it has been possible to strictly define the actions and resources a particular assembly can use; using Code Access Security, permissions to perform certain actions or access certain resources can be defined and modified in code. In .NET 4, this system was completely overhauled. Today, I’ll start looking at the .NET 4 security model: what it is, how you use it, and what you can do with it.

Partial and full-trust assemblies

Most developers aren’t affected by the .NET 4 security model, because it only applies to assemblies loaded as partial-trust. Assemblies loaded directly as part of a desktop application, as part of a web application in a full-trust appdomain (the default), or loaded as UNSAFE into SQL Server, all run as full trust. This means they have full access to the system and can do whatever they want, with no restrictions.

But when an assembly is loaded into a partial-trust appdomain, the actions and resources it can access are limited to the permissions that have been granted to it. For example, partially-trusted code can only read a file on disk if it has explicitly been given FileIOPermission to read that file, and it can only access the registry if it has been given RegistryPermission to do so (all the permissions that can be granted to and denied from partial-trust assemblies inherit from System.Security.CodeAccessPermission). This limits what untrusted assemblies can do, and stops assemblies containing potentially damaging code from accessing the system and doing something they shouldn’t (say, formatting the disk).

When a certain permission is required to perform an action, the entire call stack leading up to the call performing that action (for example, File.OpenRead or Registry.OpenSubKey) is checked for that permission. If there is any method on the call stack that is running as partial-trust, and has not explicitly been granted the required permission, then the call fails with a SecurityException, even if the partially-trusted method that failed the permission check is many stack frames down.

This ensures that there is no way for partially-trusted code to get round permissions that haven’t been granted to it by delegating to or exploiting something else that does have the permission. If the partial-trust code is running, then it is on the call stack. And if it’s on the call stack, then it cannot directly or indirectly perform actions that it hasn’t been given permissions for.

But this is a problem – there are legitimate situations in which partially-trusted code needs to perform security-critical actions, even if it hasn’t been given permissions to do so directly. For example, a desktop application (running as full trust) loads an addin into a partial trust appdomain. The desktop app provides an API to the addin to update values in a configuration file on disk, but doesn’t grant general read-write access to the config file. However, when the addin tries to update the config file through the API provided by the full-trust application, such updates will always fail with a SecurityException, because the partial-trust code in the addin doesn’t have permissions to write to the filesystem, even though it is the full-trust code that is actually writing to the config file.

So, there needs to be a way for full-trust code to override the permission check in a secure way such that it can perform security-critical actions on behalf of partial-trust code, once it has verified the partial-trust code is not misbehaving. There are two features that allow this – permission asserts, and security transparency.

Permission demands and asserts

To start a stack walk to check for a certain permission, you simply demand it, either using a method call:
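For example, a minimal sketch of an imperative demand for read access to a file (the path here is illustrative):

```csharp
using System.Security.Permissions;

public void ReadConfig()
{
    // Demand read access to the file; this starts a stack walk.
    // If any caller on the stack is partially-trusted and lacks
    // FileIOPermission for this path, a SecurityException is thrown.
    new FileIOPermission(FileIOPermissionAccess.Read, @"C:\app\app.config")
        .Demand();

    // ... safe to read the file here ...
}
```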

or using an attribute, which demands the permission when the method is called. This is functionally identical to calling Demand() as the first statement in the method:
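A sketch of the equivalent declarative form (again, the path is illustrative):

```csharp
using System.Security.Permissions;

public class ConfigReader
{
    // Demands the permission whenever the method is called,
    // exactly as if Demand() were the first statement in the body.
    [FileIOPermission(SecurityAction.Demand, Read = @"C:\app\app.config")]
    public void ReadConfig()
    {
        // ... read the file ...
    }
}
```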

(Note that demanding a FileIOPermission directly isn’t normally required, as the BCL methods that access the filesystem all demand the appropriate permissions themselves).

Then, to stop a stack walk for a permission from checking past the current stack frame and hitting partially-trusted code, you assert the same permission before calling the method that demands it. Again, you can do this in code:
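A minimal sketch of an imperative assert (the path and the WriteToFile helper are illustrative; WriteToFile stands in for any method that demands the permission internally):

```csharp
using System.Security;
using System.Security.Permissions;

public void UpdateConfigValue(string key, string value)
{
    // Assert the permission, so the demand triggered inside
    // WriteToFile stops its stack walk at this frame instead of
    // continuing down into partially-trusted callers.
    new FileIOPermission(FileIOPermissionAccess.Write, @"C:\app\app.config")
        .Assert();
    try
    {
        WriteToFile(key, value); // demands FileIOPermission internally
    }
    finally
    {
        // Good practice: revert the assert as soon as it's no longer needed.
        CodeAccessPermission.RevertAssert();
    }
}
```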

or using an attribute:
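A sketch of the declarative equivalent, which asserts the permission for the duration of the method call (path illustrative):

```csharp
using System.Security.Permissions;

public class ConfigApi
{
    // The permission is asserted when the method is entered,
    // and reverted when it returns.
    [FileIOPermission(SecurityAction.Assert, Write = @"C:\app\app.config")]
    public void UpdateConfigValue(string key, string value)
    {
        // ... validate arguments, then write to the file ...
    }
}
```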

After a permission has been asserted, any security check for that permission triggered by method calls after that point in the same method stops at that stack frame.

For this to be effective, there has to be a way to stop partially-trusted code from simply asserting whatever permissions it wants, while still allowing it to call trusted code that can assert those permissions. This is where security transparency comes in.

Security transparency

There are three security levels that code runs under – Transparent, SafeCritical, and Critical. Each imposes restrictions on what the code can do and what it can call, independent of any code access permissions applied to it:

Transparent

This is the security level all partially-trusted code runs under. There are several restrictions imposed on transparent code; in particular, transparent code cannot do the following:

  • Call Critical-security code (but it can call SafeCritical code).
  • Assert additional permissions.
  • Contain unsafe or unverifiable code.
  • Call P/Invoke methods.
  • Override or inherit Critical types and methods.
SafeCritical

This is the ‘broker’ between Transparent and Critical-security code. Transparent code cannot call Critical code directly, but it can call SafeCritical code, which in turn can call Critical code. SafeCritical code verifies the caller isn’t trying to do something it shouldn’t, then either performs the action itself or passes it to a Critical method. There is no restriction on what SafeCritical code can do.
Critical

This is the level all fully-trusted code runs under. There is no restriction on what Critical code can do.

This diagram shows the relationships between the different levels, and what each level can call (green represents allowed method calls between levels, red disallowed calls):

[Diagram: allowed and disallowed calls between Transparent, SafeCritical, and Critical code]

Putting it together

Permission asserts and security transparency allow a partially-trusted assembly (running transparent-security code) to call an API in a fully-trusted assembly (running Critical and SafeCritical code) to perform actions that the partially-trusted assembly doesn’t itself have permissions for:
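For example, here is a sketch of how a fully-trusted assembly might expose the config-file API from the earlier addin scenario (the type, member names, path, and validation logic are all illustrative):

```csharp
using System;
using System.IO;
using System.Security;
using System.Security.Permissions;

// Lives in the fully-trusted host assembly, but is callable
// from the addin's Transparent code because it is SafeCritical.
public class ConfigService
{
    private const string ConfigPath = @"C:\app\app.config";

    [SecuritySafeCritical]
    public void SetValue(string key, string value)
    {
        // First, verify the partially-trusted caller isn't misbehaving.
        if (key.Contains("=") || key.Contains("\n"))
            throw new ArgumentException("Invalid config key", "key");

        // Stop the FileIOPermission stack walk at this frame, so the
        // addin's Transparent frames further down aren't checked.
        new FileIOPermission(FileIOPermissionAccess.Append, ConfigPath)
            .Assert();
        try
        {
            // File.AppendAllText demands FileIOPermission internally;
            // the demand is satisfied by the assert above.
            File.AppendAllText(ConfigPath,
                key + "=" + value + Environment.NewLine);
        }
        finally
        {
            CodeAccessPermission.RevertAssert();
        }
    }
}
```

The addin’s Transparent code simply calls ConfigService.SetValue; it never needs (and is never granted) FileIOPermission itself.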

Next time, we’ll look at how you create a partially-trusted appdomain in your own code, and how you can run fully-trusted code in a partially-trusted appdomain.