Metric-Driven Refactoring with .NET Reflector

Software metrics are a great way of suggesting those parts of your code that may need refactoring, as long as they are treated purely as an aid to judgement. Nick Harrison shows how metrics can be calculated simply with .NET Reflector and Peli's CodeMetrics Add-in, and explains some details of the 'smelly metrics' that are often used.

The Bane of Software Metrics / the Joy of Software Metrics

The very mention of software metrics strikes fear into the hearts of most developers. Without a doubt, metrics have been used for evil, but fear not: they can be used for good as well. We will try to shed some light on how metrics can actually be useful. To be useful, a metric must meet three criteria. It must

  • be easily understood
  • be easily calculated
  • provide useful data

In general, any metric must be repeatable and it must produce quantitative values if it is to provide useful data. If a metric is run on some code, and it produces a value of 10, it should mean the same thing every time it produces a value of 10. We should be able to track that our code is improving by rerunning the metrics.

A metric must also be valid. What, for example, does the common metric 'lines of code' tell us? Are more lines of code better, or fewer? And what constitutes a line anyway? This favored metric is easily calculated, but it is not well understood and does not, by itself, give us significant information.

‘Smelly Metrics’

The refactoring literature uses ‘code smells’ as guidelines for identifying questionable code and suggesting remedies. A smell implies a coding problem like “long method” and has associated refactoring techniques such as “extract method” to target that smell.

A ‘long method’ is an excellent example. The smell refers to a nebulous concept of ‘long‘. We can refine this code smell with the quantifiable metric ‘number of instructions’. We will use ‘Number of Instructions’ instead of ‘lines of code’ for several reasons: we can avoid the nuances of style, we can have a metric that applies to all .NET languages, and, by taking advantage of a wonderful tool, we can sidestep the whole question of what constitutes a line.

We don’t have to pore over the entire code base looking for “long” methods. We can use a metrics tool to identify the methods that have more than 100 IL instructions, or whatever guideline seems reasonable for your project.

Many smells and refactors attempt to identify and target complex code. Complex code is more error prone. Complex code is more difficult to debug and maintain. But this concept of “complex” is difficult to define! Or is it …?

It turns out that it may not be so difficult to define after all. ‘Cyclomatic Complexity’ is fairly easily understood, has excellent tool support, and produces useful data. The metric of ‘Cyclomatic Complexity’ tracks the number of logical paths through a block of code. Among other things, this tells us the number of test cases needed to test the block of code.

Regardless of the code being measured, a value of ten means that ten separate test cases are needed to test the code thoroughly. Once we’ve understood this, we know that our goal needs to be to lower the values for ‘Cyclomatic Complexity’ in our code. Methods with complexity values of less than ten will be easier to debug, less likely to contain errors, and be more easily maintained.
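To make the counting concrete, here is a small hypothetical method, sketched in Java for illustration (the same counting rules apply in C# or any other .NET language): the method's base path counts one, and each decision point adds one.

```java
public class GradeMapper {
    // Cyclomatic complexity = 1 (base path) + 3 (decision points) = 4,
    // so four test cases are needed to exercise every path.
    public static String letterGrade(int score) {
        if (score >= 90) {          // decision point 1
            return "A";
        } else if (score >= 80) {   // decision point 2
            return "B";
        } else if (score >= 70) {   // decision point 3
            return "C";
        }
        return "F";                 // the remaining path
    }
}
```

Four assertions, one per path, are enough to cover this method; every decision point we add would demand another.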

We can run a metrics tool against our code base and quickly identify methods with more than 200 IL instructions or a complexity greater than ten. These methods will be the ones most in need of refactoring. Depending on the state of your code, you may need to adjust these benchmarks.

Because they help us identify smelly code that is in need of refactoring, ‘Number of Instructions’ and ‘Cyclomatic Complexity’ are our ‘smelly metrics’.

Enter .NET Reflector

We have talked about the need for a metrics tool. .NET Reflector is such a tool, and it is easy to extend with several useful plug-ins. We are most interested in the Code Metrics plug-in by Jonathan ‘Peli’ de Halleux, Simple-Talk’s Geek of the Week. Among the various metrics it provides, our friends ‘Number of Instructions’ and ‘Cyclomatic Complexity’ are built in. We simply select the assemblies that we are interested in, run the metrics, and review the results.

Add the Add-in

Start by downloading the Add-in. You can find it here. This will download a zip file. Extract its contents to a known location. Once you have the files extracted, select Add-ins from the View menu, and add the Add-in by selecting the Reflector.CodeMetrics.dll that you just extracted.


Once this Add-in is loaded, you will have a new Code Metrics item on the Tools menu. This can also be reached by pressing Ctrl+E.

When you run Code Metrics, you will see the Code Metrics window, which lists the assemblies that you have loaded. Select the assemblies that you are interested in, and then click the Start Analysis button.


Once the analysis is complete, we are ready to interpret the results. We are interested in the “Method Metrics”.


We can sort by any of the columns and quickly identify the code of interest. Note that the values for the metrics do not always line up. It is possible for a method to have a high complexity value and a low number of instructions. It is also possible for a method to have a low complexity value and a relatively high number of instructions. If a method breaches either threshold, it is likely to be one we’d wish to refactor.

Once we have identified methods that breach the thresholds, .NET Reflector allows us to easily navigate to the disassembled source code and confirm whether the identified code needs refactoring. This is not to say that all code that breaches the thresholds is bad, but such code does need to be reviewed and carefully monitored.

Common Refactors

Our ‘smelly metrics’ make it easy to identify problematic code, but what do we do about it? The refactoring literature provides useful guidance for pinpointing the smell of bad code and identifying relevant refactors.

Many refactors focus on reducing complexity of code and clarifying its structure. We are most interested in these refactors:

Extract Method

“Extract Method” is perhaps the easiest refactor to understand and implement. It is also the quickest way to reduce the “number of instructions” in a method, but we can’t just start randomly extracting code into new methods.

One problem with long methods is that they are not focused. “Extract Method” gives us an opportunity to bring focus to these methods. When we identify code in a method that is not directly related to the main purpose of the method, this is a good choice for code to be extracted. This helps bring focus to the original method and may eliminate unnecessary side effects. For example, a method called “ValidatePassword” that includes logic to disable the login account would not be very focused and disabling the account would be a side effect. By extracting the code to disable the account to a new method called “DisableLogin”, the method will be focused and we eliminate the side effect.
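A minimal sketch of that example, in Java for illustration (the class name, field names, and the three-attempt threshold are all hypothetical): after “Extract Method”, the account-disabling side effect lives in its own, clearly named method instead of being buried inside the validation logic.

```java
public class LoginService {
    private int failedAttempts = 0;
    private boolean loginDisabled = false;

    // Before the refactor, this method also disabled the account inline;
    // now that side effect is delegated to a named method.
    public boolean validatePassword(String password, String expected) {
        if (password != null && password.equals(expected)) {
            failedAttempts = 0;
            return true;
        }
        failedAttempts++;
        if (failedAttempts >= 3) {
            disableLogin();   // extracted: the side effect now has a name
        }
        return false;
    }

    private void disableLogin() {
        loginDisabled = true;
    }

    public boolean isLoginDisabled() {
        return loginDisabled;
    }
}
```

The validation method stays focused on validating, and anyone searching for the account-disabling behavior finds it by name.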

Often, we will comment a block of code to help explain what we are doing. If a block of code needs a comment to explain what it is doing, then that block may be a good candidate to be extracted, with the comment forming the name of the new method. For example, a method called “ValidatePassword” may include a sizable block of code to validate the password’s age. Instead of surrounding that block with comments explaining that this is the code to validate the password’s age, extract it to a new method called “ValidatePasswordAge”. This new method will now be a central repository for all password age validation logic, and will keep the original and new methods more focused. If you are interested in the age validation logic, you won’t have to wade through all the other validation logic. If you are interested in the password validation logic as a whole, you won’t get bogged down in the minutiae of the age validation logic.
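Sketched in Java, with a hypothetical 90-day maximum age and eight-character minimum length, the comment-turned-method-name might look like this:

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

public class PasswordPolicy {
    private static final long MAX_AGE_DAYS = 90;   // hypothetical policy value

    public static boolean validatePassword(String password,
                                           LocalDate lastChanged,
                                           LocalDate today) {
        if (password == null || password.length() < 8) {
            return false;
        }
        // The age check that once needed an explanatory comment
        // is now a method whose name *is* the comment.
        return validatePasswordAge(lastChanged, today);
    }

    private static boolean validatePasswordAge(LocalDate lastChanged,
                                               LocalDate today) {
        return ChronoUnit.DAYS.between(lastChanged, today) <= MAX_AGE_DAYS;
    }
}
```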

Redundant code is also a good candidate for extraction. Sometimes this is easy to spot; sometimes it is not. If you have a block of code that needs to be run in a loop and then run again outside the loop, this is easy to spot. If you have a block of code that is essentially the same as another except for a few varying values, this is also a good candidate for extraction, though not so obvious. The varying values simply become parameters to the new method.
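A sketch of the less obvious case, with hypothetical tax rates: two methods that once held near-identical totaling loops now share one extracted method, and the value that varied between them has become a parameter.

```java
public class InvoiceTotals {
    // Two near-duplicate blocks differed only in the tax rate applied;
    // the varying value becomes a parameter of the extracted method.
    public static double domesticTotal(double[] lineItems) {
        return totalWithTax(lineItems, 0.07);  // hypothetical domestic rate
    }

    public static double exportTotal(double[] lineItems) {
        return totalWithTax(lineItems, 0.0);   // hypothetical export rate
    }

    private static double totalWithTax(double[] lineItems, double taxRate) {
        double subtotal = 0;
        for (double item : lineItems) {
            subtotal += item;
        }
        return subtotal * (1 + taxRate);
    }
}
```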

Replace Method with Method Object

“Replace Method with Method Object” is similar to “Extract Method”, only for more extreme cases. Sometimes a long, complicated method becomes too difficult to extract methods from. Sometimes the original method is a tangled mess of reused local variables that is difficult to tease apart in a meaningful way. “Replace Method with Method Object” gives us a strategy for dealing with such methods. This will usually not be the final state of the refactor, but it does provide an excellent strategy for getting there.

We start by moving the code for the problematic method to a new class. The local variables in the original method become member variables in the new class. The original method survives as a public entry point on the new class. By promoting the local variables to member variables, we can more easily extract methods without having to tease apart the variable usage. Each extracted method becomes a private method in the new class. As we reduce the complexity, we are left with more methods, but each one is more easily understood. The meaning and usage of the variables become easier to follow. As the meaning of the variables becomes clearer, we can eliminate the member variables. Once we have eliminated them, we may be able to move the assorted methods back into the original class.
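Here is a minimal sketch of the resulting shape, in Java with hypothetical pricing rules: the original method survives as a static entry point that creates the method object, and the extracted helpers share state through member variables rather than a tangle of parameters.

```java
public class PriceCalculator {
    // Locals of the original tangled method, promoted to member variables
    // so extracted methods can share them freely.
    private final double basePrice;
    private double discount;
    private double tax;

    private PriceCalculator(double basePrice) {
        this.basePrice = basePrice;
    }

    // The original method's signature survives as a static entry point.
    public static double compute(double basePrice) {
        return new PriceCalculator(basePrice).run();
    }

    private double run() {
        applyDiscount();
        applyTax();
        return basePrice - discount + tax;
    }

    private void applyDiscount() {
        // Hypothetical rule: 10% off orders over 100
        discount = basePrice > 100 ? basePrice * 0.10 : 0;
    }

    private void applyTax() {
        // Hypothetical rule: 5% tax on the discounted price
        tax = (basePrice - discount) * 0.05;
    }
}
```

Callers see no change, which is exactly what lets us refactor the inside at leisure.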

Decompose Conditional

“Decompose Conditional” steps you through the process of making conditional branching easier to follow. Instead of having an “if” statement with several “and”s and “or”s, the “if” statement can call a function that returns a Boolean. The complexity of the original function is reduced, the new method only has to worry about evaluating the conditional, so its complexity is minimal, and the original function is much easier to follow.
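A sketch with hypothetical shipping rules: the compound condition moves into a Boolean function whose name says what it decides, leaving the caller with a single readable branch.

```java
public class ShippingRules {
    // Before: if (orderTotal > 100 && !international && (isMember || hasPromoCode)) ...
    // After "Decompose Conditional", the condition gets a name of its own.
    public static boolean qualifiesForFreeShipping(double orderTotal,
                                                   boolean international,
                                                   boolean isMember,
                                                   boolean hasPromoCode) {
        return orderTotal > 100 && !international && (isMember || hasPromoCode);
    }

    public static double shippingCost(double orderTotal, boolean international,
                                      boolean isMember, boolean hasPromoCode) {
        if (qualifiesForFreeShipping(orderTotal, international, isMember, hasPromoCode)) {
            return 0.0;
        }
        return international ? 25.0 : 8.0;  // hypothetical flat rates
    }
}
```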

Other Metrics

There are some additional metrics in the “Module Metrics” section that are worth mentioning. These deal, at a much higher level, with how the assembly as a whole is structured.

Afferent Coupling

Afferent literally means “carrying towards”. Afferent Coupling measures how many external classes are dependent on classes in the module. Zero implies that no external classes use the assembly’s classes. The closer an assembly is to zero, the more easily changes can be made without impacting other parts of the system. The larger the number, the more dependent other assemblies are on this assembly, and the more likely that changes will impact other parts of the system.

Efferent Coupling

Efferent literally means “carrying away from”. Efferent Coupling measures how many external classes a given assembly depends on. Zero implies that the classes in the assembly have no external dependencies. The larger the number, the more dependent this assembly is on the rest of the system, and the more easily it is affected by changes in other parts of the system.


Instability

Instability is defined in terms of the two forms of coupling: it is the value of Efferent Coupling divided by the sum of Efferent and Afferent Coupling. As this value approaches zero, we have more incoming dependencies than outgoing dependencies. This implies an assembly that would be difficult to change. As this value approaches 1, we have more outgoing dependencies than incoming dependencies. This implies an assembly with few things dependent on it. Such an assembly should be easily changed.
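The calculation itself is trivial; this sketch simply encodes the definition, guarding against the degenerate case of an assembly with no couplings at all:

```java
public class CouplingMetrics {
    // Instability I = Ce / (Ce + Ca): efferent coupling over total coupling.
    public static double instability(int afferent, int efferent) {
        if (afferent + efferent == 0) {
            return 0.0;  // no couplings either way; treat as maximally stable
        }
        return (double) efferent / (afferent + efferent);
    }
}
```

An assembly with 30 incoming and 10 outgoing dependencies scores 0.25, leaning stable; one with no incoming dependencies scores 1.0, free to change.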

Our goal here is to have values close to either extreme depending on the nature of the assembly. Presentation assemblies should approach 1 and data access assemblies should approach 0. We will never reach either extreme, but our goal is to approach them.


Abstractness

“Abstractness” measures the proportion of abstract types in an assembly: it is the number of abstract classes divided by the total number of classes, where abstract classes are counted together with interfaces. Abstractness provides a measurement of flexibility. A value approaching 1 implies a higher degree of abstraction and more flexibility. A value approaching 0 implies a lower degree of abstraction and less flexibility.
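As with Instability, the arithmetic is simple enough to sketch directly from the definition (abstract classes and interfaces counted together, divided by the total number of types):

```java
public class AbstractnessMetric {
    // Abstractness A = abstract types / total types, where "abstract types"
    // counts abstract classes and interfaces together.
    public static double abstractness(int abstractTypes, int totalTypes) {
        if (totalTypes == 0) {
            return 0.0;  // empty assembly; nothing to measure
        }
        return (double) abstractTypes / totalTypes;
    }
}
```

An assembly with 3 abstract types out of 10 scores 0.3; one that is all interfaces scores 1.0.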


Conclusion

We have explored two metrics that are easily calculated and interpreted using .NET Reflector. We have seen how these metrics can quickly identify potentially problematic code.

If you strive to keep your methods below 200 IL instructions and with a complexity below ten, your code will be more easily followed and more easily maintained.

We have also explored four module-level metrics that help reveal how well our assemblies interact with each other. The guidelines developed there will help you effectively structure your solutions into projects to ease maintenance requirements.
