A TDD Journey: 2- Naming Tests; Mocking Frameworks; Dependency Injection

Test-Driven Development (TDD) relies on the repetition of a very short development cycle: starting from an initially failing automated test that defines the required functionality, then producing the minimum amount of code to pass that test, and finally refactoring the new code. Michael Sorens continues his introduction to TDD, a journey in six parts, by implementing the first tests and introducing the topics of Test Naming, Mocking Frameworks, and Dependency Injection.

In part 1, I gave just enough background material on TDD so you could follow along on the TDD we are about to do. This and subsequent parts in this series will take you right inside my development environment so you can watch and participate as we develop a small component using TDD methods. So let’s get right to it!

The class we want to write is extremely simple but will serve to illustrate quite a number of useful points. To set the stage, say we have agreed on an API that provides a component called a WidgetActivator with a public method called Execute. Execute needs to prepare a widget for activation by loading and publishing its stuff. (We will refine just what “stuff” is later :-).

So where to begin? Refer back to part 1 which suggested thinking about behaviors rather than tests, and introduced Dan North’s list of key questions to use as a guide.

  1. Where to start?
  2. What to test/not to test?
  3. How much to put in one test?
  4. What to name a test?
  5. How to understand why a test fails?

To determine where to start (1) and what to test/not test (2), simply ask yourself:

What’s the next most important thing the system doesn’t do?

This question should yield some behavior “B”. Each behavior should be straightforward; simple. Start there, then repeat until you “run out of value” for the current task.

To determine how much to include in a test (3) and what to call it (4), write the behavior “B” as a sentence “S0”. If the sentence is too long or convoluted, split it up into multiple sentences S0, S1, etc. How to tell if it is too long? If, for example, you have an “and” in there, e.g. “Converse should return an output and check a status”, then it often (even usually) signifies that you are testing more than one behavior, so split it up. Each resultant sentence Sn names a test, quite literally. The convention I like to use is just to replace spaces in the sentence with underscores and perhaps omit non-essential connecting words for brevity. So if we have the sentence “Build method throws InvalidInputException when input is empty” the corresponding test name is just Build_throws_InvalidInputException_when_input_is_empty.

Test Naming

Now the content of these constructed sentences is not as freeform as it would appear. Every such sentence should follow this pattern:

<method> <behavior> <scenario>

…where you can think of <behavior> as answering what does it do? and <scenario> as answering under what conditions? Applying that to the sample sentence above: Build is the method, throws InvalidInputException is the behavior, and when input is empty is the scenario.

For comparison, what might be considered the standard (propounded by the respected TDD authority, Roy Osherove, in his book The Art of Unit Testing) is <method><scenario><behavior>. That is more than just a re-ordering, though, as his approach is more a list of bullet points than a sentence. For his example of AnalyzeFile_FileWith3LinesAndFileProvider_ReadsFileUsingProvider…

I mentally translate that to…

  • Analyze File
  • FileWith3LinesAndFileProvider
  • ReadsFileUsingProvider

…which I find harder to digest than the smooth flow of a sentence. Yes, you do have a clearer separation between method name, scenario, and behavior, but at the cost of legibility: it is harder to read a sentence with no visual separation between words.


So… as stated at the top of this section, we want to create a WidgetActivator. It has an Execute method. And that method needs to delegate to a loader component to load the details of the widget. Our first behavior, stated as a single sentence, is then Execute delegates to IWidgetLoader to load the widget details. This sentence becomes the name of our first test. Let’s use TDD now to write a failing test, then make it pass. I am going to emulate the red-green-refactor cycle as we go. The test code shows up nominally with a red header bar because normally, when you first write the test code, it is supposed to fail.

TEST: Write one line of code in the first test. We are testing a WidgetActivator (per the class name) so create one.
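A minimal sketch of that first line, assuming an NUnit-style [Test] attribute and the names used so far in this article:

```csharp
[Test]
public void Execute_delegates_to_IWidgetLoader_to_load_widget_details()
{
    // The class under test; this will not compile until WidgetActivator exists.
    var widgetActivator = new WidgetActivator();
}
```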

Mocking Framework

For our next step, refer back to the test name: the WidgetActivator needs to delegate to a WidgetLoader or, perhaps more precisely stated, to an object that implements the IWidgetLoader interface.

But in a unit test we want to stay focused on the class under test. We do not want an instance of a WidgetLoader that might go off and talk to a database; we want an instance that we have more fine-grained control over. So we are going to mock the object with moq, a mocking framework. (Of course, you can substitute your own favorite mocking framework if you so choose.) Though moq is a powerful and flexible mocking framework, it is still quite easy to get started with. Here are some basics; you will see more later as needed.

Moq provides two useful syntaxes. First is Linq-to-Mocks. Yes, Microsoft’s Language Integrated Query, or LINQ, provides a wonderfully convenient mechanism for detailing what a mock needs to do. As a simple example, consider this:

Per the Linq-to-Mocks page I linked to above, the way to read this line is: from the universe of IProvider mocks, use one that returns a given string when its Handles() method is invoked. This Linq-to-Mocks syntax is quite handy, but not as rich as moq’s traditional imperative syntax. Essentially the same thing can be written imperatively like this:
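A sketch of the imperative equivalent, using the same illustrative IProvider:

```csharp
// Imperative moq syntax: create the mock, configure Handles(),
// then pull out the mocked instance via the Object property.
var mockProvider = new Mock<IProvider>();
mockProvider.Setup(p => p.Handles()).Returns("widget");
IProvider provider = mockProvider.Object;
```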


TEST: We next create the needed mock WidgetLoader. We are going to use moq’s traditional imperative syntax for this one; you will see why shortly.
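A sketch of the test as built so far, with the mock added:

```csharp
[Test]
public void Execute_delegates_to_IWidgetLoader_to_load_widget_details()
{
    var widgetActivator = new WidgetActivator();

    // Create a mock loader using moq's imperative syntax.
    var mockWidgetLoader = new Mock<IWidgetLoader>();
}
```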


Next we need to supply the WidgetLoader mock to the WidgetActivator. But there are a variety of ways to do this. Should we pass the WidgetLoader to the Execute method? Should we give WidgetActivator a property with a setter? Should the WidgetActivator constructor accept it? Remember that the test should drive the code construction, so the test itself should guide the design choice. It seems, then, that this test (by its name) is attempting to do too many things at once. Let’s rename it from Execute_delegates_to_IWidgetLoader_to_load_widget_details to WidgetActivator_constructor_accepts_an_IWidgetLoader.

TEST: Wire the WidgetLoader to the WidgetActivator by Dependency Injection. (Note that we are passing in our mock loader, which is the Object property of mockWidgetLoader.)
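With the rename from the previous step applied, a sketch of the test at this point:

```csharp
[Test]
public void WidgetActivator_constructor_accepts_an_IWidgetLoader()
{
    var mockWidgetLoader = new Mock<IWidgetLoader>();

    // Dependency injection: pass the mocked instance (the mock's
    // Object property) to the constructor.
    var widgetActivator = new WidgetActivator(mockWidgetLoader.Object);
}
```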

Dependency Injection

In this step all we did was pass a parameter to an object’s constructor. Simple, yes, but it actually introduces a powerful design pattern: dependency injection. When writing tests first, it is difficult not to use dependency injection; it flows naturally, almost automatically. But if you are not using TDD, and writing your production code first, DI can be neglected or overlooked. What is DI? James Shore in Dependency Injection Demystified states it best:

“Dependency Injection” is a 25-dollar term for a 5-cent concept… The Really Short Version: Dependency injection means giving an object its instance variables. Really. That’s it.

The main benefits of using DI are:

  • Promotes loose coupling and therefore makes the code more maintainable.
  • Makes the application easier to test because, just as you see here, we can provide a mock object instead of a real one in the unit test.
  • Reduces and consolidates boilerplate code.

Let’s take a concrete example. In the first listing below, without dependency injection, MyWidget creates its own worker object, MyComponent. In the second, the main program creates both a MyWidget and a MyComponent object, then passes that MyComponent object to the MyWidget object. There are a few small changes inside the MyWidget class to accommodate this. The constructor now takes an argument and simply assigns the argument passed in to its private _myComponent field. Also, note that _myComponent is now of type IMyComponent instead of just MyComponent (the “I” prefix on the type name is a common convention indicating an interface). That adds extensibility and lets us, for example, create a test object that also implements IMyComponent and pass that in for testing purposes.

Without DI

```csharp
static int Main(string[] args)
{
    var w = new MyWidget();
}

public class MyWidget
{
    private MyComponent _myComponent;

    public MyWidget()
    {
        _myComponent = new MyComponent();
    }
}
```

With DI

```csharp
static int Main(string[] args)
{
    var c = new MyComponent();
    var w = new MyWidget(c);
}

public class MyWidget
{
    private IMyComponent _myComponent;

    public MyWidget(IMyComponent myComponent)
    {
        _myComponent = myComponent;
    }
}
```

In our WidgetActivatorTest class above, you saw how we injected a mock WidgetLoader into the WidgetActivator. What you will not see is how the production code injects a real WidgetLoader at runtime. In production code you should not be doing the injection manually as we have done in the test code or the sample above; rather, you should use an IoC container. While IoC containers are beyond the scope of this article, I will provide a few useful links for further reading. Matthew Dennis itemizes the popular IoC containers for the .NET world in Introduction to Munq IOC Container for ASP.NET (a third of the way or so down there is a nice chart showing relative performance of each). Scott Hanselman’s slightly older List of .NET Dependency Injection Containers is informative as well. Finally, Andrey Shchekin has published a detailed feature comparison of 19 different .NET IoC containers.


CODE: That completes the first test method which, as it stands, does not even compile. So we have, as we should, a failing test. Remember that failure can manifest as either failing to compile or, if compilable, then failing to pass. So now we turn to the production code, the class-under-test, to make the test pass.
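A minimal sketch of the production code needed to satisfy the compiler (note that the constructor does not yet store its argument):

```csharp
public interface IWidgetLoader
{
    // No members yet; the test only requires that the type exists.
}

public class WidgetActivator
{
    public WidgetActivator(IWidgetLoader widgetLoader)
    {
        // Accept the dependency; we will store and use it in later steps.
    }
}
```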

Implicit Assumptions

We have now added the appropriate production code to allow the test to compile. Furthermore, if you execute the test you will see it passes. That is darn peculiar given the fact that we have not added any assertions yet! That is, a typical test concludes with an assertion to verify something, like the result was true, or the method returned a non-null object, or the string value was abc. But sometimes, like in this case, that assertion is implicit: as long as we can get to the end of the test method without throwing an exception, it is a passing test. In this case, the implicit assertion is specifically that we could successfully create a WidgetActivator that accepts an IWidgetLoader.


Now let’s return to our original first test: Execute_delegates_to_IWidgetLoader_to_load_widget_details.

TEST: The test name says we want to call the Execute method on our class. Then we add an explicit assertion to do what the test name asks for: to confirm that we called Load on the mock exactly one time. Here we use moq’s handy Verify method.
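Assuming a parameterless Load() method on the interface, a sketch of the built-out test:

```csharp
[Test]
public void Execute_delegates_to_IWidgetLoader_to_load_widget_details()
{
    var mockWidgetLoader = new Mock<IWidgetLoader>();
    var widgetActivator = new WidgetActivator(mockWidgetLoader.Object);

    widgetActivator.Execute();

    // Explicit assertion: Load() must have been called exactly once.
    mockWidgetLoader.Verify(loader => loader.Load(), Times.Once());
}
```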

CODE: Make the code compile by adding the Execute method to the WidgetActivator class, and the Load method on the WidgetLoader interface.
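A sketch of just enough production code to compile (Execute deliberately does nothing yet):

```csharp
public interface IWidgetLoader
{
    void Load();
}

public class WidgetActivator
{
    public WidgetActivator(IWidgetLoader widgetLoader)
    {
    }

    public void Execute()
    {
        // Empty for now: Load() is never called, so the Verify assertion fails.
    }
}
```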

That gives us a clean compile so the assertion runs… but it fails. Which is to say, the Load method is not being called exactly one time by the Execute method. In fact, it is not being called at all because the Execute method does not do anything yet. So add more code to get it to pass. The lines added here exactly mirror those you saw in the DI introduction earlier.

CODE: Store the injected dependency in a private field so that we can then use it in subsequent methods.
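The completed class might then look like this sketch:

```csharp
public class WidgetActivator
{
    private readonly IWidgetLoader _widgetLoader;

    public WidgetActivator(IWidgetLoader widgetLoader)
    {
        // Store the injected dependency for use in other methods.
        _widgetLoader = widgetLoader;
    }

    public void Execute()
    {
        // Delegate to the loader, satisfying the Verify assertion.
        _widgetLoader.Load();
    }
}
```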

With those last bits, we are now invoking the Load method of the IWidgetLoader, so we have completed the build-out of the test and just enough production code to make it pass.

Subsequent articles in this series will continue to interleave building real code with introducing new tools, concepts, and techniques to streamline the process. It is a long road yet ahead, but my intent is to make your learning curve a gentle incline rather than a steep precipice, so by the time you finish you will be well-prepared to take the final leap (so to speak 🙂) to applying TDD to your own work!