As discussed in Part 1, by separating the work of making our code behave as expected from the work of making it well structured, we gain the ability to eliminate wasteful and error-prone manual chores. That by itself is a really big win, and I often use that aspect of TDD as an “I rest my case” argument in support of embracing TDD.
But there is much more goodness coming our way as we continue mastering the TDD process. Let’s first look into the difficulties we’re facing when attempting to modify the code.
Code as text
The software development profession evolved from the arcane discipline of wrangling obtuse binary code (0s and 1s), to assembly code (a higher level of abstraction, but still very close to the metal), and then all the way to high-level languages, whose source code gets parsed into an abstract syntax tree before the machine ever executes anything.
This evolution occurred under the pressure of having to increase the number of people who could effectively develop reliable software. Humans need to undergo gruelling training if they are to understand obtuse binary or assembly code. However, if we were to propose that the code be written using natural language (as in English), suddenly the number of people who could join the profession explodes.
Today, we use programming languages that we write as plain text. Of course, computers do not understand natural languages, just as they do not possess common sense. Because of that, the source code that developers write as text must be translated into a set of instructions that computers can execute. Those instructions are delivered in the form of binary code.
OK, so where’s the problem? Well, the problem is in the unavoidable latency. When we manipulate any material, we do it in real time. If, for example, I’m cutting a piece of wood with the intention of using it to build the body of a guitar, every move I make while cutting provides me with immediate feedback: am I pushing too hard, is the wood starting to crack, and so on. There is no latency while I work. That way, I avoid the risk of making mistakes as I go.
That immediate feedback is sorely missing during the process of developing software. I can make any changes to the source code and the underlying machinery will remain unfazed. My changes don’t even reach that machinery – they are still isolated somewhere in the editor’s memory. I will get no pushback, no feedback, if I make a bad decision and modify the code in a way that will later produce unwanted behaviour.
Well, that’s not good. You see, code-as-text is a problem. Software developers can easily spend hours making changes to the code without receiving even the slightest feedback from the underlying machinery. Because of that, we often see that software development takes much longer than originally anticipated. That happens because teams learn about the unwanted outcomes of their changes much later in the process. The in-between time (the time from the moment teams make changes to the code until the time it becomes obvious that the changes were incorrect) is pure waste. Such waste has very demoralizing effects on software development teams.
We would like to be in the position where a change to the code gets verified by the machine that is supposed to be running the code. So long as the change to the code remains inside the isolated working memory of the source code editor, the underlying machinery remains unaffected.
But how do we get there? Typically, we observe developers soliciting feedback from the machinery by doing the heavy-lifting chores described in Part 1 (if you recall, we discussed 11 discrete steps developers tend to perform manually). Besides being wasteful and error prone, those steps are also too slow to be considered useful feedback. We must find a way of obtaining that feedback automatically. The speed of feedback truly matters in such situations. When developing software, it is good to hit our stride and get into the flow. And if we rely on soliciting feedback manually, the flow gets constantly interrupted and the quality of work suffers.
A better way to create that flow is to automate the software development process. As we’ve already seen in Part 1, we already know how to do that – we create an executable expectation (a test). That expectation gets triggered when the code gets changed and the system gives automatic feedback – did the expectation fail, or did the change to the existing code fulfill the expectation?
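As a minimal sketch of such an executable expectation (the function and its behaviour are hypothetical, chosen only for illustration), a test is nothing more than code that asserts on the behaviour we expect, and a test runner can trigger it every time the code changes:

```python
# Hypothetical example: the production code under test...
def is_leap_year(year):
    """Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# ...and the executable expectation. A test runner (pytest, for
# instance) can re-run this automatically on every change to the
# code, giving feedback the moment the code is modified.
def test_is_leap_year():
    assert is_leap_year(2024) is True
    assert is_leap_year(1900) is False  # century years are not leap years...
    assert is_leap_year(2000) is True   # ...unless divisible by 400

test_is_leap_year()  # raises AssertionError the moment a change breaks the expectation
```

The moment a change to `is_leap_year` violates the expectation, the test fails, which is exactly the automatic feedback the manual chores were trying to provide.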
Given that developers make their changes to text, and that an intermediate layer (a compiler or an interpreter) needs some time to transform that text into binary code, feedback can never be truly immediate. Nevertheless, the more we shorten the latency between our action and seeing its outcome executed by the machines, the better our development flow will get.
Depending on the way developers prefer to set up their work environments, there is still a possibility that the time from the moment we make a change to the code till the moment that change gets executed by the machines is quite long. Such delayed feedback is definitely not good for establishing the flow. The longer we wait from the moment we make the change ’til the moment we see whether it works as expected, the harder it gets to understand whether we’re going in the right direction or whether we’ve broken something.
Because of that latency, teams who practice TDD quickly learn that it is important to forgo large batches and to instead focus on small batches. When bidding on a small batch, we increase our chances that the change needed to satisfy the batch expectation will be small too. A small change usually implies a short length of time needed to execute it. A short time elapsing from the point when the change was made ’til the point when that change is executed means quick feedback. Quick feedback in turn establishes smooth, satisfying flow.
The takeaway from this quick analysis is that the best way to shorten the feedback latency is to bid on small batches.
One of the reasons TDD seems sluggish in its adoption rate lies in its obsession with small steps. Traditionally, the software development profession is viewed as an area reserved for people who are well trained in stringent formal thinking and can ‘converse’ with computers by typing reams of code. A prominent indicator of such a skillset used to be the ability of those advanced developers to spend long hours churning out code without ever stopping to check whether it works. Which means large batches tended to be (and largely still are) a hallmark of advanced skills in software development.
TDD turns that criterion on its head. It values small steps (actually, more like tiny steps). The smaller the better. That orientation in turn makes teams who practice TDD look like a bunch of novices. Naturally, that sentiment causes some seasoned software developers to look at TDD with condescension.
One of the most advanced software developers I’ve had the pleasure to follow is Ron Jeffries, one of the three founders of the Extreme Programming (XP) methodology. He famously offered the following advice: “The trick is never to let the code not be working.”
Very significant, and above all, very practical advice. If we are to follow it, we couldn’t find a better way to do so than the TDD process.
We have looked into the advantages of describing the expected behaviour from the calling client perspective. It makes more sense to cater to the client’s needs instead of struggling to design a one-size-fits-all interface. By doing that, we are able to quickly craft a failing test.
The usefulness of the fake-it-till-you-make-it approach is in enabling us to establish quick feedback loops. Instead of leaving the code “not working” while we go into some kind of ‘navel gazing’ session, striving to come up with the most perfect implementation of the functionality that will satisfy the expectation, we pre-empt the situation by simply making the tests pass. We do it by crafting a skeleton of the code we wish to produce that returns a hardcoded value.
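A minimal sketch of that skeleton, using a hypothetical `greet` example: the hardcoded return value is just enough to turn a failing expectation green, deliberately deferring the real implementation:

```python
# Fake it till you make it (hypothetical example): the skeleton
# returns a hardcoded value so the expectation passes right away.
def greet(name):
    return "Hello, Ada!"  # hardcoded; the honest implementation comes later

def test_greet_addresses_the_user_by_name():
    assert greet("Ada") == "Hello, Ada!"

test_greet_addresses_the_user_by_name()  # passes: the code is never left "not working"
```

The hardcoded value is obviously wrong in general, and that is fine: the next expectation (say, greeting a different name) will fail and force the genuine implementation, one small step at a time.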
That way, we are fighting the innate sluggishness of a system based on code-as-text. It should be obvious by now that TDD is a discipline that leans strongly toward a bias for action. Rather than equivocating over what to do next and how to do it, when practicing TDD we just proceed in small, safe steps and grow the solution in a steady, unperturbed fashion.
If there is one trait of TDD that I’d call its strongest suit, it would be increasing the frequency of feedback. The reasons why I am so bullish about that aspect have already been discussed above; I will nevertheless repeat: code-as-text is a big hurdle to creating complex and correct software, no matter how we look at it. TDD seems to be the only currently available method that addresses this blind spot in the software development discipline.
OK, but what about the code structure?
So far, we have only been looking into the activities that are aimed at making the code behave the way we expect it to. But if you’d recall, we’ve also mentioned the other side of the equation: making the code structure meet our expectations. What are our expectations with regards to the code structure? It’s obvious – we want our code to be well structured. That’s of course easy to say, but what do we mean by ‘well structured’ code?
One school of thought claims that the best yardstick with which to measure the quality of the code structure is the ease of change. How easy is it to change the code? Obviously, according to that criterion, if the code is easy to change, it is of good quality.
I find that yardstick too vague. To begin with, software code is pretty much always super easy to change. That’s why it’s called software (it contains the word ‘soft’). So, I’d like to propose another yardstick with which to measure the quality of code structure: code is well structured if it’s easy to improve.
Some will object that I’m splitting hairs, but in my defense, I’d like to say that while it’s easy to change the code structure, such a change may not always result in an improvement in the code structure. If we were to agree on that new criterion, how do we measure the improvement? Code is well structured if it is easy to scan, easy to reason about, and easy to upgrade without creating new issues and defects.
The beauty of TDD is that once it helps us create the situation where the code is never left “not working”, it frees us to completely focus on making sure the code is structured properly. That focus is what we call the refactoring phase.
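As a sketch of what that refactoring phase can look like (the functions and the tax rate are hypothetical), the passing test pins the behaviour down, which lets us reshape the structure, for example by extracting a helper, without fear:

```python
# Hypothetical refactoring example: the test at the bottom stayed
# green before and after the tax arithmetic was extracted into a
# helper -- the behaviour is pinned down, only the structure moved.
def price_with_tax(price):
    """Apply a flat 10% tax (illustrative rate)."""
    return round(price * 1.10, 2)

def total_price(items):
    # After the refactor: the inline tax arithmetic now lives in
    # price_with_tax, leaving this function easy to scan.
    return sum(price_with_tax(p) for p in items)

def test_total_price():
    assert total_price([100.00, 50.00]) == 165.00

test_total_price()  # still green, so the refactor preserved behaviour
```

Because the test keeps confirming the behaviour after every small structural change, we can improve the code’s structure as aggressively as we like while it is never left “not working”.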
Some experts would argue that the entire purpose of doing TDD is to enable elegant refactoring. I think it may be a bit of an extreme position, but in some ways, it does make sense. One of the main reasons we strive to make the code well structured is to enable us to continue with our TDD practice. It’s sort of similar to how one of the main reasons for going to the gym and working out is to enable ourselves to continue working out and staying in good shape.
The practice of intense and smooth refactoring deserves another article. Tune in for Part 3, in which we will do some coding exercises focused on refactoring that fosters a high degree of testability.