Behaviour Driven Development Part 3 – Benefits of using BDD Programming Methodology

Behaviour Driven Development requires the use of a ubiquitous language to state the requirements for a software project clearly. As Sunil Pardasani explains, this clarity can bring substantial benefits to your organisation, reducing misunderstandings and competing visions, and setting clear goals that are tied directly to the development process.

In the first article of the series, we introduced Behaviour Driven Development and showed how ubiquitous language is used during requirements analysis. We contrasted it with Test Driven Development and briefly touched on some benefits of using ubiquitous language. In the second article, we worked through a practical code example to demonstrate how ubiquitous language scenarios can be used to generate and carry out tests in C#, using the SpecFlow tool. The purpose of this article is to show how the BDD programming methodology can make the entire development process more efficient, less time-consuming and better at utilising resources.

Let us explore the key benefits of using the BDD approach in our Coding Club System project case study.

Granularity of requirements

A big benefit of BDD is that it encourages and helps with the creation of a comprehensive set of fine-grained requirements, and that it generates a lot of discussion about the specific requirements from the very beginning. On his website, Dan North described BDD as ‘a way to describe the requirement such that everyone – the business folks, the analyst, the developer and the tester – have a common understanding of the scope of the work. From this they can agree a common definiton [sic] of “done”, and we escape the dual gumption traps of “that’s not what I asked for” or “I forgot to tell you about this other thing”.’

In my experience, a lack of clarity in the requirements can cause a lot of confusion amongst the stakeholders. The client may not be able to express clearly the functionality they want in the software application. The programmers will make assumptions about the requirements. The test analysts will make even more assumptions when writing the test scripts, leading to further misunderstandings during User Acceptance Testing (UAT).

I will explain what I mean with an example. Say, in our Coding Club System project, we have a requirement that allows the administrator of the system to modify the existing profile of a member in order to update information. Taking that requirement at face value leaves a lot of questions unanswered. For example:

  • How should the administrator be able to get to a stage where they can edit profile information? Would that be using an Edit button? How would they get to it?
  • Would there be a need to have an Edit button next to each field? Or would all fields be editable once the main Edit button is clicked?
  • Once the information is updated, how is the user notified? Would it be through an automated email? What event will cause the email to be sent?

As we can see, it would be better to discuss the above questions during the requirements gathering process rather than during or after the development process.

If, instead, we had made our requirements more granular using Dan North’s template (which we discussed in the first article of this series), the requirements would have been agreed right from the beginning.
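As an illustration, the edit-profile requirement could be broken down into a scenario along the following lines. The exact wording is hypothetical – it is a sketch of how such a requirement might look once it has been agreed between the stakeholders, not a scenario from the actual case study code:

```gherkin
Feature: Administrator edits a member profile

Scenario: Administrator updates a member's email address
  Given I am logged in as an administrator
  And I am viewing the profile of an existing member
  When I click the Edit button
  And I change the Email field
  And I click the Update button
  Then the profile is saved with the new email address
  And a notification email is sent to the member
```

Each step answers one of the questions listed above: there is a single Edit button for the whole profile, and the notification email is triggered by the Update button.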

These kinds of granular requirements would be written after much discussion between the various stakeholders, and would become the acceptance criteria for the functional requirements of the system. As Dan North said, when it comes to the functionality that is going to be developed, the stakeholders ‘… can agree a common definiton [sic] of “done”…’

Another BDD principle is that test methods should be sentences; we demonstrated this with an example in the previous article. When the test methods are sentences taken from the requirements, the programmer stays focused on the specific functionality that needs to be developed for that requirement, knowing that this is exactly what was agreed with the client.
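To recap how this looks in practice, here is a sketch of a SpecFlow step-definition class whose method names are sentences lifted from a scenario. The step text and the helpers (`_profilePage`, `_emailSink`, `_memberEmail`) are hypothetical and their declarations are omitted – this is an illustration of the naming principle, not code from the case study:

```csharp
using TechTalk.SpecFlow;

[Binding]
public class MemberProfileSteps
{
    // Each method name is a sentence taken from the scenario, so the
    // test reads exactly like the requirement agreed with the client.
    [When(@"I click the Update button")]
    public void WhenIClickTheUpdateButton()
    {
        _profilePage.ClickUpdate();   // _profilePage: hypothetical page helper
    }

    [Then(@"a notification email is sent to the member")]
    public void ThenANotificationEmailIsSentToTheMember()
    {
        // _emailSink: hypothetical test double that captures outgoing mail
        Assert.IsTrue(_emailSink.HasMailFor(_memberEmail));
    }
}
```

When a step fails, the failure message contains the sentence itself, so even a non-technical stakeholder can see which agreed behaviour is broken.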

Saving time and resources

Development teams who first start using BDD and the ubiquitous language may find that writing the requirements in the ubiquitous language takes a lot of time, especially when the requirements bounce back and forth between the stakeholders.

I am not convinced by the suggestion that this is a waste of time and that it would be better to let the programmers get started on the project straight away. In my opinion, this is not always true, because not having crystal-clear requirements can end up costing far more time and resources (and sometimes frustration) during the testing and UAT phases, when the end product is not what the client had in mind. Using ubiquitous language almost “forces” all the stakeholders to be on the same page as far as the requirements, and the interpretation of those requirements, are concerned.

Let us use a couple of illustrations from our Coding Club System case study to show this. We might have a requirement that, once the administrator has updated information in a field, the user should get an email notification. Picture a situation where a programmer reads that requirement and builds functionality based on their own interpretation of what it means: the programmer writes code so that after the administrator has made the change and clicked the Update button, an email is sent. Meanwhile, the test analyst looks at the same requirement and writes tests assuming there would be both a Save button and an Update button. This would cause quite a bit of confusion during UAT, and significant changes would have to be made; often this would mean re-developing key areas of functionality.
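A granular scenario would have settled the question of which buttons exist, and exactly when the email is sent, before any code or test scripts were written. The wording below is hypothetical, to show the level of detail such a scenario pins down:

```gherkin
Scenario: Email notification is triggered by the Update button
  Given I am logged in as an administrator
  And I have changed the Phone field on a member's profile
  When I click the Update button
  Then the change is saved to the member's profile
  And the member receives an email notification of the change
```

With this agreed, the programmer and the test analyst are working from the same definition: one Update button, and the email fires when it is clicked.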

All these late-stage iterations could have been avoided if the stakeholders had invested more time at the beginning to discuss and agree on a set of granular requirements, written in ubiquitous language with Dan North’s template, which we described in detail in the first article of the series. This would have cleared up misunderstandings and prevented miscommunication later in the development process. This is not to say that misunderstandings wouldn’t still occur, but I feel that following this process minimizes misunderstandings between the stakeholders.

The granular requirements will be used during discussions between developers, business analysts and the client. The requirements will be refined and all stakeholders will have much more clarity. Discussions about what buttons need to be on the user interface, for instance, would be had right at the beginning, and the requirement would stay constant throughout the development lifecycle.

Less costly feature development

Behaviour Driven Development, as stated by Dan North, focuses on the “behaviour” of the system. Since the acceptance criteria are executable (as we showed in the last article), this can save time because there will be less confusion. Even though requirements written in ubiquitous language can still be misinterpreted, such misunderstandings will occur far less often, and be less significant, than with less granular requirements.

Since the functional requirements will stay consistent, the amount of rework required is reduced, which can lower the overall cost of a project or free up the recovered time to improve quality through additional testing. Some examples of this could be:

  • We could carry out more performance testing and test the speed of the application. For example, once the member profile in the Coding Club system is updated, how long does it take for the user to receive an email?
  • There could be more usability testing, where the testing team could determine how intuitive the application is. For example, when a new administrator logs into the system, is the Edit button intuitive, i.e. is it obvious that this is the button they need to click to edit a profile? Would it be better if hints were added for first-time users?
  • The testing team could carry out additional GUI testing, which can be part of exploratory testing and make suggestions to improve the look and feel of the GUI. For example, there could be discussions about the colours of the buttons, the background display, the format of the email that goes out to subscribers (possibility of giving subscribers choice between receiving HTML or text emails?).
  • Additional load testing could be carried out to determine how the system would cope when there are several users trying to make changes.

Let us look again at the requirement we mentioned earlier in this article: to allow the administrator of the system to modify the existing profile of a member in order to update information. From my experience in industry, this is the type of requirement that is very likely to be misunderstood if the requirements for the functionality are not granular enough.

In the traditional development process, the requirements are gathered first; the project then goes through the design phase, followed by development, testing and UAT. Let us say the requirements have been gathered, and the programmer reads the requirement to update the profile of a user and concludes that a Save button should be added to allow the administrator to do so. The code release is carried out and the test analyst considers the tests to be successful. When it goes through UAT, the client could turn around and say they wanted three buttons – Save, Preview and Update – with the email being triggered after the Update button is clicked. This would then go back to the programmer, who would write the code for it, and the whole thing would go through the entire cycle again. Once this is complete, the client could once again turn around and say they would like a pop-up asking the user to confirm that they want to save the information when the Save button is clicked.
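Had the stakeholders instead agreed a scenario like the following at the start, the three buttons, the confirmation pop-up and the email trigger would all have been pinned down before any code was written (again, the wording is hypothetical):

```gherkin
Scenario: Administrator saves, previews and publishes a profile change
  Given I am logged in as an administrator
  And I have edited a member's profile
  When I click the Save button
  Then I am asked to confirm that I want to save the information
  When I confirm and then click the Preview button
  Then I can see the profile as the member will see it
  When I click the Update button
  Then the change is published
  And an email notification is sent to the member
```

Every round of "that's not what I asked for" above corresponds to a line of this scenario that was never discussed.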

In an article back in 2003, The Economist quoted the then Chief Technology Officer of Klocwork, who said: “…a bug which costs $1 to fix on the programmer’s desktop costs $100 to fix once it is incorporated into a complete program, and many thousands of dollars if it is identified only after the software has been deployed in the field.”

As software development progresses through the various phases, the cost of issues increases. This is illustrated in the graph below:

[Figure: the cost of fixing a defect rises by roughly an order of magnitude at each successive phase of development]

(Image Courtesy: http://istqbexamcertification.com/what-is-the-cost-of-defects-in-software-testing/)

The graph above indicates that every time the development process moves to a new phase, the cost of fixing a defect increases roughly tenfold. For example, suppose a programmer develops an application and it goes into the testing phase, and the tester finds that it does not meet the requirements (even if the programmer used Test Driven Development, only the programmer could see the tests, not the test analyst). The project goes back into development, and a $1,000 fix could turn into a $10,000 one – and that is assuming it happens only once. Furthermore, note that by the time the project is in ‘Test’ or ‘Live use’, the cost of fixing a problem has increased exponentially.

Not only does the project end up costing more, but in my real-world experience, this also affects the morale of the team and leads to individuals getting frustrated and demotivated.

It could be argued that, even though requirements are much clearer in BDD, using the methodology will not always prevent these situations. BDD is not perfect, and following it can still result in requirements that are unclear or confusing. However, in my opinion, if BDD is followed, it minimizes the occurrence of these types of situations. Here is what I mean: if we use granular requirements and all stakeholders are on the same page and agree on how the final application should behave, then it becomes unlikely that entire key areas of functionality will need to be re-developed because of misunderstandings. Yes, the tester will still find bugs, the developer will need to fix them, and there will be multiple code releases. But while re-developing functionality because of a misunderstood requirement can take days, fixing a bug may only take an hour or so.

Summary

The goal of this series was to give an overview of Behaviour Driven Development. We introduced BDD and ubiquitous language, walked through a practical example using SpecFlow, and then summarized the benefits of the approach in this article. Readers who wish to learn more are encouraged to read the articles on Dan North’s website (http://dannorth.net) and the book Domain-Driven Design by Eric Evans. The book goes into a lot of depth about the issues that arise when requirements aren’t clear; the author calls it the “linguistic divide” between the various stakeholders, which in my opinion is a very accurate description.

References

http://dannorth.net/introducing-bdd/

http://dannorth.net/whats-in-a-story/

http://istqbexamcertification.com/what-is-the-cost-of-defects-in-software-testing/

http://istqbexamcertification.com/what-is-defect-or-bugs-or-faults-in-software-testing/

http://www.lkpgroup.com/Cost%20of%20Software%20Defects.pdf

http://www.economist.com/node/1841081