Issues with a detailed test description

In the Exchange Server Archiver team we recently had a series of discussions among our testers to reach a common understanding about the form and level of detail of a test case. Interestingly, the first issue we hit was the terminology. What exactly are a test suite, a test set, a test case, a test procedure and a test step? For me, a test suite is a set of test cases for one particular feature of the software. A test set is a set of suites, covering multiple features. A test procedure is the same thing as a test case: a sequential description of prerequisites, test steps and tidy-up. This also means that I prefer to write test cases and steps at a certain level of detail, where "certain level" means that all possible actions are described together with their expected behaviour.
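To make that terminology concrete, here is a minimal sketch of how I picture the hierarchy. The class and field names are only my own illustration, not taken from any tool we actually use.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestStep:
    action: str    # what the tester does
    expected: str  # the behaviour we expect to observe

@dataclass
class TestCase:
    """Equivalent to a 'test procedure': prerequisites, steps, tidy-up."""
    name: str
    prerequisites: List[str]
    steps: List[TestStep]
    tidy_up: List[str]

@dataclass
class TestSuite:
    """All the cases for one particular feature."""
    feature: str
    cases: List[TestCase] = field(default_factory=list)

@dataclass
class TestSet:
    """A set of suites, covering multiple features."""
    name: str
    suites: List[TestSuite] = field(default_factory=list)
```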

This unfortunately raises a couple of questions. Is it really worth putting so much effort into describing a particular feature while it is still under development? It is highly likely that components will change before release, and these detailed cases will have to be rewritten every time they do. If the tester who creates the suite forgets to mention a couple of combinations to test, will these ever be added? Wouldn't such a detailed description lead to thoughtless, sequential execution, where bugs are simply overlooked by the bored tester? Wouldn't exploratory testing just be better?

In my opinion, these questions actually point to problems in other parts of the development chain, not in the testing itself. A lack of requirement or feature documentation can lead to rewriting the tests over and over again, since the developers themselves often don't have a clear idea of what a feature will look like. The feature sort of shapes itself along the way, and even at that point it is still not documented; the knowledge stays in people's heads. Testers are not involved in discussions about the changes, so instead of receiving small changes step by step, they end up being handed something completely new.

A lack of test suite reviews can easily result in a single point of view: combinations and steps get overlooked and might never be added later. However, asking another tester to run through the cases briefly and give an opinion can be a huge help.

It is unfortunately possible that the tester who runs the cases will get bored and won't pay attention to other bugs that occur during testing; only the described steps will be run, and nothing else. In this respect exploratory testing is probably better, but it has its own problem: it is hard to repeat the same tests and hard to be certain that a feature has been thoroughly tested. Why not combine the two, then? Do an exploratory test first: play with the component and prove that it does what it should and doesn't do what it shouldn't, based on some rough, high-level cases written from the feature requirements. Once done, write a step-by-step instruction set of what to check and how, concentrating on the areas where issues were found. I have found this approach to be very successful.
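As a rough illustration of that second phase, reusing the sketch from above: once the exploratory pass is done, the notes around an issue that was found can be turned into explicit steps with their expected behaviour. The page and the values below are made up for the example.

```python
# Hypothetical example: turning exploratory findings into a scripted case.
connection_page_case = TestCase(
    name="Connection page rejects invalid credentials",
    prerequisites=["Product installed", "A test server is reachable"],
    steps=[
        TestStep(
            action="Enter a valid server name and a wrong password, then click Next",
            expected="An error message is shown and the wizard stays on the same page",
        ),
        TestStep(
            action="Correct the password and click Next again",
            expected="The wizard moves on to the next page",
        ),
    ],
    tidy_up=["Close the wizard without saving the configuration"],
)
```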

The configuration wizard was redesigned recently, and I was given the task of rewriting the existing cases for it, since I was the one who wrote them originally. I looked at the old cases and the new designs and decided that I would simply have to throw away three days of work, because the old descriptions were completely useless. I started to test the wizard page by page, and whenever I ran out of evil ideas, I created a case based on what I had done. When I thought it was ready, I asked a fellow tester to review it. He gave me a bunch of other ideas to try, and shortly after that the new test suite was ready (it grew to twice its size after the review). It took me a significant amount of time to write everything down, but this way it can be reviewed and reproduced at will. I can only hope that I won't have to rewrite it again because of some design change.