One Monday lunchtime not so long ago we ran a bug hunt for SQL Backup 5. During the hour-and-a-half session, 12 participants (all willing Red Gate employees) paired up to try to break the application, with the promise of wine and chocolates for the 'most interesting bug found'. This was by far the most fun testing activity we have run during the development of this next major release, and it took less than a day to organise. It also proved very useful, with some 'quirky' issues being raised which would have gone unnoticed using the regression test scripts and the other formal test practices.
“Working in teams of two, one participant usually drives the application and the other sits back and thinks about the big picture. The driver is in charge of pressing keys and navigating the application, and the back-seat driver is in charge of paying attention to the application as a whole and making recommendations about what to try. Often the back-seat driver is the one who notices when a bug occurs.”
Although we adapted the process (we didn’t fancy giving the teams a bell to ring when they found a bug, so we let them just holler), we took a lot of advice from James A. Whittaker on how to run the event. The intention remained as described: “The purpose of a hunt is not only to shake some good bugs out of a new build but also to foster teamwork and friendly, healthy competition among your testers.”
We had six teams who spent approximately 45 minutes 'playing' with the application and exploring the features. For the next 45 minutes we gave each pair a different feature on which to focus their testing, for example, the Back Up dialog, Restore dialog, Reporting or the Log Shipping dialog. Several teams came across known issues which we hadn’t had a chance to fix yet, but this was useful for making sure the priorities of the known bugs were correctly logged. Following the bug hunt I did find myself raising the priority of some known issues after seeing the context in which the bug hunters came across them. Deciding when to run the bug hunt is a tough call, though: if we left it until we’d fixed all the known issues, we’d be close to release and no longer in a position to make any potentially big design changes. We decided not to give out a list of known issues, as we didn’t want this to influence the testing.
I was pretty nervous, watching and listening to colleagues from different departments (developers, support engineers, sales and marketing) use an application I have been closely involved with. It’s been my (working) life for a number of months now, and it was a matter of pride that it didn’t fall over at the first sign of being ‘stressed’. It is amazingly revealing to watch people learn first-hand how to navigate around, and to hear their comments when they are stuck. Terminology we had taken for granted as being crystal clear was actually sometimes ambiguous. There was plenty of field validation testing, with thousands of characters being pasted into text boxes, and plenty of attention paid to how many clicks it takes to perform a task. With such attention to detail it didn’t take long for the feedback forms to be filled out. All extremely valuable information (assuming you can read the handwriting).
Given that the whole event took less than a day to organise and the only cost was the wine and chocolates (other than the time spent by the participants), the list of ideas for new features and the new bugs found made it a rewarding way to improve quality and check that the application is still on track for release. The bug hunters all got the chance to get up to speed with the product too, so it also served as the first round of product training. I’d recommend it as part of any software development project, and I’d like to hear any useful tips from anyone who’s run a bug hunt before.