Red Gate Software might call it “relentless testing” but David Atkinson, the company’s test manager, just calls it doing a job.
“Relentless testing doesn’t describe any kind of testing we actually do,” he says. “We do regression testing and many other types of testing, but nothing called relentless testing.”
It seems perfectly natural that Atkinson would want a precise job definition. After all, his job is to ensure the accuracy and efficiency of Red Gate software. In many ways, he and his fellow testers are proxies for the customer, ensuring as much as possible that the software works the way it is intended before it reaches the market, and striving for continuous improvement throughout the product lifecycle.
Atkinson, 32, joined Red Gate in 2005. Before that he spent five years as a testing engineer and manager at an on-demand access software company with more than a billion dollars in revenue and 22 offices worldwide. He started his career as a quality assurance engineer with a small software development group. Atkinson has degrees in computer science from Cambridge University and chemistry with European studies from Sussex University.
In a recent conversation, Atkinson provided an insider’s view into Red Gate’s testing approach.
BC: You’ve been in large companies and small, and seen a range of testing environments. Is there anything unique about testing at Red Gate?
DA: We have a good tester-to-developer ratio, with two testers for every three developers. The usual ratio is about one tester for every three or four developers. We work in small teams, and ensure that new testers for a particular product are teamed with long-serving testers.
We run automated regression tests overnight so we know as early as possible whether any recent changes have broken the build. The automated tests are faster than manual ones, and we don’t have to wait until the next test phase before finding out if there are problems with a new build.
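A nightly regression run like the one Atkinson describes is typically wired to a scheduler. A minimal sketch, assuming a Unix-style build server with cron (the script path and log location here are hypothetical, not Red Gate's actual setup):

```shell
# Hypothetical crontab entry: run the full regression suite at 2 a.m.
# every night and append results to a log the team checks each morning.
# Both paths are illustrative placeholders.
0 2 * * * /opt/build/run_regression_tests.sh >> /var/log/nightly_tests.log 2>&1
```

The point of the overnight schedule is that the slow, broad suite runs while nobody is waiting on it, so the team arrives to a fresh pass/fail signal on yesterday's changes.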
A luxury we have at Red Gate is products with a clean architecture. Testability is built into products, either by using public APIs or including test APIs. Testing at the API level puts us closer to where problems occur, and helps ensure that problems aren’t masked by the UI.
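The idea of testing at the API level rather than through the UI can be sketched as follows. This is a hypothetical illustration in Python, not Red Gate's actual code: the `SchemaComparer` class stands in for a product engine exposed through a public API.

```python
# Hypothetical example of API-level testing: the test drives the engine's
# public API directly instead of simulating clicks in a UI, so a failure
# points at the engine itself rather than being masked by the UI layer.

class SchemaComparer:
    """Stand-in for a product engine with a public, testable API."""

    def compare(self, source, target):
        # Report columns present in the source schema but missing
        # from the target, in a stable sorted order.
        return sorted(set(source) - set(target))


def test_compare_reports_missing_columns():
    comparer = SchemaComparer()
    diff = comparer.compare(source={"id", "name", "email"},
                            target={"id", "name"})
    assert diff == ["email"]


test_compare_reports_missing_columns()
print("API-level test passed")
```

Because the assertion runs against the engine's own return value, a regression here cannot be hidden by presentation-layer code, which is the masking problem Atkinson describes.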
Finally, there is the thoroughness of our testing. We even test the product documentation, walking through the steps as if we are novices and looking at screen shots to see if they differ from what is referenced in the text. Developers are not as good at picking up these kinds of details.
BC: What automation tools do you use?
DA: Most of our API automation is written in C# using NUnit, a unit-testing framework for .NET languages. Unit-testing frameworks such as NUnit, which ironically was conceived as a developer testing aid, are slowly becoming the holy grail of functional test automation because of their simplicity and superior maintainability. They can’t be used this way, of course, unless testability is built into the application under test.
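NUnit itself targets .NET, but the xUnit-style structure Atkinson is describing – test fixtures, assertions, and automatic test discovery – exists in most languages. A minimal analogous example using Python’s built-in unittest module (the function under test is purely illustrative, not from a Red Gate product):

```python
import unittest


# A tiny stand-in for application code under test (illustrative only).
def normalize_name(name):
    """Collapse runs of whitespace and title-case the result."""
    return " ".join(name.split()).title()


class NormalizeNameTests(unittest.TestCase):
    # Each test_* method is discovered and run automatically,
    # much as NUnit discovers methods marked with [Test].
    def test_collapses_whitespace(self):
        self.assertEqual(normalize_name("  david   atkinson "),
                         "David Atkinson")

    def test_empty_string(self):
        self.assertEqual(normalize_name(""), "")


if __name__ == "__main__":
    unittest.main(exit=False)
```

Suites like this are what make overnight regression runs cheap to maintain: each test is small, independent, and reports its own pass/fail result.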
We automate to ensure reliable and broad regression testing. Automated tests don’t often find new bugs, although the process of creating them always does.
There are always things that cannot be automated. UI testing, for example, is hard to automate effectively, so we run those tests manually.
BC: How do you find the best people for the testing job?
DA: We do multiple rounds of screening: first by human resources, then by phone, then face-to-face interviews in which we use practical tests to assess candidates’ testing aptitude. We have written an application intentionally riddled with functional bugs and aesthetic and usability issues. The applicants get 15 minutes to find as many of the issues as possible. They also have to describe the bugs, which can be revealing.
We want versatility – the ability to move from product to product, which often requires getting outside one’s comfort zone. It’s hard for testers to find problems in a product they know too well. If we move them on to a new product, assuming they cope with the learning curve, they will identify new issues that might have been missed by the previous tester, who could have become numb to many of the product’s quirks.
BC: How do you find people who want to be testers?
DA: We try to tap into the pool of computer science graduates, most of whom aspire to become developers. Some will be suited to testing roles, but many will try to get testing jobs as a stepping stone to a career as a developer. We don’t want people who aspire to be developers; we want people who see their future in testing.
Once people get into testing, they tend to find it challenging and rewarding.
BC: What do you see as the rewards?
DA: At Red Gate, it is being part of the process of building high-quality software that hundreds of thousands of professionals use. Because of the relatively short project cycles, we get to be involved in two or three product releases a year, which is unusual – there’s nothing worse than working on something for months or years only to see it canned. That doesn’t happen here. We usually see the fruits of our labor quite quickly – within three or four months.
Unlike some companies where only developers are celebrated, we receive credit for what we do here. There’s a nice transparency about the company: In our work area, there is a large display of updated sales figures and download statistics. I don’t know why other companies don’t share that information. It’s nice to see that we are directly contributing to the company’s growth.
BC: What attributes make a good tester?
DA: Attention to detail, thoroughness, tenacity, and the ability to communicate and work in a team structure. We look for ambition – people who are willing to push themselves and continuously progress. They should be willing to adapt to the latest testing ideas. API-level testing with frameworks such as NUnit, for example, barely existed five years ago.
BC: How do you get along with developers? It would seem to be a naturally adversarial relationship, since you are basically finding their mistakes.
DA: There aren’t too many issues – most of the time developers appreciate our work to make software better. We don’t have a problem here with testers being scapegoats for software problems. You need to be diplomatic and polite, and know who can take what.
BC: Can you summarize the importance of good testing?
DA: We develop products that manage crucial data for our customers. You can’t cut corners with these kinds of products. The ability of Red Gate products to save time and simplify processes is closely linked to the customer’s business efficiency and the quality of work life for their DBAs and developers. We always keep that in mind, no matter how tedious the work gets, or how much pressure we are under to get the product out the door.
For more on testing, see:
“Driving Up Software Quality – The Role of the Tester,” by Helen Joyce, software tester, from the Simple-Talk web site.
“Beyond Sensible,” a joint interview with David Atkinson, test manager, and James Moore, software developer, on the Red Gate web site.