Who Tests the Tester?

It is scarcely surprising that it can take up to five years to release a new version of SQL Server when one understands the extent of the effort required to test it. When enterprises depend on the reliability of an application or tool such as SQL Backup, the contribution of the tester is of paramount importance. It is an interesting and enjoyable role as well, as Andrew Clarke found out by chatting to testers at Red Gate.

A decade or so ago, it was my experience that developers had a more rewarding role in the application development process than other team members such as testers, DBAs and designers. Testing, for example, was once seen as a less creative role, with less prestige. However, things seem to have changed over the past few years, to the point that testers in general have gained more influence within their teams and are more able to lead the way in how teams design applications. They also seem to have more freedom to learn interesting technologies and methodologies, and to gain useful skills.

To find out more I met up with Robin Anderson, a tester on SQL Backup and SQL Response, and Helen Joyce, the Head of Test at Red Gate, to ask them why they chose to become testers, and why they enjoy it so much. I was fascinated to discover whether they had any lingering envy of the developer’s role, or whether they were happy with the career paths that their roles as testers offered them. I was surprised to find out how much they were able to use programming skills, do advanced work with cool technologies such as virtualisation, and play a lead role in the management of development projects.

Most of all, I shall cherish one of the reasons they gave for their state of contentment. They could write code without having to submit it to testers and get bombarded with bug reports. I met Robin Anderson and Helen Joyce over a Dim Sum meal in Cambridge.

Andrew:
How did you come to be a test engineer?
Robin:
I was working as a graduate verification engineer; a verification engineer is the hardware equivalent of a software tester. The company that I worked for fired all their engineers in a great purge. Someone put an application form for Red Gate on my desk and said ‘look, why don’t you try applying here?’, so I had a chat with some of the employees there to find out whether I would like it or not. It sounded good, so I applied. I’d done electronic engineering and computer science, a structured combined four-year degree at Reading University.
Andrew:
So you don’t think of yourself as someone who has missed being a developer?  You actually prefer testing?
Robin:
I realise that specialisation is really what people need to do to become very valuable, but I always try to steer clear of too much specialisation. I really enjoy programming, and I certainly would enjoy being a developer, but I also enjoy most other aspects of computer science and IT, so what I do at Red Gate fits that ideally. I get involved in the development task when I’m testing, and everything else that testing entails. I also generally help out with solving customer issues where they’ve found bugs, so I do a bit of support as well.
Helen:
You do things such as the build systems as well, don’t you?
Robin:
Yes, one day I’ll have to bite the bullet and say ‘This is what I’m going to do, I’ll specialise now’, but until then, I really enjoy the variety of my job.
Helen:
We definitely have a wide range of responsibilities and skills as testers.
Andrew:
Do you think there’s a good career path for testers? One doesn’t often hear developers saying that they have a good career path.
Helen:
We certainly take career development seriously. At Red Gate, a lot of testers have taken professional testing qualifications at several levels, from foundation to practitioner. These aren’t purely theoretical; a lot of the testers at Red Gate actually enjoy those courses, and also take the Microsoft qualifications or other developer qualifications.

You acquire a lot of other valuable skills as a tester. To organise the testing of an application, you have to do a great deal of planning. You have to stand back and understand the whole project, then look down into the detail in order to plan an effective test strategy. As a result, testers make quite good project managers. They think about the big picture, but they’re also very technical. They’re used to understanding the whole architecture of a system, identifying where the risks are going to be and how they have to be managed. It’s easy to see why that lends itself to the skills of project management.

When testers find problems, they have to communicate those problems in a technical way that makes it easier for developers to then find the fault. As a tester, you’ve got to tactfully communicate why you think it’s important to have something fixed; obviously, developers don’t necessarily want to hear that, because they want to make progress and they don’t want to have to go back to code that they’ve signed off as complete. So testers learn to be tactful rather than outspoken, and become very conscious of teamwork and other people’s points of view. I don’t think it’s a coincidence that we’ve got so many project managers who have come from a testing background, and it isn’t just that they have good technical qualifications.

Andrew:
Yes, it’s an easier transition to a project-management role for testers, because they have had appropriate training and experience for the role, whereas developers don’t get the same opportunities.
Helen:
Yeah, but also we’ve got a lot of keen testers who want to make sure that they’ve got a career path working in a technical role as testers rather than managers.
Andrew:
As a technical career path?
Helen:
Yes, and it is something we want to encourage. We’re organising a conference in October, and we’re lining up speakers; a few of our team already speak at conferences. We tend to face cutting-edge demands in terms of testing, so we’re passing on our experience to other people through case studies.
Andrew:
So you’re doing quite a bit of cutting-edge testing with automated testing, and with introducing randomised testing?
Robin:
Well, yes. When you want to apply a particular stimulus to a program and then check the output, a lot of the test harnesses tend to be handcrafted. You try one combination after another, but then of course you’re at the mercy of your own assumptions about what sort of input to expect. So we use something equivalent to what the hardware industry calls ‘Constrained Random Stimulus’, in order to take away some of the human element of designing test cases, and with it the constraining expectations of the tester. When I do it, I ensure that all the inputs are random but, to make it more realistic, I’ll constrain the variance just a little. So, for example, if you have a number of command-line arguments for a particular program, you might randomise the combination of arguments that it will use, as well as their values, out of the total population of possible arguments and values. In the hardware industry it’s standard practice to assume that you haven’t really verified a component unless you’ve done this. With software testing it really does find bugs, by doing things you’d never have thought of.
Helen:
Particularly with lots of different commands and arguments, all with a large range of acceptable values.
Robin:
The permutations are often almost infinite, and you can’t test them all, which is why you have to use random testing. If you could exhaust them all, you wouldn’t use CRS; you would just test every combination. It’s when there are too many combinations to test that you would use something like this.
Helen:
Robin’s application randomly generates possible permutations, and some of these will be quite obscure combinations, but they’re entirely possible ones.
Robin:
Combinations definitely come up that wouldn’t have been obvious to a human.
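The constrained random approach Robin describes can be sketched in a few lines of Python. The option names and value ranges below are invented purely for illustration; they are not SQL Backup’s actual command-line arguments:

```python
import random

# Hypothetical command-line options and the constrained range of
# values each may take (none of these are real SQL Backup flags).
OPTIONS = {
    "--compression": [0, 1, 2, 3],    # discrete compression levels
    "--threads":     range(1, 33),    # 1..32 worker threads
    "--timeout":     range(0, 3601),  # seconds
    "--verify":      [None],          # boolean flag, takes no value
}

def random_invocation(rng):
    """Pick a random subset of options, each with a random in-range value."""
    chosen = rng.sample(sorted(OPTIONS), rng.randint(1, len(OPTIONS)))
    args = []
    for opt in chosen:
        value = rng.choice(list(OPTIONS[opt]))
        args.append(opt if value is None else f"{opt}={value}")
    return args

# A fixed seed keeps failing runs reproducible, which matters when a
# randomly generated invocation exposes a bug you need to replay.
rng = random.Random(42)
for _ in range(5):
    print(random_invocation(rng))
```

The constraint lives in the `OPTIONS` table: every generated invocation is random, but only within the declared ranges, so the harness explores unexpected combinations without producing inputs the program could never legitimately receive.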
Andrew:
The other thing I was going to ask about was how virtualisation has helped the test process. Because when you started there wasn’t much of it about, was there? Testers are really pioneering a lot of ways of exploiting virtualisation technology.
Helen:
We’ve definitely got a long way to go before we have a series of automated test cases that fire up different environments, but we already use a lot of different virtualised test environments, which can even be clustered environments spanning several machines. We can run the entire install of a build, for example, run a set of tests against it, and then bring up another virtual system.
Robin:
One of the projects I’m working on at the moment is for our monitoring and alerting tool, SQL Response. On a very powerful computer we can spawn off 50 machines and use them as monitored entities for SQL Response.  We can mess them around, completely screw them up and then at the end of the test just wipe them down again and we’ll have a clean install the next day.

I can set up a complete cluster; I’ve set up a few of those for SQL Backup testing. So I can just say ‘spawn me a cluster’, and it will be set up completely independently of the rest of the network, and I think ‘Great, if this goes horribly wrong, I can close that down and start a new one afresh’. I can even have more than one if I want, which you can’t do very easily with standalone VMware.

These can also be automated, so for SQL Response, we can spawn off a network of machines each night for our automated tests. This allows us to quickly identify the sort of complex problems we may otherwise only find late in the game.

Helen:
The other thing that might be interesting is the way that we approach manual testing. There’s a lot of talk about these ‘testing tours’ that James Whittaker wrote about in his book ‘Exploratory Software Testing: Tips, Tricks, Tours, and Techniques to Guide Test Design.’

The ideas have been around for quite a long time. But rather than drawing up the huge lists of test cases that take a long time to write and then become out of date very quickly, we’ve devised a manual test strategy around a ‘tour’. If you were going on holiday and you had one day in Paris, you would go and visit the Eiffel Tower and Notre Dame; you would do the key things. It’s a bit like that with this product. If you’ve got a new build of SQL Backup to evaluate, you go through the key landmarks and look at compression, encryption, doing a backup, doing a restore; but you might not necessarily be using e-mail alerts or setting up log shipping.

Andrew:
So you wouldn’t be following a rigid script?
Helen:
No, no. Rather than documenting lots of test cases, you might say ‘Right, I’m going to spend some of my testing time looking at the major landmarks for SQL Backup’, and then you can do a higher level…
Andrew:
But you’re using a sort of script are you?
Helen:
Well, you might have it at a high level.
Robin:
Yes, a high-level one. So it would describe the tour: these are the features of SQL Backup, and they should be explored. But it wouldn’t necessarily say perform a backup with exactly the same conditions and settings.
Helen:
Another example, which we do quite regularly during exploratory testing but less frequently during the development cycle, is running something like the ‘supermodel tour’. This is where you’re really looking at the usability and the UI, for the little aesthetic quirks. You’re looking at the software with that in mind, so you’re using it and touching on all aspects of the UI, but you really are looking for nuances, with behaviour, with…
Andrew:
Right, so you couldn’t really automate that if you wanted to then?
Helen:
That’s exactly it; this is much more the human element. Although there are quite a few of these, like the FedEx tour, which is about delivery between client and server: I put in a request and want something back. So we’ve started trying to document a lot of these tours.
Andrew:
So is the documentation just describing the various strategies you adopt, rather than providing a prescriptive list?
Helen:
I think at the moment we just take it as an approach, rather than saying ‘right, go away and do this, then that’, because we find an awful lot of bugs and problems just by running through exploratory tests. You see, the more different approaches you can bring to testing, the greater the variety of bugs you’ll find. For example, one strategy we use is a bug hunt: we ask people to go in, and we give a prize for whoever finds the most bugs, and another prize for the most interesting bug. It is amazing what bugs can come out of that fresh approach, even after the testers have done their work, the developers have been using the application, and it’s even gone out as a beta to customers. In a bug hunt, people go in with a different frame of mind and find a huge array of problems. They might be pasting War and Peace into a text box, or they might be trying to connect to a computer that they know is suspect or that they know has some data on it, and we always find an enormous number of bugs. If we then ask the participants what their strategy was for finding as many bugs as possible, some will have been looking at how the application communicates with the server; other people will be thinking ‘well, actually, I’m going to look at the UI, and I’m going to look for typos or menu issues’. They go in with that frame of mind, so as a tester, if you define those strategies up front, you can actually decide where your risks are.
Andrew:
So is this what Whittaker called the landmark tour?
Helen:
Yeah, exactly; he described a landmark tour and an intellectual tour. The intellectual tour is somebody trying to catch out the software. So an intellectual might… well, this is mainly like the American tourist tour, wasn’t it? They might ask questions of the tour guide to try and catch the guide out, asking harder and harder questions. As a software tester, you might take that approach: I’m going to ask this application harder and harder questions, to see if it can cope with some extreme edge-case conditions.
Andrew:
Yeah, I see, yeah, so rather than a tour it’s more of a strategy.
Helen:
A strategy, exactly, yeah. But I think a lot of people get turned off testing because they think they’re just going to be given a script of manual test cases, like ‘open the application, open the file menu, open a blank document’. They think they’re just given these inputs, with not much room for creativity; but actually, if you approach it as a strategy, the likelihood is you’ll find some really interesting bugs and really test out the application, without having to define very long, lengthy manual test scripts.
Andrew:
What I hadn’t realised is that in fact you get a lot of time to play with scripting and to try out new techniques, which developers seldom get a chance to do.
Helen:
No, and the testing work on SQL Response, the project Robin is on right now, is a really good example. The developers are under very tight schedules to implement the architecture exactly as it has been defined, so they’re much more rigidly confined, whereas Robin has been given the task of writing a manipulator that will simulate conditions on the server. He’s been doing something very technical, very interesting, and absolutely under his own control.
Andrew:
… without a bank of testers to irritate him?
Robin:
Total creative licence.
Andrew:
So who tests the tester?
Robin:
My colleagues, the other testers here who use the test applications that I write, generally give me feedback over the course of time, and I also occasionally grab people and say ‘what do you think of this?’. I do actively ask people for their opinions, but I’m not overly strict about testing the test application, because I know my colleagues who use my code will be forgiving, and they can give me effective feedback right away, since communication is face-to-face. It’s not like sending an e-mail to a company and waiting for them to get back to me.
Helen:
You see, that’s why you get to do some really fun things without all the mundane stuff developers get. Robin’s release cycle is an hour: time enough to do an update or add a new feature. It’s very rewarding work.