Inside Red Gate – Testers

Developers might write good code, but no matter how good they are, the result will always contain bugs. It’s up to the testers in the team to make sure the final product is as bug-free as it can be.

Deciding what to test

Within a project there’s normally no official documentation produced, no official record of what the project should accomplish. The closest thing we get to documentation is the greenlight presentation slides (I’ll be covering the greenlight process in a later post). This means there’s no specification to validate against, no way of confirming that the project is ‘complete’. So how does a tester know what to test in the first place?

Within Red Gate, the same team normally stays on the same product for several minor and major releases (as an example, the same team has worked on SQL Source Control since its inception back in 2009). Everyone within the team has an intimate knowledge of what the tool does, what problems it solves, how customers use it, and what the main bugs and issues are. Testers are as much a part of the team as the developers or project manager.

This means testers simply don’t need a specification. They know how customers use the tool, what bugs they’re running into, how new features should interact with existing ones, and (roughly) how those features actually work. Testers are fully involved in the project from day one, which gives them deep knowledge of the application and its domain, so they know where and what to test to ensure the final product works.

How to test a product

As with other things at Red Gate, there’s no officially-mandated way of testing a product, and no documentation that has to be produced. It’s entirely up to the testers to decide how they want to test the product. Most of the time this involves writing automated unit and integration tests that run on the build server with each new build. For the user interface, this usually means manually exercising all the different parts of the UI to make sure it behaves sensibly whatever the user does (although some testers are experimenting with automated UI testing).
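To make that concrete, here’s a minimal sketch of the sort of automated test that might run on a build server with each build. Red Gate’s products are .NET, but purely for illustration this uses Python’s built-in unittest framework; the function under test (parse_version) and its behaviour are hypothetical, not taken from any real product.

```python
import unittest

# Hypothetical function under test -- a stand-in for any small unit of
# product code. Not taken from an actual Red Gate codebase.
def parse_version(text):
    """Parse a dotted version string like '1.2.3' into a tuple of ints."""
    parts = text.strip().split(".")
    return tuple(int(p) for p in parts)

class ParseVersionTests(unittest.TestCase):
    def test_parses_simple_version(self):
        self.assertEqual(parse_version("1.2.3"), (1, 2, 3))

    def test_ignores_surrounding_whitespace(self):
        # The kind of test added after a specific customer bug report:
        # once the bug is fixed, the test stays to stop it regressing.
        self.assertEqual(parse_version(" 2.0 \n"), (2, 0))

    def test_rejects_non_numeric_parts(self):
        with self.assertRaises(ValueError):
            parse_version("1.x")

if __name__ == "__main__":
    unittest.main()  # a build server would run this on every new build
```

The middle test illustrates the pattern described below: when a customer reports a bug, a test reproducing it gets written and fixed, then lives on in the suite so the bug can never silently return.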

As a product matures, it gains more and more tests: not just tests for new features, but tests covering specific bugs encountered by customers. SQL Compare, our oldest existing product, now has over 30,000 unit and integration tests. SmartAssembly (the product I’m working on), which Red Gate acquired a couple of years ago, has about 1,000 unit tests and counting; Jason’s writing more all the time.

At the end of the project, it’s up to the testers to give the final go-ahead to release the product to customers. They need to satisfy themselves, using whatever tools and methods they see fit, that the product is as bug-free as it can be and that the new features work as they should, on every configuration the product will be run on. Only then does the installer go up on the website, and only then do the online documentation and product web pages get updated with the new features.

Testers are an integral part of the project team, and only they decide when the product can be released. They act as Red Gate’s gatekeepers and quality control.