Software Tool Design: The Three Rs

To understand the full extent of the requirements of your users when you are redesigning a software tool, you have to talk to them, and observe how they are currently using the package. For this sort of research, there is no established rule-book, but there certainly are pitfalls and rewards. Stephen Chambers offers his advice and tells of some experiences.

Part 1: The Three ‘R’s – Research, Research & Research


Designing a successful new product is almost impossible. Redesigning an existing, well-known product is even more stressful. We know, because we’ve just been through it. While the experience is still fresh in our minds, we’ll explain how we did it, in case you’d like to learn something from it.

A little background…

There comes a time in the life of every software product when you have to consider a redesign, which will vary in its magnitude. It is not a decision to be taken lightly. However, we had to do something, because the underlying architecture of the current version of our ANTS Profiler tool, a .NET code profiler, was starting to feel the strain of modern, heavily multi-threaded applications on 64-bit machines running under Vista. Our product had been the first of its type, but competitors had leapfrogged our design, and were cutting into our market share. Early in 2008 the ANTS team, consisting of developers, testers and a usability engineer, set about the task of redesigning ANTS Profiler.

It’s good to talk

So where do you start with such a task? You could rely on the advice you’ve been getting in forums. You could even use your own intuition. Both are, of course, bad ideas. We decided to tackle the challenge by:

  1. Talking in depth with users of our software (downloaders and customers)
  2. Evaluating our current software through usability sessions
  3. Validating our findings on a larger scale with questionnaire data

Talking with Users – The Matrix Effect

Our plan was simple: We would speak to 60 people who had recently downloaded our product. The number 60 appealed to us because we felt the sheer scale of the task would drive us on, even once reality began to kick in around the third phone call.

The whole business of requirements-gathering is a team activity. Because we all come with opinions and ideas, we can unintentionally influence the process; it is therefore better for several people to be involved, including the developers, so that any bias shows up and is more likely to be ironed out.

We prepared a general script of what we wanted to discuss so that we didn’t end up with 60 potentially random conversations. We especially wanted to talk to people who had downloaded the product within the last 10 days. We wanted to hear why they had needed to search for a product such as ours, how they had used the tool, and how successful they’d been. We needed this information while the details of, and their feelings about, our product were still clear in their minds, when they could, we hoped, give us the detail we wanted. We also contacted existing customers, those people who had actually purchased our software in the last year, and went through the same structured conversation with them.
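The “downloaded within the last 10 days” filter is the kind of thing you’d apply to an exported contact list before picking up the phone. As a minimal sketch (the names, dates and record shape here are purely illustrative, not our actual data):

```python
from datetime import date, timedelta

# Hypothetical contact records: (name, download date). Illustrative only.
downloads = [
    ("Alice", date(2008, 3, 1)),
    ("Bob", date(2008, 2, 10)),
    ("Carol", date(2008, 3, 5)),
]

def recent_downloaders(records, today, window_days=10):
    """Keep only contacts whose download falls within the last `window_days`."""
    cutoff = today - timedelta(days=window_days)
    return [name for name, downloaded in records if downloaded >= cutoff]

print(recent_downloaders(downloads, today=date(2008, 3, 8)))
# → ['Alice', 'Carol']
```

The same windowing idea works for any “still fresh in their minds” criterion; only the cutoff changes.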

We learned three important facts:

  • Make sure you get your time zones right when phoning someone in the USA from the UK. People aren’t too talkative about profiling at 5am their time, while still in bed. Lazy fellows!

  • Cold-calling people who have downloaded your product is a difficult and tiring task, and I developed a great respect for our sales team. Occasionally, it was also an uncomfortable experience: some programmers found it hard to resist the opportunity to tell me, as the embodiment of ANTS, exactly what they thought of me and why the $*&^ it was not working with their 64-bit application! While they paused for breath, you’d start to try to pacify them, but they’d be off again with “and another thing…”. It is an experience that stays with you. I even had one person tell me gleefully that they had purchased an alternative product from a competitor. Ouch! No need for that! However, there were many conversations which made it all worthwhile. People told us detailed stories that were pure gold, as well as providing us with friendly contacts to whom we could later show the new designs – a double bonus.
  • If you restrict yourself to speaking with only your existing customers, you often end up hearing a very familiar story that goes something like “I had a problem, needed a solution, found your product, tried it, liked it, and bought it. Great stuff.” This is dangerous, and far too cosy. If pushed, they might volunteer the occasional “I guess it might be nice if…” or “or maybe…”. If, instead, you talk to a group of users who have simply downloaded your product but haven’t actually bought it, you may see a very different picture. We did. In many ways these were the most enlightening and varied conversations of all and, at times, the most painful.

If you only speak to your existing customers, you get lulled into a cosy but distorted reality, much like the one Keanu Reeves inhabited in The Matrix before choosing the red pill, which of course changed his view of the world forever. Your product is probably indispensable to some customers, and they’ll meet the idea of change with suspicion and apprehension, which is, in fairness, an understandable reaction. A common argument that change isn’t needed is “We get lots of positive comments about our product” or “Lots of people love our product”. Yes, of course, but then how many people actually think the opposite and haven’t wasted any more of their time telling you?

Usability Testing your Application – who should you use?

We use ANTS Profiler ourselves when developing our whole range of products: we “eat our own dog food”. Developers are our target market and we are a software company, so that seems sensible. I therefore had, in the same office, a number of experienced developers who used the product frequently, a number who had used it from time to time, and our more recent junior developers who had never used it and knew little about profiling. As sample populations go, it was surprisingly close to the distribution of expertise in the wider population of ANTS users. They were therefore ideal for usability testing.

We saw at once that users assumed they could navigate using a code view, and clearly needed to do so, but the functionality simply didn’t exist in the current version. It was particularly amusing to watch the futile attempts of novice participants to find this functionality by energetically single-clicking, double-clicking, right-clicking, searching the options menus and getting creative with various shortcut combinations, all in search of a feature the application simply didn’t have.

Users were being forced to choose a different, much less efficient workflow that wasn’t what they actually wanted to do. It was obviously wrong, but here was a surprise: of all the 750 posts in the support forum for the current version, there was not one request for this particular feature. How many companies decide the future direction of their product based simply on forum feedback? The focus of a forum is to report problems and find solutions, not to suggest new features. The suggestions on the forum typically described features that would suit a single person but leave the majority of users cold.

Expert/Novice Differences during testing

The difference in work strategies between experienced users of the application and novices became immediately apparent. Developers who were experienced with the product had added additional steps to their initial workflow which, at first, seemed more time-consuming. In fact, it was an ingenious strategy that reduced the amount of information they were presented with at the end, making the trickier analysis section a great deal easier. This ‘trick’, as one developer called it, led to a much more efficient analysis stage. Almost none of the participants who were new to the product were able to benefit from this advanced strategy; they simply didn’t know about it.

These observations are important for two reasons. Firstly, they gave us an insight into the ways that users wanted to use the tool, and showed us the important shortcomings that had to be tackled when design started for the next version. Secondly, they highlighted the importance of testing your product across the whole spectrum of users. Humans are, by their very nature, rational and adaptive, but depend on the information they have at that particular time and their understanding of it. If you only test the application on naïve users, you may introduce features that are of no use to, or could actually impede, the experienced users. By looking only at expert users, you’ll often spot strategies that indicate how the product should be developed, but you may end up with a product that is tricky to learn.

A second anecdotal observation from testing experienced users of our software was that, when it came to demonstrating their usage of our profiler, they picked scenarios and tasks that exposed some of the shortcomings of the tool in meeting their current needs or workflow. Novice users, on the other hand, tended to pick simpler and lighter applications, which ANTS Profiler could easily cope with, and therefore didn’t test the profiler to the full. It is important to be aware of these factors while you carry out your research.

It was also important to identify those features that definitely shouldn’t be changed; we found that it wasn’t necessary to have a separate usability test for this because those details bubbled to the top across all groups and were more obvious almost from the beginning.

Validating what you think you already know – Questionnaires

Once the bruising had gone down, we structured the valuable feedback from the telephone conversations into a questionnaire. We wanted to confirm, with a wider audience, that what we had been hearing on the phone struck a chord with a larger sample. So, we put together a short list of questions which would take no more than 5 minutes to complete. We advertised our questionnaire and received over 350 responses within 2 weeks. Perfect.

The questionnaire cleared the air. Whilst it generally confirmed what we’d been told, it clarified one or two puzzles. We had, for example, mentioned the idea of expanding the functionality and power of our profiling API. “Great idea” was all I heard. I then asked how they used our current API. “Umm… I didn’t, but I’m sure someone out there would find it useful.” I never did speak to that someone; the API didn’t seem to be used by anyone we talked to. Who was this ‘someone’, we wondered? To find out, we asked users to rate the value of ten features of the product. The idea of a more powerful API ranked lowest by far!
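The analysis behind that ranking is simple: average each feature’s ratings across respondents, then sort. A minimal sketch follows; the feature names and scores are made up for illustration and are not our actual questionnaire data:

```python
# Hypothetical questionnaire responses: each maps a feature to a 1-5 rating.
responses = [
    {"code view": 5, "call graph": 4, "powerful API": 1},
    {"code view": 4, "call graph": 5, "powerful API": 2},
    {"code view": 5, "call graph": 3, "powerful API": 1},
]

def rank_features(responses):
    """Average each feature's ratings, then sort highest-rated first."""
    totals, counts = {}, {}
    for response in responses:
        for feature, score in response.items():
            totals[feature] = totals.get(feature, 0) + score
            counts[feature] = counts.get(feature, 0) + 1
    means = {f: totals[f] / counts[f] for f in totals}
    return sorted(means.items(), key=lambda kv: kv[1], reverse=True)

for feature, mean in rank_features(responses):
    print(f"{feature}: {mean:.2f}")
```

Summing per feature (rather than assuming every respondent rated everything) also copes gracefully with partially completed questionnaires.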

It was nice to see that the items most highly rated were the same as in the responses from our conversations.

By cross-checking the results of the phone conversations with the questionnaire, we felt more confident that we had received a clear and consistent message.


In this article I’ve tried to provide an insight into some of the methods that we employed when attempting to understand the user requirements for our ANTS Profiler project. Testing participants who differed in their experience with the tool, and varying the tasks we used, led to far more observations than testing a single group could ever have unveiled. Talking with people who downloaded our product but found it insufficient for their needs provided the greatest insight of all into what our product needed to achieve to once again lead the market. If we hadn’t done the research, the focus of the product would be completely different. There would be no code view navigation, we would have a very powerful API that nobody was using, and it’s doubtful whether a call graph feature would have made it in.

Our process wasn’t perfect by any means. For example, we realised at one point that we simply couldn’t answer some key questions, which meant pausing the design and implementation phase until we had the answers. An expensive decision, but how much would getting it wrong have cost in the end? This error was caused by not adapting the scripts as our conversations continued. We were getting the information we wanted, but not adjusting our focus enough to understand sufficiently the new information we were receiving.

Additionally, our process of cold calling was undoubtedly inefficient. If we had had an established feedback mechanism from our sales team to the project team before starting the research, for example, we would have saved a lot of unnecessary work and time. Having a list of potentially useful contacts who had already been reached as part of the normal sales process, and who had agreed to talk about their experience, would have been a great time saver. A large number of people were happy to get involved because they knew they were being genuinely listened to, and that it might lead to the creation of a tool that could benefit their working lives. Cold calling users in the middle of their day, when they weren’t expecting your call, led to a lot of wasted time and dead ends due to inaccurate contact information, all of which could have been avoided if a better feedback system had been in place.

I suppose the real lesson is to cast your net far and wide when designing or redesigning a product. Don’t restrict yourself to what you think you know. Instead, concentrate your efforts on understanding how your software is being used, whether it fits with how people want to use it, and who and what you are missing out on. When it comes to knowing your users’ needs well, there really isn’t an excuse for not doing your homework.

Part 2 of this article will reveal how we persuaded everyone to get involved in the design stage. It involved a rather underhand approach from our division manager…

Since this article was written, ANTS 4 Performance Profiler has been released and is available for download.