“A Meeting without an Objective is a Chat”, so states the Book of Redgate. In my experience, setting a high-level objective for a meeting is easier than getting agreement on the list of actions needed to reach it. Likewise, setting the goals of a usability test seems much easier than forming the prioritised list of development actions or product features afterwards.
At Redgate we are always keen to use facilitation techniques such as Gamestorming to encourage participation from everyone in a meeting. If you are not familiar with Gamestorming, it is a set of collaborative activities focused on resolving specific issues.
We decided to spend a “Collab Lab” session evaluating two techniques for closing a collaborative session – Dot Voting and Force Ranking – to establish their relative advantages and gain some experience of when to use one rather than the other.
Dot Voting is a simple technique for prioritising a list of items into an agreed solution; the items could be actions or product features. The items are written on a whiteboard or on Post-It notes stuck to a wall. Each participant gets a set number of votes to cast on those items – they can even vote for the same item multiple times if they feel strongly about it, and some items may receive no votes at all. In this exercise we used a whiteboard and the participants cast their votes with markers, which makes it easier to remove and re-cast a vote than using sticky dots or making a more indelible mark.
In our session the list of 10 items was written on a whiteboard and each participant got 5 votes. All the participants cast their votes at the same time by marking an item with a dot. This is a public activity: participants can see the others’ votes being cast. After the votes had been cast they were tallied to prioritise the items.
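The tallying step is just a frequency count. As a minimal sketch – using illustrative ballots, not the session's real data – the same process can be expressed in a few lines of Python:

```python
# Hypothetical Dot Voting tally: each participant casts 5 votes across
# numbered items, and may vote for the same item more than once.
from collections import Counter

# Example ballots (item numbers are made up for illustration).
ballots = [
    [1, 1, 3, 5, 8],    # this participant voted twice for item 1
    [2, 3, 3, 5, 9],
    [1, 3, 5, 5, 10],
]

# Count every vote across all ballots.
tally = Counter(vote for ballot in ballots for vote in ballot)

# Items sorted by vote count, highest first; unvoted items simply don't appear.
priorities = tally.most_common()
for item, votes in priorities:
    print(f"Item {item}: {votes} vote(s)")
```

Note that any item nobody voted for never enters the tally at all, which is exactly the "tough choices could be avoided" effect discussed below.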
Force Ranking is also a technique for prioritising a list of items into an agreed solution. Where it differs from Dot Voting is that every item must be ranked relative to the others. The important consideration for the facilitator is the framing of the question given to the participants – the criterion needs to be very clear. For example, “the most important features for the next software version”.
In our session the list of 10 items was printed out and each participant received a copy. The facilitator framed the exercise around a specific criterion, and the participants were then given 6 minutes to rank the items from 1 to 10, where 1 was most important and 10 least important.
After the time had elapsed, the rankings for each item were tallied to prioritise the items.
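One simple way to tally individual rankings into a group result is to sum each item's ranks, with the lowest total winning. This is a sketch under that assumption, using made-up rankings rather than the session's actual results:

```python
# Hypothetical Force Ranking tally: each participant ranks every item,
# 1 = most important. Summing ranks per item gives a group ordering in
# which the lowest total is the group's top priority.

# One dict per participant, mapping item -> rank (illustrative data).
rankings = [
    {"A": 1, "B": 2, "C": 3, "D": 4},
    {"A": 2, "B": 1, "C": 4, "D": 3},
    {"A": 1, "B": 3, "C": 2, "D": 4},
]

totals: dict[str, int] = {}
for ranking in rankings:
    for item, rank in ranking.items():
        totals[item] = totals.get(item, 0) + rank

# Sort ascending by total rank: smallest sum first.
prioritised = sorted(totals.items(), key=lambda kv: kv[1])
for item, total in prioritised:
    print(f"{item}: total rank {total}")
```

Unlike the Dot Voting tally, every item is guaranteed to appear in the result, because every participant had to place every item somewhere.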
After both activities were completed the priority scores were compared and the favourite items ranked in order, as below.
Comparing the two activities, there was little difference in the overall ranking, but there were variances in the results for 3 items (5, 7 & 8). The participants suggested this difference arose because Force Ranking made them consider the items’ importance relative to each other, not just which were most important overall. This made them re-evaluate items more thoroughly during Force Ranking.
Although a couple of participants force ranked the items easily in one go, most commented that they changed their minds a lot and wished they had an easier way to shuffle the items into order as they iterated. They suggested that having each item cut out would have made it easier to achieve this.
If you need to reach consensus on a list of actions then either Dot Voting or Force Ranking is a great activity for getting all stakeholders involved, and the results are broadly similar for the most popular items. Choose Dot Voting when the most popular items need to be established; choose Force Ranking when you need every item ranked in relative popularity.
Here’s a summary of what we discovered about using the two different techniques:

Dot Voting:
- Easy and quick to establish the most popular choices
- Useful where not all choices are necessary
- Voting multiple times for the same item can establish the strength of opinion
- Simultaneous ranking activity – seeing the votes already cast could influence the casting of remaining votes
- Even though participants could change their minds, none of them changed a vote once cast. Although the marker dots made this easy to do, some commented that they soon forgot where they had voted and were concerned they risked changing a vote that was not their own
- Tough choices can be avoided, since not all items may get votes. The facilitator should revisit items without votes with the participants, to check they weren’t overlooked for no good reason, before settling on the final list of actions to focus on next
Force Ranking:
- Takes longer than Dot Voting – can feel like harder work too
- Individual ranking activity – less likelihood of influencing others
- Tough to sort the middle ranked items
- No direct indication of strength of opinion for specific items other than the overall tally
- Participants reported that they changed their minds a lot when ranking – they suggested having the list as Post-Its or cut-out slips that they could easily re-order
- Forces tough choices when ranking unpopular items
- Ranking every item requires more careful consideration of each one