The unexpected consequences of compiling user feedback at collection
I’ve run plenty of face-to-face usability sessions, but since joining Redgate I’ve found myself having to run remote sessions. In doing so I’ve discovered a great technique, and stumbled upon some unexpected consequences of using it.
Redgate has a vast user base, and it turns out that a significant portion of it is located in the U.S. (though this shouldn’t have been a surprise). Meeting users face-to-face for usability testing is therefore less common, and there’s a greater reliance on remote sessions.
Remote usability sessions – where to start
Having not previously run a session remotely, I turned to the UX Team for guidance. Among lots of helpful advice and feedback on my plans were these three tips from Adam Parker:
- Encourage members of the project team to join you for the sessions
- Ask those in the room to participate by collecting notes
- Set up a feedback framework for collecting those notes (this bit is crucial)
The first two points are sensible. If you’re like me, you’ve not got enough brain space to run the session and take notes, so ask someone else to do it. By asking people to help, you’re doing two things: lightly introducing them to UX, and giving them something to concentrate on so they don’t end up staring out of the window.
I should point out that the project team involved in this example needed no encouragement to join the session or to stay focused during it.
The third point is where it gets interesting – “set up a feedback framework”. Here’s what Adam suggested:
Get participants in the room to collect notes in one of four categories – ‘good things’, ‘surprises’, ‘known errors’ and ‘unknown errors’.
As this is Redgate, sticky notes were compulsory.
So it was:
- Green notes for “Good things” – a.k.a. “Hey! Our hypothesis was right”
- Blue notes for “Surprises” – a.k.a. “Well, I did not expect them to say/do/try that”
- Orange notes for “Mistakes made knowingly” (they got it wrong and realised)
- Purple notes for “Mistakes made unknowingly” (they got it wrong and had no idea)
Here’s what I ended up with after my first session – pre-categorised items of feedback, compiled onto each of the three screens that were tested during the session:
Unexpected consequences
For this product feature I ran three sessions, and by the end of them I had collected a serious quantity of feedback comments. The compiling had been done, but I still had to digest the content – and here’s where the surprise came.
I stuck every comment up on the wall underneath the corresponding wireframe, with each category represented by a column. Once I’d put all the comments in their columns, I ended up with this incredible and unexpected visualisation:
With one glance it’s possible to see where we got everything pretty much spot on (the left screen), where we did well but one or two question marks still existed (the middle screen), and finally where we’d done well but a lot of questions were left unanswered (the right screen).
When you stand in front of this with other members of the team, you’ve got pretty compelling evidence as to which area(s) to focus attention on. All for free, with the help of your ‘in the room’ participants.
By the way, there’s been very little de-duping here – duplicate comments have been left in – but I think that helps emphasise which areas require attention.
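If your ‘in the room’ participants are capturing notes digitally rather than on stickies (more likely when everyone is remote), you can get a rough equivalent of the wall view with a few lines of code. The sketch below is only an illustration, not part of the process described above: the notes, screen labels and counts are all invented, and it simply assumes each note has been recorded as a (screen, category, comment) tuple using the four categories from earlier.

```python
from collections import Counter

# Hypothetical notes captured by the note-takers during the sessions.
# Each note is (screen, category, comment); the categories match the sticky-note colours above.
notes = [
    ("Screen 1", "Good things", "Found the connection settings immediately"),
    ("Screen 1", "Good things", "Liked the summary panel"),
    ("Screen 2", "Surprises", "Tried to drag the table name into the filter"),
    ("Screen 2", "Mistakes made knowingly", "Clicked Back, then corrected themselves"),
    ("Screen 3", "Surprises", "Expected a preview before committing"),
    ("Screen 3", "Mistakes made unknowingly", "Saved without noticing the warning"),
    ("Screen 3", "Mistakes made unknowingly", "Missed the excluded objects entirely"),
]

categories = [
    "Good things",
    "Surprises",
    "Mistakes made knowingly",
    "Mistakes made unknowingly",
]

# Tally notes per screen and category - the digital equivalent of the columns on the wall.
counts = Counter((screen, category) for screen, category, _ in notes)
screens = sorted({screen for screen, _, _ in notes})

# Print a simple table: one row per category, one column per screen.
print(f"{'':<26}" + "".join(f"{s:>10}" for s in screens))
for category in categories:
    row = "".join(f"{counts[(s, category)]:>10}" for s in screens)
    print(f"{category:<26}{row}")
```

A screen whose “mistake” counts pile up stands out in the output in much the same way a dense column of stickies does on the wall – and you keep the duplicates for exactly the reason above.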