UX katas – heuristic evaluation
The “extended” UX team sharpening their swords…
At Redgate, our software engineers run regular “code katas”, workshops where developers practise their coding skills on simple problems, then discuss the experience afterwards as a learning exercise.
So, a while ago, I thought it might be a good idea to apply the same kata principle to UX tasks and activities. The aim is to help us:
- deliberately learn new skills (I’m very keen on “deliberate practice”, since I saw Kathy Sierra’s talk at Business of Software last year)
- sharpen the skills we don’t use that often as part of our day-to-day project work
- spend time together as a team
- share practices with people who are involved in UX activity outside of our team (for example, technical authors).
We organised our first kata last week, on “Heuristic evaluation”. The session lasted 90 minutes and I thought I’d share how it worked.
Session plan
Intro (10-15 min)
We spent the first 10 to 15 minutes reviewing the theory and methodology of heuristic evaluation: when it is suitable and when it isn't, how many evaluators you need, and so on. (You can read more about this here: http://www.nngroup.com/articles/how-to-conduct-a-heuristic-evaluation/)
We also reviewed the heuristics themselves, using the classic list of “10 usability heuristics for interface design” from Nielsen and the review checklist by Deniese Pierotti from Xerox. Whilst the checklist isn’t *quite* always applicable to websites, it gives a good sense of what types of issues relate to the different heuristics.
Evaluation (45 min)
Our aim was to evaluate the revamped Ryanair website. The format at this first stage was for each attendee to review the site individually. Whilst not strictly necessary, I’d prepared some user scenarios to help focus the evaluation on specific tasks. This meant we’d all look at the same parts of the site, thus increasing the number of different findings.
The scenarios used for our heuristic evaluation kata
The Ryanair website as of January 2013, thanks to Wayback.com
The Ryanair homepage as of July 2014
Review (20 min)
I split the attendees into two groups of five, hoping to demonstrate the impact of the number of evaluators on the results. Each group discussed their findings and listed the “issues” they found. Both groups found between 40 and 50 problems in total.
I am sure this won’t be a surprise for most of you: overall, we found that even after the revamp, the Ryanair site had a lot of usability issues. We assumed some of these are deliberately designed to confuse users and trick them into buying something they didn’t anticipate (like the “travel without insurance” menu item in the country list). Others are just affordance issues, and fixing them could improve the site without compromising Ryanair’s values (like making the “book now” text in the journey planner clickable).
One page of the many we produced…
Debrief (10 min)
We took a bit of time to discuss the kata format, and what we’d need to improve for next time.
Findings:
Here is what we came up with:
Conclusion:
We found a large number of issues in a short amount of time, but due to time constraints we didn’t get to prioritise them during the session. I’d forgotten how valuable heuristic evaluations can be; when there are a few of you, it really is a good thing to do.
If you are lucky enough to have a group of UX practitioners in your organisation, I’d highly recommend running a kata like this. The 90 minutes were well worth it: not only did we review (and in some cases learn) the heuristics, but it was also fun and we had a good laugh too (often at the expense of Ryanair).
There is still some room for improvement, though. We reckon we’d need to give the “non-UX-ers” access to the tools and methods more in advance, so they can become familiar with the techniques before using them… We’ll certainly try to do this for our next kata: cognitive walkthroughs.
Now, I am considering whether we should send the list of issues to Ryanair… What do you reckon they’d say?