One facilitator good, four facilitators better?

I’m a lone consultant, and occasionally I get lucky: I persuade a client that it would be great if we could have a trained observer/logger helping me to conduct the usability test. That’s about as elaborate as it gets.

But the other day an intriguing question came up. Someone had joined an organisation where there was big pressure for numbers and short timescales. So they had been conducting tests with three or four facilitators and 35 participants, and then spending an hour together to combine the results. She thought that wasn’t a great way to do it and was looking for advice on something better.

Now that’s way outside my experience (35 participants? Heck, I’m lucky to get 8), so for once I didn’t pipe up with advice. But Carolyn Snyder did, and her advice struck me as so helpful that I asked her if I could base this month’s article on it. So here we go, over to Carolyn:

Cut down the numbers

My first reaction is that those numbers are overkill, and they’re leaving a lot of data unanalysed. It’s inefficient. Try convincing them to cut down to a maximum of 4 facilitators and 20 tests (which is still a lot, but pick whatever number you think they’ll accept in the short term). That will make time for some activities that’ll help them to co-ordinate what they are doing.

Pilot testing

All the facilitators should do at least one pilot test together, with one of them (or some other ‘friendly’) playing the role of user. They should discuss things like, ‘Do we want to start this task from the home page, or from where the user left off?’ and ‘Do our task instructions contain any inadvertent clues?’

Caveat: don’t prematurely decide where the problems are; the focus is on covering the areas of interest and removing the most egregious sources of bias.

Independent analysis first

Have each facilitator spend an hour going over their own data to pick out the key issues, before they get together as a group. What I typically do is have people write their top 10-20 issues on index cards and bring them to the meeting, where we do an affinity diagram. This technique reduces the risk that the analysis meeting is dominated by one or two individuals.

Methodology debrief

After the group analysis, have each person identify a handful of issues that were found by other facilitators (which is easy when they’re written on cards), and think about why s/he didn’t find them. Some reasons have little to do with facilitation, eg ‘we didn’t get to that task,’ or ‘that user knew something the others didn’t’. But the things you want to surface are those pertaining to the facilitator, eg ‘I gave the user a hint,’ or ‘It didn’t occur to me to ask that question’. Have them talk about those things, and also things they did that seemed to work well.

Caveat: don’t let facilitators criticise each other’s methods. Assure them that even the most experienced usability specialists don’t always agree, and the goal is to learn the others’ methods and facilitation style, and adopt successful ‘tricks’.

Keep working at it

Over time, the result will be a smaller amount of data in a more similar format, simply because the facilitators will become more like each other. Then the group can move toward further standardising their data collection/analysis if there is a need. (They may also eventually realise that their methods are still too redundant, and cut down further on the number of tests or facilitators.)

And a note from Caroline

Thanks, Carolyn. Although I can’t quite envisage running tests with so many users and so many facilitators, I know these tips will be helpful for mentoring teams whose usability testing practice has drifted a little and who are having trouble deciding how to bring themselves back in line with each other.

Carolyn Snyder is a usability consultant and author of Paper Prototyping. Her website is at http://www.snyderconsulting.net/

This article first appeared in Usability News

Image: Training facilitators, by USFS Region 5, Creative Commons