Differences between participants and users: representative or not?

“Rule 1 for usability testing: get representative users”

Read something like that? Said something like that? I certainly have. And I definitely agree with it, on the whole. But not always: so I thought I’d muse on the issue in this month’s column.

Why we need representative users (but don’t always get them)

Usability is about “specified users with specified goals in a specified context of use”, to use the inelegant but accurate wording of ISO 9241-11. So we need participants to be ‘specified users’. And given that we’re unlikely to be able to get every one of the specified users to take part, we compromise and aim to choose them so that they are ‘representative’.

But it’s not always possible or convenient to find representative users. Or maybe we’ve recruited what we think are indeed representative users but they turn out not to be as representative as we hoped. Does that mean the testing was a waste of time?

My view is: probably not. Any testing is usually better than no testing. Let’s look at one particular dimension: experience with the product.

Thinking of users according to their exposure to the product

Your product could be a website, an application, a device, whatever. Does it feel like the centre of your world? OK, so let’s envisage it that way: the product at the centre, with rings of people around it. At the core, we’ve got you and your closest colleagues or clients: the core team working on your product. Surrounding the core, think of a ring of people with expertise in the concepts and ideas of your project – maybe they’re marketing people, sales people, power users, trainers. Surrounding them, think of your repeat and regular users. And then the outer ring: novices, occasional users, new customers.

Every repeat or regular user was a novice once; power users used to be regular users. Maybe your core team were never users, but there’s a good chance that they have a clear grasp of how the thing should be used. So as we progress from the outer ring to the core, we’ve got a gradual increase in knowledge.

[Image: sculpture of concentric circles]

Good experience flows in but not out

For most of us, the ‘specified users’ are mostly to be found in the two outer rings: the novices and the repeat users. And often it’s the outermost novices who are most important. We want to provide a good first experience, and then hope to convert them into repeats.

If a novice user finds the product easy to use, then chances are that the inner rings will find it easy too. It’s not perfect: maybe there’s some feature that only a power user would use, and that feature isn’t working too well. But overall, a good experience in the outer ring is likely to flow into a good experience for the users in the more experienced inner rings. So we often define our specified users in that outer ring and aim to recruit them for our tests.

But it doesn’t work the other way around, from the core outwards. If you’ve done a few usability tests, I’m sure you can quote many examples to me where your core team thought the product would definitely provide a good experience – and were sadly proved wrong.

Problems flow out

Now let’s think about problems. If someone in the outer ring hits a problem, OK, that’s what we’re testing for.

What if someone in the core team tries to use the product and hits a usability problem? Well, the chances are that problem will flow out irrespective of knowledge: it will also be a problem for less experienced users.

Or what if a trainer tries to use it and points out a few defects? They’re not ‘representative users’ – but you’d be wise to pay attention to them.

It’s not perfect, because sometimes they’re stressing about something that the users will never notice (see my article on colons on the end of labels for an example). And I’ve had examples where the team spent so much time on problems spotted by the inner rings that they never got to the target users at all until it was far too late. But more often, a modicum of unrepresentative testing has been easy to organise and well worthwhile, picking out boringly obvious issues that had somehow escaped our notice.

Thinking about how your participants differ from your users

For me, it all comes down to asking ‘how do these participants differ from my specified users?’ And they will differ – if not in experience, then in some other way such as attitude.

I’ve learned that it’s worth asking myself that question after every test, every participant – even where I think my recruitment process has been immaculate. It helps to guard against the ‘it’s not perfect’ places where I might be over-complacent about finding good experience, or (less often but still important) over-anxious about problems.

This article first appeared in Usability News

Image: Concentric, by David Edwards, Creative Commons