Process or outcome? Measuring the success of usability

[Image: a boy standing in front of a banner reading ‘success is measured by...’; the final word is obscured.]

How do we measure usability when the start and end points are hard to define, and our work is just one intervention? Maybe real successes come person by person, as attitudes change.

A friend has been working with a client for nine months. She’s been a sort of ‘usability midwife’ to the birth of their redesigned website: a major development, with loads of nasty back-end integration problems and the effort of many teams. Her role was to conduct a series of usability tests on different iterations. And she told me it had finally launched.

“So, are you pleased with it?” I asked.

“No, it’s horrible,” she said. “But it’s much better than it might have been.”

Which set me brooding on how we measure our successes.

The theory of measuring usability success

Measuring the usability of something is quite easy: define your product, users, goals and context. Run a measurement evaluation (also known as a ‘summative evaluation’). Publish your report. You’re done. There’s even an ISO standard for reporting these results: the “Common Industry Format” (ISO/IEC 25062). And you’ll find the details of how to do it discussed in any of the standard textbooks: my own, for example.

But that’s a different matter from measuring the success of ‘usability’ itself, by which I mean a single user-centred design activity or a series of them. I’m going to call this a ‘usability intervention’.

To measure the success of a usability intervention, we need to know:

  • what the starting point was, ideally by measuring the usability of the product before the usability intervention
  • how the usability intervention affected the product
  • what the finishing point was, by measuring the usability of the product again.

Do some sums, and bingo: you’ve got a return-on-investment success story.
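To make those sums concrete, here is a minimal sketch with purely illustrative figures (none of them from a real project): suppose the before-and-after measurements show average task time falling from 10 minutes to 6, across 100,000 tasks a year, with people’s time costed at £20 an hour, set against a £50,000 intervention.

```latex
% Purely illustrative figures -- not taken from any real project
\[
\text{Annual saving} \approx 100{,}000 \times \frac{(10 - 6)\ \text{minutes}}{60} \times £20/\text{hour} \approx £133{,}000
\]
\[
\text{Return on investment} = \frac{£133{,}000 - £50{,}000}{£50{,}000} \approx 167\%
\]
```

On those invented numbers the intervention pays for itself comfortably; the rest of this piece is about why real projects rarely hand you numbers that clean.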

The challenge of the success story

The problem is real life, as usual.

The starting point is often unclear. Was there a particular product that could be measured? Was the whole thing just a sketch on a whiteboard? Was the organisation willing to allow any statement of the start point to be created, even for internal use?

The effect of the usability intervention is often unclear. Many people worked on the product. Some of them were influenced by the usability intervention; others were not. Even if we take the simplest case, a single usability test with a bunch of findings and recommendations, the influence can be hard to pin down. The team working on one area of the product might fight you line by line and reluctantly implement only the least invasive of the suggestions. The team working on another area suddenly ‘gets it’ and starts behaving in a wholly user-centred manner, coming up with lots of brilliant ideas of their own that markedly improve the product. It’s easier to measure the effect of the intervention on the ‘reluctant’ team (but it won’t be much). The ‘get it’ team will achieve a more usable product: but was that solely the result of the intervention?

Even the finishing point is often unclear. Final release? The version immediately after the round of changes following one specific activity? The whole product, or just the bit of it that was worked on intensively in the intervention?

The measurement of process improvement

So we often fall back on intangibles, like measuring the ‘user-centredness’ of an organisation.

A while ago, a large organisation that I worked with decided to try to do this. While a few of us were fighting in the trenches of some tough product development, the consultants came in. They ‘measured’ the ‘user-centredness’ of the organisation by doing a few interviews here and there. They did a bit of proselytising for user-centred design methods, and then they repeated the measurements. Bingo: things had improved.

But what had actually happened was that our line-by-line, recommendation-by-recommendation fights for change had been going on at the same time. And we’d had enough successes that the products had in fact got better. And a few teams had ‘got it’ because they’d seen what was going on.

The consultants published their success. Really, it was our success. But it was a very, very complex situation and we never published:

  • The start point wasn’t obvious. There were many good reasons why the organisation didn’t want to state in public how bad the situation was before we began.
  • Many, many changes happened all at once. Some of them were influenced by usability activities, others weren’t. It almost seemed random: we’d make some recommendations, things would happen, no obvious connection.
  • And like my friend, we ended with something that was much, much better than it could have been. But still horrible.

Celebrating the capture of hearts and minds

In the end, I don’t think either type of measurement really works. I think the real successes come person by person, as attitudes change. And sometimes that will have no effect on the current product at all. I know my friend has been successful on her project, because I know that she’s created usability evangelists from a whole slew of people who are now inspired to keep working to turn their horrible website into a better one, release by release. And that’s what we’ve got to celebrate.

This article first appeared in Usability News

Image: Measured Success, by Richard Elzey, Creative Commons

Featured image: Success Key, by GotCredit, Creative Commons