- my satisfaction with a huge website
- the effectiveness of a selection of ways to maintain or increase charge-out rates
- the cleanliness of a hotel room.
My problem? I didn’t know, didn’t care or didn’t remember. Sometimes all three. So I skipped the question or junked the questionnaire.
There’s a lesson here for choosing the ‘correct’ number of points in a rating scale. (I’m assuming that you need to offer a number of discrete points rather than offering some widget that allows a continuously variable scale).
Some people take the view that you should have an even number of points. This encourages your respondent to move off the fence and plump for a positive or negative rating.
I disagree. When I’ve done usability tests of questionnaires, I’ve found that users have many reasons for choosing the middle point. As well as ‘don’t know’, ‘don’t care’ and ‘don’t remember’, users have told me that they chose the middle point because:
- they didn’t understand the question
- they sometimes felt positive and sometimes felt negative
- they mostly felt quite positive but occasionally felt very negative (or vice versa)
- they felt positive about some aspect but negative about another
- or (could this one be particularly British?) they felt that it wasn’t up to them to express an opinion on this item.
So I take the view that you can’t force someone into a positive or negative opinion simply by taking away their middle point. They still have their opinion, and will express it in one of these ways:
- skip the question
- scrap the questionnaire
- split their votes: choose either mildly positive or mildly negative, at random, and make it stand for a middle point.
In effect, the ‘split’ tactic means that scales with an even number of points lose two of those points to the middle.
You can sometimes ease respondents away from clinging to the middle point if you offer a ‘don’t know’ or ‘not applicable’ option. The problem with these extra options is: where to put them? If they are close to the main rating points then they look like extra points in the scale and get confused with it. If they are separated from them, they get missed.
So, it’s an odd number of points for me. I’ve met some user resistance to 3-point scales, because respondents want to express milder and stronger shades of opinion, so I generally opt for 5-point scales. But when I’m testing the questionnaire I hear things like “oh, I’d never give a 5” (when 5 is the best mark) or “I like to be generous so I’ll tick 5” – contrasting strategies even though the expressed level of satisfaction is the same.

If interrogated closely, I’ll admit that when analysing data for myself I sometimes, bizarrely and irrationally, treat the points as if they expressed equal intervals of opinion and try manipulations like working out the average score out of 5 – a temptation that I resist when working with client data. There, I do the sensible thing and group the two positive points together for analysis, and the two negative points similarly. I’ve noticed that market researchers often do the same, and will report the ‘top two boxes’ together. They’re not being over-optimistic – simply reflecting the lack of real difference between the opinions expressed in the two boxes.
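The grouped analysis could be sketched like this. It’s a minimal illustration only – the function name, the sample ratings, and the 1-to-5 coding (5 = most positive) are my assumptions, not anything from the original article:

```python
from collections import Counter

def summarise_5_point(ratings):
    """Group 5-point scale responses into top-two / middle / bottom-two boxes.

    `ratings` is a list of integers 1..5, with 5 the most positive.
    (Hypothetical helper for illustration, not a published method.)
    """
    counts = Counter(ratings)
    total = len(ratings)
    return {
        "top two boxes": (counts[4] + counts[5]) / total,     # 4s and 5s reported together
        "middle": counts[3] / total,                          # the fence-sitters
        "bottom two boxes": (counts[1] + counts[2]) / total,  # 1s and 2s reported together
    }

# A small batch of made-up responses
ratings = [5, 4, 4, 3, 2, 5, 3, 1, 4, 5]
summary = summarise_5_point(ratings)
```

Grouping this way avoids treating the scale as if the gap between a 4 and a 5 were a measurable quantity – the very temptation described above.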
I’ve never tried 7- or 9-point scales myself. I’ve seen them used in questionnaires on psychological topics where, presumably, the experimenter has decided that the subjects really do want to express fine shades of meaning. In the more mundane world of the everyday satisfaction questionnaire, respondents’ time is too valuable and any theoretical benefit from the extra points is certainly offset by the possibility that respondents might be put off by the extra work entailed and by the challenge of deciding how to group the results.
As in most things in usability ‘your mileage may vary’. Let me know if you’ve got views that differ from mine.
This article first appeared in ‘Caroline’s Corner’, in the August 2003 edition of Usability News.
Featured image by Relly Annett-Baker, Creative Commons.