In the 1950s, a well-designed survey could often achieve over 90% response rates. Since then, response rates have consistently declined.
But I was still a bit shocked the other day when a post on a usability discussion group quoted a ‘typical response rate of 2%’ as if that were something we all knew as a fact.
The stats are a problem
2% is a terrible response rate. Why? Because there’s such a big chance that the people who filled in the survey are different from the people who didn’t.
With any survey, you need to look at the profile of the people who responded and satisfy yourself that they are about the same as the people who didn’t respond – and also, that they’re about the same as the overall population that you’re sampling.
So we try really hard to design a sampling method that gives everyone in the population an equal chance of being selected as a potential respondent. If everyone has an equal chance of selection, then our sample should, on average, be about the same as the overall population.
But if we have a poor level of response, we make it almost certain that there will be some important differences between those who responded and those who didn’t. The assumption that our sample reflects the population as a whole gets blown away – and with it, the possibility of doing any inferential statistics.
Inferential statistics? Making claims that the results from the survey reflect the results that we might have got from the population as a whole.
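To make that concrete, here is a rough sketch (my own illustration, using invented numbers) of the kind of claim inferential statistics lets you make. The standard confidence interval for a proportion is only valid if respondents really are a random sample of the population, which is exactly the assumption a low response rate undermines:

```python
import math

# Hypothetical survey: 400 people respond, 240 of them (60%) say "yes".
n = 400
p = 240 / n

# Standard 95% confidence interval for a proportion.
# This formula assumes respondents are a simple random sample
# of the population - no nonresponse bias.
margin = 1.96 * math.sqrt(p * (1 - p) / n)

print(f"{p:.0%} yes, plus or minus {margin:.1%}")  # 60% yes, plus or minus 4.8%
```

The inference is the "plus or minus" part: a claim about the whole population, not just the 400 respondents. If the respondents are systematically unusual, the margin of error is meaningless no matter how small it is.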
Now here is where a bit of opinion and judgement comes in. One side of the argument goes like this:
- the propensity to answer questions is independent of the things we are trying to measure;
- we have no reason to suppose that people who like to answer questions are different from our population in any other respect;
- therefore, it doesn’t matter all that much if we get a low response rate.
My opinion is different. I take the view that if our respondents are only 2% of our sample, then we already know that they are very unusual. The chances that they are unusual in many other ways as well are high. So that’s how I arrive at the view that 2% is a terrible response rate and blows away the rest of our statistics.
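A small simulation shows how this bias plays out. The numbers here are invented for illustration: I assume a population split 50/50 on some question, and that people on one side of it are three times as likely to be keen question-answerers as people on the other. That modest difference in propensity is enough to wreck the estimate:

```python
import random

random.seed(1)

# Hypothetical population of 100,000 people: exactly half are "satisfied".
population = [True] * 50_000 + [False] * 50_000

# Assumption (invented for illustration): satisfied people respond with
# probability 3%, dissatisfied people with probability 1% - i.e. the
# keen question-answerers are not evenly spread across opinions.
def responds(satisfied):
    return random.random() < (0.03 if satisfied else 0.01)

responses = [person for person in population if responds(person)]

response_rate = len(responses) / len(population)
estimate = sum(responses) / len(responses)

print(f"response rate: {response_rate:.1%}")       # around 2%
print(f"estimated satisfaction: {estimate:.1%}")   # around 75%, true value is 50%
```

With a 2% response rate, the survey reports roughly three-quarters satisfied when the true figure is half. And nothing in the returned questionnaires tells you this has happened.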
Where does the 2% ‘fact’ come from?
So where does this ‘fact’ about typical response rates come from? I believe that there are two influences:
- Almost any survey can achieve a 1% to 2% response rate. It seems that most populations have around 1% to 2% of people who just really enjoy answering questions. They can’t resist this strange pleasure, and they’ll answer anything. But do you want to base your business decisions on them?
- This sort of response rate is indeed typical for a full-page advertisement in a weekend supplement – and not all that unusual for cold-calling direct mail. The difference is that in advertising and in direct mail, we don’t make any assumptions about the people who don’t respond. We’re too busy concentrating on the people who do respond.
What can you do to improve your response rate?
Here are some tips for improving your response rate.
- Ask fewer people. Choose a small sample, make sure that those people know that they have been specially selected, and spend a bit of time and effort on following up with each of them. Feeling special makes people more likely to respond.
- Ditch the prize draw and use the money for an incentive that they get before filling in the questionnaire. A dollar bill sent with a mail survey gets a better response rate than ten dollars guaranteed on returning the survey. Prize draws have little or no effect on response rates.
- Make the questionnaire SHORT (yup, I’m yelling ‘short’ at you). Longer = more offputting.
- Make the questionnaire interesting. You may even have to resort to a little humour (but be careful when you test it).
- Test, test and test again. That’s how you’ll find out whether it’s short enough and interesting enough to get a response.
- Read a good book on the topic. I recommend Don Dillman (2000) Mail and Internet Surveys: The Tailored Design Method. Then do what he says.
What can you do with the data if you get a low response rate?
If you fear that you will get a poor response rate no matter what you do, then you can extract some value from the data by reducing your survey to a few open questions. Read the comments that people give you and think about them. You can’t tell whether they are at all representative of your population, but you’ll probably find that they do offer some interesting insights.
Survey: the entire process of defining a research question, developing a questionnaire that explores the question, finding a suitable sample of potential respondents, administering the questionnaire and then doing something with the responses.
Questionnaire: a series of questions, often with fixed spaces or choices for answers. Can be offered on paper, as an interview, or electronically.
Response rate: the percentage of people approached who respond with a filled-in questionnaire.
Thanks to Jurek Kirakowski for many discussions on this and related topics. The opinions expressed in this article are my own.
Image, Census Day by PaulSh, creative commons licence