Surveys – what is an acceptable response rate?

It’s been a while since I ranted about response rates on surveys. In that article, I took the view that “2% is a terrible response rate”, gave a few reasons why, and offered some tips for doing better. Recently, I’ve had a couple of challenging questions on the topic. So here’s my attempt at answering them.

What is an acceptable response rate?

First of all, if a 2% response rate is no good: what should you aim for?

The answer comes in two parts: first, the question of non-response bias; second, an actual percentage.

Non-response bias happens when the population who do not respond display characteristics that are different to the population who do respond.

If you’ve got a properly random sample, and the people who respond are just as random within your sample, then you still have a random sample no matter how small the response. Indeed, you might even accept a 2% response rate.

But in real life it doesn’t work like that. With a very low response rate, you’re extremely likely to find that the people who do respond are unusual in some way: a massive non-response bias.
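Here’s a minimal sketch of the effect, in Python, with entirely made-up numbers: satisfaction scores for 1,000 customers, where the happier someone is, the more likely they are (by assumption) to answer the survey.

    # A minimal sketch of non-response bias, using made-up numbers.
    # Assumption: happier customers are more likely to respond.
    import random

    random.seed(42)

    # True satisfaction scores (roughly 0-10) for the whole population.
    population = [random.gauss(5.0, 2.0) for _ in range(1000)]
    true_mean = sum(population) / len(population)

    def responds(score):
        # Response probability rises with satisfaction (clipped to 2%-98%).
        return random.random() < min(max(score / 10, 0.02), 0.98)

    respondents = [s for s in population if responds(s)]
    observed_mean = sum(respondents) / len(respondents)

    print(f"True mean satisfaction:  {true_mean:.2f}")
    print(f"Mean among respondents:  {observed_mean:.2f}")
    print(f"Response rate:           {len(respondents) / len(population):.0%}")

The respondents’ average comes out noticeably higher than the true average, because the unhappy customers stay silent. Collecting more responses from the same skewed pool doesn’t fix that.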

Whatever your response rate, you need to devote some thought and investigation to the reasons why some people didn’t respond. Most of all, you need a clear understanding of who you asked and who didn’t respond – preferably contacting some of them to ask why. And the lower the response rate, the bigger the task of finding out why people didn’t respond.

So that brings us to the actual percentage response rate you should be looking for. Personally, I aim for 70% to 80%. You may take a different view. If it’s a difficult-to-reach population and an onerous task (for example, a survey that requires more than one round of questionnaires with some other task or intervention in between), then you might accept 40% to 50%.

You need to balance these two factors:

  • the effort required to get people to respond
  • the effort required to find out why they didn’t respond (plus the risk that you’ll find some horrible non-response bias that destroys your whole survey).

How to get a good response rate

Don’t mistake me: getting a good response rate isn’t easy. You have to do all the steps thoroughly:

  • investigate why you want the data
  • interview respondents to find out what they might want to tell you
  • iteratively pilot the questionnaire itself, repeating until you’ve got something that gathers the data that will be useful and reflects what respondents want to say
  • understand the target population and devise a good sampling strategy
  • prime the respondents with an appropriate pre-questionnaire call to respond
  • administer the questionnaire (according to whatever methods you have trialled in the pilots)
  • track your responses and follow up with non-respondents (a minimal tracking sketch follows this list).
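For that last step, something as simple as two lists will do. A minimal sketch, assuming you keep a record of who was invited and who has replied (all names hypothetical):

    # A minimal response-tracking sketch: who was invited, who replied.
    invited = {"ana", "ben", "carla", "dev", "ed", "fay", "gus", "hana"}
    responded = {"ana", "carla", "ed", "fay"}

    non_respondents = sorted(invited - responded)
    rate = len(responded) / len(invited)

    print(f"Response rate: {rate:.0%}")                    # 50%
    print("Follow up with:", ", ".join(non_respondents))   # ben, dev, gus, hana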

Which all contributes to one of my little secrets: it’s far more usual for me to talk a client out of running a survey than to end up running one. We do the ‘investigate’ and ‘interview’ steps – and that’s often enough for most business purposes.

How many responses do you need?

But let’s say you’ve got some burning business reason why you really need that survey data. Experience (and the research literature) suggests that you get a better response rate by choosing a smaller sample. So that led to this question:

“I could send out surveys to 20 people and get 15 responses for a 75% response rate, or send out 1,000 surveys and get 100 responses for a 10% response rate. The response rate is lower in the second case but I have more responses overall. Do you know how you would measure/deal with this trade-off? Does it have to do with the estimated size of the population you’re trying to study?”

There are three components to this answer.

  1. It does depend on the size of the population, to some extent. If the population is small, you run the risk of asking nearly everyone, which in itself depresses the response rate. But let’s assume that you’re planning a sample of no more than 10% of your population.
  2. The statistical theory: all the things about response rates and non-response bias that we’ve just been thinking about. That clearly suggests that 20 people and 15 responses for a 75% rate is better – and contacting the remaining 5 to find out why they didn’t respond will probably be quite easy.
  3. The ‘face validity’. Meaning: will your client / manager / colleagues think they’ve been short-changed? There will always be plenty of people who think that 100 responses are ‘more valid’ or ‘a better indicator’ than 15 – no matter how carefully you explain your statistical arguments (see the quick calculation after this list).
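To see what the statistics say about that trade-off, here’s a back-of-envelope comparison. It assumes a simple random sample and a worst-case estimated proportion (p = 0.5), and ignores the finite population correction – so treat it as an illustration, not a verdict:

    # Back-of-envelope 95% margin of error for an estimated proportion,
    # assuming a simple random sample (no finite population correction).
    import math

    def margin_of_error(n, p=0.5, z=1.96):
        return z * math.sqrt(p * (1 - p) / n)

    for n, rate in [(15, 0.75), (100, 0.10)]:
        print(f"n = {n:>3} (response rate {rate:.0%}): ±{margin_of_error(n):.1%}")

    # n =  15 (response rate 75%): ±25.3%
    # n = 100 (response rate 10%): ±9.8%

On paper, the 100 responses are far more precise. But that precision only describes the people who answered: with 90% non-response, you have very little idea who the tighter interval actually represents – which is the statistical argument for the smaller, well-chased sample.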

So I’d recommend being pragmatic. Maybe survey 20 people, get your 15 responses at a 75% rate, and treat that as a pilot. Find out why the remaining 5 didn’t respond, run your analysis, think about the decisions you’d make on that data – then decide whether it’s worth having another go with a bigger sample.

My thanks to Kurt Reyes and Stephen S. for asking me challenging questions – and taking the time to discuss them.

Further reading: Don A. Dillman (2007), Mail and Internet Surveys: The Tailored Design Method

This article first appeared in Usability News



#surveys #surveysthatwork