Chapter 2 Sample: Decide how many people to ask and how to find them

On this page I’m including sources for the data I quote in the ‘Sample’ chapter, plus additional information and further reading on response rates, visualisation of small samples, and setting up interviews.

Sources of data on response rates – National Statistical Institutes 

I chose a few national statistical institutes (NSIs) and looked at their response rates. Nearly every country in the world has an NSI.

As far as I’ve been able to find out, the US is the only country that has several; two of the best known are the Census Bureau and the Bureau of Labor Statistics.

The US Census Bureau publishes response rates for its surveys. For example: 

Statistics Netherlands publishes many of its reports in English. Their report on Response enhancing measures for social statistics mentions that they typically get response rates of around 65%:

The Australian Bureau of Statistics managed to improve its response rates between the 2006 and 2011 Censuses, bucking the general trend of declining response rates. They got over 95% response (under 5% non-response) in all states and territories other than the Northern Territory – an area with few roads and a relatively high proportion of Indigenous people, who may move around – where they still managed over 92% response (under 8% non-response):

Sources of data on response rates – academic surveys

I chose some examples of academic surveys, starting with one of the biggest. The European Social Survey “measures the attitudes, beliefs and behaviour patterns of diverse populations in more than thirty nations”. It is run by a consortium of academic institutes and universities. The 2016 response rates varied from 31% in Germany to 74% in Israel. 

insert image here

Response rates in the European Social Survey

Response depends on perceived effort

There’s a further consideration about perceived effort that I decided not to include in the chapter for reasons of space. Perceived effort and reward can affect response quality: there’s the phenomenon of “satisficing”, where respondents give answers that are merely good enough rather than putting in the effort to answer carefully.

More about responsiveness and response rates

Elizabeth Martin, a leading survey methodologist, tackled the problem of representativeness in her 2004 Presidential Address to the AAPOR (American Association for Public Opinion Research). 

“Low response rates do not mean that nonresponse bias is present, but they leave surveys more vulnerable to its effects if it is present” (Martin 2004). 

Ineke Stoop’s PhD thesis dives into this topic in more detail (Stoop 2005). Considering it’s a thesis, it’s very readable – and it’s got my favourite thesis cover ever. 

insert image here

The Hunt for the Last Respondent: Nonresponse in Surveys by Ineke A. L. Stoop 

One example: in one study, paying a higher incentive to increase the response rate achieved a higher response from women and the over-50s, but not from other groups (Moore and Tarnai 2002).

Visualisation of small samples 

It can be difficult for some of us to grasp that the patterns we see in small samples arise from sampling variability, not necessarily from the underlying distribution. This visualisation shows a set of samples all drawn from the same normal distribution: as histograms, they look like all sorts of shapes. Plotted as xy charts they look less variable, but still with lots of apparent outliers (which are not outliers at all):
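The effect is easy to reproduce yourself. A minimal sketch using NumPy (an assumption – the original visualisation may have been built differently): draw several small samples from the same standard normal distribution, bin each into identical histogram bins, and notice how different the shapes look from sample to sample.

```python
import numpy as np

rng = np.random.default_rng(42)

# Draw several small samples from the SAME standard normal distribution.
n_samples, sample_size = 6, 20
samples = rng.normal(loc=0.0, scale=1.0, size=(n_samples, sample_size))

# Bin every sample into the same bins so the shapes are comparable.
bins = np.linspace(-3, 3, 7)
for i, sample in enumerate(samples):
    counts, _ = np.histogram(sample, bins=bins)
    # With only 20 points per sample, the bin counts vary a lot between
    # samples even though the underlying distribution never changes.
    print(f"sample {i}: counts per bin = {counts}, mean = {sample.mean():+.2f}")
```

Re-running with a larger `sample_size` (say 2,000) makes the histograms settle into the familiar bell shape, which is the point of the visualisation.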

Setting up an interview 

For ideas about how to do interviews, start with Andrew Travers’s book “Interviewing for research”.

It’s practical, thoughtful advice, with the added benefit that he decided to make it free to download a few years ago.

It’s always important to check that whoever you involve in research is comfortable with what you’re planning to do. The UK local government organisation Hackney Council has a consent form that is clear and can be adapted to a variety of types of research: