Chapter 3 Questions: Write and Test the Questions

In the chapter on Questions, I look at what you want to ask, what people want to tell you, and how to write good questions that people can answer.

The topics in the chapter are:

  • The four steps to answering a question
  • Good questions are easy to understand
  • Good questions ask for answers that are easy to find
  • Good questions are comfortable to answer
  • Good questions make it easy to respond
  • Test your questions in cognitive interviews

The error associated with this chapter is measurement error, which happens when the answers you get do not align with the questions you ask.

I couldn’t give all the appropriate origins and suggestions for further reading in the chapter, so here they are.

Origins of my four-step model of questionnaire response

If you’re a keen reader of the survey methodology literature, you’ll have spotted that I’ve adapted the four-step model of questionnaire response. I use the steps “understand, find, decide, respond” whereas the model presented in The Psychology of Survey Response (Tourangeau, R., L. J. Rips and K. A. Rasinski, 2000) uses the terms “comprehension, retrieval, judgement and response”.

Tourangeau, Rips and Rasinski’s model has nouns (“comprehension” and so on). I tried using the nouns when teaching survey classes and the people who attended found them quite confusing. I swapped to verbs and found that they worked better, particularly for “retrieval” and “judgement”. If you’re writing an academic paper, I’d recommend reading their book – it’s short and engaging – and from then on using their terms.

Don Dillman and his colleagues use a five-step model, with “perceive” as the first step. The current edition of their book is Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method (Dillman, Don A., Smyth, Jolene D. and Christian, Leah Melani, 2014). Including “perceive” aligns better with the Web Content Accessibility Guidelines (WCAG) 2.1, which use “perceivable, operable, understandable, robust”, but I felt that trying to wrench a model of questionnaire response into alignment with a model that’s about using websites in general was a step too far. In the text, I’ve aimed to bridge the gap between “perceive” and “understand” by referring to “read and understand”.

In practice, “perceive” tends to be a topic mainly for the questionnaire builder rather than for the question designer – and I chose to separate the questionnaire building part into a separate chapter.

But I thought you might enjoy this example of bad practice at “perceive” or “read”: a screenshot that didn’t make it into the book. It’s a classic where someone decided to add colour-coding, equating bright green with “Very easy” and bright red with “Very difficult” – thus making those two extremes harder to read. At least they kept the text on each option, so people who are colour blind have the text to go on; that also helps those of us who do not automatically associate green with easy. I haven’t tested this design of question, and I would also want to know whether people realise that the long coloured rectangles are clickable – a “respond” issue. This overlap between “read” and “respond”, or “perceive” and “response” if you prefer the noun-based models, is another example of how the challenges of creating surveys span chapters.

Screenshot of a question: “How easy or difficult was it for you to handle your issue with xxxx?” (I’ve blurred the organisation’s name). The five answer options range from “Very easy” to “Very difficult”, but the questionnaire tool designer has chosen to colour-code them from dark green to dark red. The contrast on the two extremes is poor.
The colour-coding on this question means that the colour contrast on the two extremes is very poor, making the text hard to perceive.

More reading on question and questionnaire design

My recommended place to start on the survey methodology literature around questions and questionnaire design is Question and Questionnaire Design (Krosnick, J. A. and S. Presser, 2009), especially as it’s available as a download. With over 20 pages of references – at least 200 citations – there’s enough there to keep you happy for weeks, especially if you follow up more recent citations of the citations.

The book that really got me interested in what survey methodologists say about questions is William Foddy’s Constructing Questions for Interviews and Questionnaires (Cambridge University Press, 1993). Although these days I recommend Tourangeau, Rips and Rasinski’s The Psychology of Survey Response ahead of this one, it’s an interesting read if you want to dive a little deeper into writing good questions.

More about the curve of prediction

Although I’ve made the point in the book that there is very little curve of prediction, Jim Lewis drew my attention to the literature on the relationship between intention and behaviour (Theory of Reasoned Action; Technology Acceptance Model). He pointed out: “this doesn’t mean that 100% of people who express an intention follow through, but it is far more likely that those who strongly express the intention will follow through than those that do not”. Two articles to find out more:

More about cognitive interviewing

Gordon Willis’s textbook Cognitive Interviewing: A Tool for Improving Questionnaire Design (Willis, 2005) is based on his many years of teaching cognitive interviewing in practice and is very good.

For a more recent book with lots of tips, written by a team of survey methodologists with years of cognitive interviewing experience, try Cognitive Interviewing Practice, edited by Debbie Collins (Sage, 2015). There’s an example of a cognitive interview in a formal academic setting here: Cognitive Testing Interview Guide.

Translating questionnaires

I did not discuss the topic of creating a questionnaire in one language and then recreating it in one or more other languages. I think a good place to start is this special issue of the International Journal of Translation and Interpretation (Vol 10, No 2, 2018).