Surveys: an interview with Gerry Gaffney

My friend and co-author of my forms book, Forms That Work, interviews me about the surveys book I’m now working on. The original podcast is here: http://uxpod.com/surveys-an-interview-with-caroline-jarrett/

 

A transcript of the interview follows below.

Gerry Gaffney: This is Gerry Gaffney with the User Experience podcast. I’m very pleased to have my friend and colleague Caroline Jarrett joining me today. And Caroline, it’s a very auspicious day. I’m not sure if you’re aware but it’s two years since our book came out.

Caroline Jarrett: Oh, really?

Gerry: Yeah.

Caroline: First of all, it’s a pleasure to be here and always fun to have a chat with you, Gerry. But is it actually two years? Wow!

Gerry: Well, I’m looking here at Amazon and it says November 26th, 2008. And the book I’m referring to of course is called Forms That Work: Designing Web Forms for Usability. It was very interesting working with you on that because you love forms and I hate them.

Caroline: I think you’re right. It’s not that I love the forms; it’s that I love working with them, because I really like it when I can actually achieve a better product, and forms tend to be pretty crummy so it tends to be relatively easy to actually get something better out of them. So it’s not the forms themselves I like so much as the opportunities I get from working with them.

Gerry: And you’re currently working on another book. Forms That Work is still out there and it’s still selling well and I think all the advice there is current because we specifically steered away from technology-specific elements at the time that it was written. But you’re now working on a book on survey design. Is that correct?

Caroline: Yes, I don’t know why it seemed like a good idea to write another book, because you’ll remember that writing the Forms book was pretty hard work over many years. So I ask myself why I’m going to be writing another book. But it was simply that forms and surveys are sort of close friends… they’re both essentially things that ask you questions and have spaces for typing in the answers, and also in writing the Forms book I ended up doing quite a lot of research in the survey literature.

So there’s very little to read about why people answer questions in the forms literature, simply because not that many people have written very much about forms. But surveys have been extensively researched for pretty much 100 years, and so, not that I’ve read anything like all of that, but having read quite a bit about surveys I found that on a few internet lists and so on I was quite often answering questions. If someone asked about a survey then I might very well answer, and gradually one or two people said to me, you know, you should think about writing a book on that. And then Ginny Redish, who I’m honoured to count as my mentor, said to me “you really ought to write a book on surveys”. And I’ve learnt over the years that when Ginny tells you to do something the answer is “Yes, Ginny,” and you just knuckle down and get on with it. And so I did start to get on with it, and that’s how I’ve sort of got into surveys, and actually I’m really enjoying it. I’m really enjoying the process of gathering the data and getting to grips with this book.

Gerry: So tell me, what’s a survey? When should one use a survey and when should one not use a survey?

Caroline: Come on, Gerry, one question at a time please. [Laughter.]

Gerry: Oh, it’s alright. I could repeat them to you later on. So… okay what’s a survey?

Caroline: Well, what’s a survey? That’s actually not that straightforward a question, because people use the word survey for two different things. One is what the survey methodologists call “the instrument”, which is to say the actual entity that you interact with, whether it’s a telephone interview where someone’s reading out questions or, if it’s on the web, you know, the website that you type the answers into.

So that’s the thing that the methodologists call “an instrument” but you or I might call it a questionnaire or indeed a survey. And then the other way that people use the word survey is to mean the entire process of gathering data through a quantitative method which involves asking users questions and getting them to answer. So that would be, the survey would then include things like establishing the goals of the survey, deciding on your sample, creating an instrument, running pilots, analysing the results, doing something about it, so the entire process from beginning to end.

Gerry: And when is it appropriate to do a survey at all? Because we certainly see in practice that people want to do surveys for all sorts of occasionally bizarre reasons.

[Image: a crowd. Know your audience: image by Naparazzi, CC licence]

Caroline: Generally most surveys are pretty ill-conceived, I’m afraid. You know, the idea is, oh, we’ll just knock up a few questions and stick them out on the internet and that will give us our answers. We all see those sorts of surveys every day, I should think, and they tend to be pretty poor. So generally a survey is most appropriate when you already know your users extremely well and you’ve pretty much maxed out on the amount of face-to-face or interview-type research you can do; you understand users’ motivations, you understand what they’re about, you understand your website or whatever product it is you’re working on, but what you really need to do is get some hard numbers. So perhaps you know that there are five really crucial tasks for your website and you want to know how many people want to do each of those five.

So surveys are really good for how many and they’re much less good for why. Indeed if people say, well, we don’t have time to talk to our users [and] we’ll do a survey instead, that’s pretty much doomed.

Gerry: Just to reiterate, before we do a survey we should already know our users quite well and we should have a fairly well bounded set of things we’re trying to find out, is that right?

Caroline: That’s the ideal. So this is one of the challenges for me in the book, which is there’s really an ideal survey process which says you think really carefully about what you want to ask, you already know your users really well, or you go and find out about them. You do tons of different types of testing, and so on and so forth. And it’s a long, complicated process, but you can get really good statistically robust results out of it which you could indeed make major business decisions on.

But the other type of survey is the realistic one, where you’re going to have to do a survey whether you like it or not, and so then it’s about trying to extract the best value from the money that’s going to be spent anyway. And so one of the challenges I’m working on at the moment, in thinking about the structure of the book, is how to make it useful both for the people who are trying to rescue a survey that’s going to happen anyway and for the people who are in a really good place with their user research, where the survey is really going to add value to something that they’re already doing very well.

Gerry: So Caroline, once you decide that you do want to do a survey, how do we go about choosing our respondents or our audience and recruiting them, and what sort of things do we have to watch out for?

Caroline: Oh, look, this is one of the most interesting questions that I’m grappling with at the moment in writing the book. There is what in my mind I would call classic survey methodology, and then there’s the everyday reality. In classic survey methodology what you do is you determine what population you want to sample and then you establish some kind of sampling frame, which is a way of getting at the population, and you do all sorts of elaborate things that end up meaning you have a random sample: you know exactly who you’ve sent the survey to, you monitor the response rate and you do various things to try and get the best possible response rate from your sample. And then the statistics really work for you, because all the statistical analysis is really based on getting a random sample and knowing what your response rate is, and then you can get robust statistical data from really small numbers. I mean, on the order of 500 responses would be more than adequate from a random sample to give you very, very good robust statistics.

Gerry: And I might point listeners at a recent podcast interview with Bill Albert, where he spoke about statistical analysis for remote usability testing. Do you know Bill? Have you looked at his work at all, Caroline?

Caroline: Oh yeah, and I can’t recommend that book too highly. I was very privileged to have the opportunity to read it when it was in manuscript, and it was one of the easiest times I’ve had with any book, because I do get to read some books in manuscript, and theirs I just read and went, oh, this is great. And I’ve enjoyed reading it again since, so I totally recommend that book. It’s excellent. So that’s classic sampling.

Now, coming back to my dilemma with the book: what happens in reality is that an inordinate number of surveys are actually sent on what I call a “send and hope” basis. So you send it out to anyone you can think of and you hope you get some responses back. Practically any list that we subscribe to will have a request to fill in a survey perhaps once a week, or at least once a month: a PhD student will say, you know, please fill in this survey and tell all your friends and relatives about it. So that’s a classic “send and hope” sample, where you’ve got no idea who it’s gone to. You pretty much know it’s got some built-in bias because of the route that it’s got to you, and yet this is a very heavily used method, and indeed practitioners whom I respect and have a lot of time for are using “send and hope” sampling very successfully, although it really shouldn’t work. So something I’m grappling with at the moment is how best to present those two very different approaches: scientific probabilistic sampling and “send and hope” sampling.

Gerry: And obviously that’s not something we’re going to resolve in a brief conversation today, but it gives us a flavour of the sorts of things that you’ll be covering off in the book. Can I side-track you…

Caroline: That sounded as though you thought blimey, that’s a boring question.

Gerry: No, no, no. [Laughter.]

Caroline: What I would really like to do though is to say, if people listening to this don’t feel utterly bored by that question and have any insights into it, I’d be thrilled if they’d get in touch and discuss it with me, either off-line or on my blog, which I haven’t really got started yet but soon will. I’ve got a blog on the Rosenfeld Media site, which is a place where we can have conversations in public about these sorts of things, even if it’s just to say, you know, Caroline, I’m just not interested in that topic, it sounds really boring, leave it aside.

Gerry: Caroline, I’m going to ask you a really specific question; how many points should there be on a Likert scale?

Caroline: Well, that is actually the bit of the book that I started with, because I thought it would be relatively easy. I thought, okay, this is a question I know we need to answer, and so I went out on the internet and I hit Google Scholar and all that and found a few references, and I was very lucky to come across a paper by Krosnick and somebody else, I forget but I’ll let you know [Gerry’s note: Krosnick & Presser’s Question and Questionnaire Design, PDF, 360 KB], which was a literature review. I thought, okay, I’ll just look at the literature review on that particular point, and there were 87 papers mentioned. So I’m still battling through them.

But [it’s] one of the most researched topics in survey methodology, starting with Mr Likert himself in 1932, and the answer is that it’s probably best to go with five or seven points if it’s something that can go from negative to positive. If it’s something that can go from zero to really positive then you can choose the number of points you like, but four, five or six seem to be popular choices.

It seems that respondents tend to only use about five points anyway, though they may quite like to fit the five points they actually use into a longer scale. No matter how many points you give them, when they’re answering they tend to confine themselves to about five of those points.

Gerry: So listeners, pick five and that’s a good answer, yeah?

Caroline: Well it works for me.

Gerry: Okay, and no more prevarication. I love the ones you get in hotels where they ask you to rate the service and the scales are “poor”, “fair”, “good”, “excellent”.

Caroline: And it’s interesting because I think there might be a cultural thing going on there. For Brits, Irish and Australians, “fair” is regarded as an adequate answer, not a negative one. But I think some people regard “fair” as being quite negative, so I’m still working on that one.

Gerry: You talk in the book I know about how the meanings of things change in context. Do you want to talk a little bit about that?

Caroline: Well, that’s one of our favourite things, isn’t it? And in terms of surveys, one of the best stories I heard was about someone who wanted to do a survey about how impoverished households were, and they used a survey from a western country, let’s say the UK, and one of the questions was “do you have access to running water?” And of course to me, and I suspect to you, the concept of “running water” means water coming out of a tap in the house which you can turn on and off, and when they tested the survey in some parts of Africa it turned out that the people in that rather impoverished part of Africa interpreted “running water” as meaning a river.

So it shows how the meaning changes in the context, in that case it was a context of very different concepts of what “running water” might mean.

Gerry: Now, Caroline, how do you maximise the number of people who will respond? Do you give them money or incentives or… entries in prize draws or what?

Caroline: Well, the conventional wisdom from the survey methodologists is that prize draws don’t make any difference, and there’s some very good, repeatable research from the era of postal surveys: if you guarantee ten dollars to be sent on return of the mail survey it makes about one percentage point of difference, if you have a prize draw it doesn’t make any percentage point of difference, and if you enclose a dollar bill in the envelope with the mail survey it makes a definite difference. And this is the basis of Don Dillman’s social exchange theory, which is to say that by giving people the dollar bill… a lot of people will be quite cynical, oh, I’ll just pocket the dollar bill and throw the rest of the thing in the recycling, but actually a lot of people aren’t like that. A lot of people think, oh well, they trusted me with the dollar bill, I’ll fill in the survey in exchange for that.

So it’s all about immediate perceived reward compared to the amount of effort, you know, the dollar bill won’t help you if the survey’s absolutely frighteningly massive.

Gerry: Many of our listeners will already be doing surveys of course and you’ve already castigated them for doing it wrong.

Caroline: Have I? [Laughter.]

Gerry: No, not at all. But what sort of incentives can you give online? If I’m working for a small… organisation, how can I get the same effect?

Caroline: One of the most interesting things you can offer people is to see the results. Often people fill in the survey because they’re interested in the topic, and if you can offer to send them the results then that’s a really powerful “we trust you, we’ll send you the results”, and it’s effectively at no cost to you. Another thing that can work is to provide something that’s free but saves some effort. So Rob Burnside, who was at the Australian Bureau of Statistics (I think he may have retired now, I’m not quite sure), did a survey there, and as a government organisation they couldn’t provide any financial incentive. In some cases it’s a legal obligation to fill their surveys out, but this was an example of one that wasn’t a legal obligation, and they provided the URL of an interesting report, which was something available entirely for free anyway. So people could in theory have gone out and found it for free, but they knew it would be interesting to that group of respondents, and they saved them the effort and indeed showed them some respect and trust by saving them some effort.

I’ve yet to find people who have figured out a way of doing the dollar-bill-in-the-envelope thing on the internet, so I’m looking forward to seeing how people are actually handling incentives.

Gerry: Caroline, I was very interested to see in the slides that you sent me that you’re using Wordle to get … well, tell us what you’re using Wordle for I guess.

Caroline: Yeah, I’ve tried Wordle and it sometimes works really well and sometimes doesn’t. I was doing a survey recently, working with the Open University, and Anne Jelfs, one of the researchers there working in the Institute for Educational Technology, suggested a really simple idea which I think is terrific and which I’d like to recommend to people. We were surveying students, finding out whether they were on Facebook and whether they were using Facebook in connection with their studies at the Open University, and she suggested that we have a couple of questions: one would say please give us three words about Facebook, and you simply had to write three words, and the other would say please give us three words that describe the Open University for you, and similarly you had to write three words.

And it turned out that it was quite easy to separate Facebook rejecters from Facebook acceptors when I was analysing the data, and so from there I was able to do a Wordle of the words used by Facebook acceptors and compare it to a Wordle of the words used by Facebook rejecters. And they’re very, very different, you know. Facebook acceptors tend to use words like “social” and “enjoyment” and so on, and Facebook rejecters tend to have words like “privacy” and a whole bunch of different things. And then there’s some overlap: both groups actually use “time consuming” a lot. So those made great Wordles, and then I thought, well, this is great, so let me try the same thing elsewhere. I happened to have access to a survey that the Usability Professionals’ Association did a few years ago on the topic of certification, and I was part of the team that developed and analysed that survey. So I went back to that old data and tried a Wordle on it, and it didn’t work. Both the people who were in favour of certification and the people who were very much against it used practically the same words. So, you know, it didn’t show anything. It’s one of these things that’s almost free: it would take you a minute to throw some data into Wordle and see what comes out, and whether it’s going to give you some insight or not.
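As a rough illustration of the kind of comparison Caroline describes, here’s a minimal Python sketch that tallies the “three words” answers for two groups of respondents. The file name, the column names and the acceptor/rejecter coding are hypothetical stand-ins, not the actual Open University data; a tool like Wordle simply draws its cloud from this sort of word-frequency list.

```python
# A minimal sketch, assuming a hypothetical CSV of survey responses with a
# "facebook_attitude" column ("acceptor"/"rejecter") and a free-text
# "facebook_words" column holding each respondent's three words.
import csv
from collections import Counter

def word_counts(rows, word_column):
    """Count the words one group of respondents gave, case-insensitively."""
    counter = Counter()
    for row in rows:
        for word in row[word_column].lower().split():
            counter[word] += 1
    return counter

with open("survey_responses.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

acceptors = [r for r in rows if r["facebook_attitude"] == "acceptor"]
rejecters = [r for r in rows if r["facebook_attitude"] == "rejecter"]

# Compare the two vocabularies side by side, most frequent words first.
print("Acceptors:", word_counts(acceptors, "facebook_words").most_common(10))
print("Rejecters:", word_counts(rejecters, "facebook_words").most_common(10))
```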

Gerry: Yeah, it’s interesting because, you know, I came across it with some work I do for the Department of Education New South Wales. James Hunter was using it there and our old friend Faruk Avdi looked at one of the Wordles and, he’d be much too polite to say it, but his eyes sort of said, or his demeanour said “that’s bullshit”, well to me anyway [laughter] and I think it can be. And the one you mention, the UPA, I’m looking at that now and you would look at that and think, well that tells me precisely nothing. But we should mention that what Wordle does is basically it generates like a tag cloud based on the vocabulary that you pour into it.

Caroline: It’s a fun format that’s very visual and appeals to some people, but like you say, the examples I gave in that tutorial showed two that were quite usefully contrasting and two that were indistinguishable.

Gerry: Caroline, do you follow Gerry McGovern’s work at all? And I know for example that he does quite a bit of work for Rolf Molich, who you know quite well I think.

Caroline: Oh, yeah, yeah.

Gerry: But Gerry has come out with this, you know, “customer carewords” thing and I was thinking about you when I read his most recent newsletter (and he is a former interviewee on the podcast as well) but he talks about giving people a list of 100 possible tasks and getting them to vote for the ones that are most relevant to them. So I guess in a way it’s a survey technique but 100 seems like an awful lot.

Caroline: Oh, it definitely does. I think you’d get lots of primacy and recency effects. He must be doing some sort of randomisation, otherwise the first few would be the ones people vote for; they’d get bored and tired working down that list, I would think.
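For what it’s worth, the per-respondent randomisation Caroline speculates about is easy to do. Here’s a minimal sketch, assuming each respondent simply sees the 100 tasks in an independently shuffled order; the task names and the seeding scheme are purely illustrative.

```python
# A minimal sketch of per-respondent randomisation to soften primacy and
# recency effects: every respondent sees the task list in a different order.
import random

tasks = [f"Task {i}" for i in range(1, 101)]  # stand-ins for the 100 real tasks

def task_order_for(respondent_id: int) -> list:
    """Return the task list shuffled deterministically for one respondent."""
    rng = random.Random(respondent_id)  # seeded so each order is reproducible
    shuffled = tasks.copy()
    rng.shuffle(shuffled)
    return shuffled

print(task_order_for(42)[:5])  # the first five tasks shown to respondent 42
```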

Gerry: Well he says, and I’ll forward you his latest newsletter when we finish, but I’m paraphrasing here obviously, he says the experts tell him it shouldn’t work but that all his data over many, many years shows that it does work. And he says one of his colleagues explained it as some sort of cocktail party effect whereby the relevant stuff sort of jumps out at you, and he reckons it’s worked very well for him.

Caroline: You know, that’s really fascinating because I’ve got a great deal of time for Gerry McGovern, and in fact I thought the podcast you did with him was terrific and very much one that people should listen to.

Gerry: Yeah, I loved his enthusiasm Caroline, you know when he said that the reason he got interested in the whole internet stuff was that when he was a kid he loved Western movies and he said if he ever saw the wagons going west he was going to hop on them, and when the internet came about that’s what he saw and he hopped on that wagon. I thought it was a lovely analogy.

Caroline: He writes really well, and he’s got a lot of insight. So, you know, to me this is really part of the excitement of doing the book: because I’m going out and trying to find out as much as I can about what people are really doing in practice, I’m finding that what people do and what works for them doesn’t necessarily line up with what the survey methodologists would say is best practice, and that could be for a lot of reasons. I don’t doubt for a minute that the survey methodologists know what they’re talking about, but they’re working in a different domain; the sort of things that you might need to do to create highly robust surveys on which government policy might be based might be different from what we need as user experience practitioners. So if someone like Gerry has got a technique and it works for him, well, that’s definitely something that I’ll enjoy looking into and finding out about.

Gerry: But if we can backtrack a little bit, we’ve considered what we should know before we create a survey, we’ve talked about some of the mechanics of thinking about how to engage people and perhaps how to, to use a horrible word, incentivise.

Caroline: It’s a word I try and avoid, you know; it’s horrible, isn’t it?

Gerry: It’s like leverage.

Caroline: By incentivise do you mean persuade or reward?

Gerry: Pay, I think. [Laughs.]

Caroline: Pay, oh pay, okay, yeah, carry on then.

Gerry: Okay, so then we get all our data back, and we think, oh, what’s this telling us? Any pointers for what people should be thinking about at that point?

Caroline: You know, with any survey I’ve ever done, what you think at that point is, oh no, I didn’t ask my questions quite correctly. [Laughter.] I wish I’d done more testing of the survey. But, leaving that aside, that’s another thing that I’ve been really asking people: you know, how do you analyse your data and what do you do? And in fact Bill Albert’s book is very good on this… For me personally there are two tried and trusted tools which are my way to go. One of them is cluster analysis, otherwise known as getting your sticky notes out and sticking them in groups on the wall. And the other one is the Excel pivot table, which is putting everything into great Excel spreadsheets and then shoving in the pivot tables to see what drops out. I can’t claim that Excel pivot tables are the easiest thing in Excel to work with, but with a little bit of persistence in getting to grips with them you can often see quite a lot in the data. And then, for me, I’m borderline dyscalculic, I mean I have real problems with actual numbers, so I have to put everything into charts to see the relationships between them.

So pour it into an Excel spreadsheet, throw a few pivot tables at it and then start drawing charts to compare this with that.
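As a sketch of that workflow in code rather than Excel, here’s what the pivot-and-chart step might look like with pandas. The file name and the “role”, “satisfaction” and “respondent_id” columns are assumptions for illustration, not a real dataset.

```python
# A minimal sketch: load responses, pivot them, then chart the pivot so the
# relationships are visible rather than just numeric.
import pandas as pd
import matplotlib.pyplot as plt

responses = pd.read_csv("survey_responses.csv")

# How many respondents of each role gave each satisfaction rating.
pivot = pd.pivot_table(
    responses,
    index="role",            # rows: one per respondent role
    columns="satisfaction",  # columns: one per rating point
    values="respondent_id",
    aggfunc="count",
    fill_value=0,
)
print(pivot)

pivot.plot(kind="bar")
plt.ylabel("Number of respondents")
plt.tight_layout()
plt.savefig("satisfaction_by_role.png")
```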

Gerry: Have you used tools like nVivo or… I guess they’re more for freeform data aren’t they?

Caroline: You know, I haven’t at all, so that’s another thing I’d like to find out about, and no-one I’ve talked to so far has mentioned them either, so that’s another area where I need to do a bit more research and find out what people are using. Have you used them?

Gerry: Well, I’ve used nVivo. To be honest I used it because I had a client who said we’re scientists and we don’t, you know, we don’t respect sticky notes because there’s no apparent methodology, but if you put everything into nVivo and code it, we’ll respect it then.

Caroline: Oh isn’t nVivo just like an electronic way of doing the sticky notes?

Gerry: It is but it costs a few grand and it has got credibility built in as a result of that, I think.

Caroline: There you go then. Excellent. Well, sometime later, and perhaps not right in the middle of a podcast, I’ll ask you again, or perhaps I should do a reverse podcast with you about your experience with nVivo and what you thought of it.

Gerry: Yeah, I was actually quite impressed. I’m not sure the extent to which it’s applicable for the sort of work that I do but it’s definitely powerful and there’s a couple of tools in that space. I’m by no means an expert in that area, I have to say totally upfront, but it’s a whole interesting area.

Caroline: I think we also need to be practical. You know, it’s been thoroughly beaten into me by a few people who are helping me with the book that this should be a short book. It should be under 200 pages. It should be the least that a usability practitioner or a user experience practitioner needs to know in order to get the best out of the survey.

Gerry: Sounds like you’ve been talking to Steve Krug and Giles Colborne, have you?

Caroline: [Laughter.] Just a bit. Yeah, as you can imagine “the least you need to know” is Steve’s message to me and the world.

Gerry: So, presenting the results, so once you’ve done your survey and you’ve got the data back, you’ve done your sticky note analysis or you’ve put it in your diagram or your nVivo or whatever it is, any tips on presenting the results of surveys or is this getting into territory that’s covered well by many, many others?

Caroline: I think it is covered well by many others, but I also think that it’s worth repeating two or three of the main things. And another really interesting tip that I was given at a UPA conference this year, by a woman whose name I have now forgotten but am definitely going to look up again [Caroline did look it up: Natalie Webb of Matau Ltd], was this: you might think that writing the presentation is the last thing that you do, but she actually recommends writing the presentation as the first thing you do.

So you start by writing the presentation, and preferably writing two or three presentations: one where the survey more or less confirms what you expected, another where it contradicts what you expected, and another where you’re not quite sure. If you have to think about what you would put into your presentation, that really helps guide what questions you want to ask, you know? Because you’ve got to have asked the question, or have the data, in order to write the presentation, so you start with the presentation. So if you know you will definitely need to report on this aspect of the problem, you’d better make sure you ask questions about it, otherwise you’re going to have trouble.

Gerry: That’s brilliant.

Caroline: Yeah.

Gerry: Okay, Caroline, there’s one thing I would like to get into and I know it’s a little bit technical but I think it’s worth doing. You talk about four types of survey error, and they are sampling, non-response, coverage and measurement. And personally I was a bit confused between the difference between sampling and coverage but could you maybe walk us through each of those four types of error and what they mean and how to avoid them or minimise them?

Caroline: Okay, I’ll give it a go. What were the four types again that I actually mentioned, not having my presentation in front of me? I’ve forgotten.

Gerry: You know when I interviewed Jesse James Garrett he forgot what the elements of user experience were. [Laughter.]

Caroline: Well normally I have a slide up that tells me…

Gerry: Of course you do. Yeah, okay…

Caroline: Oh yeah, I can remember! I can remember! I’ve got it; coverage, sampling, non-response and measurement.

Gerry: Yes, although you start with sampling on the slide I’m looking at.

Caroline: Right, sampling. So, sampling error: many of us have heard about statistical significance. When you take a random sample of variable data you get some sampling error every time, simply because it’s a sample, so you wouldn’t expect to get back precisely the same number every time. Supposing you’re looking at, let’s say, the average height of Australians: if you take a sample and measure the height of those people, the mean height you get, the average or arithmetic mean, will vary a bit from sample to sample. That’s your sampling error. And it’s from the concept of sampling error that you get ideas people may have heard of, like confidence intervals; they all arise out of the mathematics and statistics of random sampling.

Now most people think that’s it; they don’t think about sampling error, they only worry about statistical significance. So you’ll hear people say, well, this small sample can’t be statistically significant, and the answer is, well, actually it can, but that’s a different matter.
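To make the idea concrete, here’s a minimal sketch that draws repeated random samples of 500 “heights” from a made-up population and shows how the sample mean, and an approximate 95% confidence interval around it, wobbles from draw to draw. The population numbers are invented purely for illustration.

```python
# A minimal sketch of sampling error: the same population, three random
# samples, three slightly different means and confidence intervals.
import random
import statistics

random.seed(1)
population = [random.gauss(170, 10) for _ in range(1_000_000)]  # heights in cm

for draw in range(3):
    sample = random.sample(population, 500)            # ~500 responses
    mean = statistics.mean(sample)
    sem = statistics.stdev(sample) / (len(sample) ** 0.5)
    low, high = mean - 1.96 * sem, mean + 1.96 * sem   # approximate 95% CI
    print(f"Sample {draw + 1}: mean = {mean:.1f} cm, 95% CI = ({low:.1f}, {high:.1f})")
```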

Now, coverage error happens when the group that you sample isn’t representative of the population. So, supposing I went out and did my sample of heights of Australians just in the central business district in Melbourne. Now, it could be that people in Melbourne are absolutely like the rest of Australia, or it could be that I’ve under-represented some other type of people. Or putting it another way: Melbourne is a highly multi-cultural city, so let’s say I went to a part of the city which is predominantly Japanese. Well, Japanese people classically tend to be a bit smaller. And in another area which is all Somali they might be particularly tall. So if I just went to those areas I’d have coverage error, where I’d got particularly small people and particularly tall people but not a general mixture. That’s coverage error. And a typical coverage error in our web world might be where you stick the survey up on your website and it is therefore only seen by people who are currently visiting your website. That might be exactly what you want, but if you’re looking at extending the website to new audiences you’ve got a coverage error, because you haven’t reached the people who are not yet visiting your website but who you want to visit it. So that’s coverage error.

The third type of error is non-response error and that happens when the people who do respond are different to the people who don’t respond. So you might have an impeccably drawn random sample but actually the only people who respond are people who are particularly hostile to your organisation, for example, and everybody else doesn’t bother responding. Well then you’ll get a very negative picture which is actually unfair.

You can’t always tell what types of error you’ve got until you actually analyse it. And one of the things that particularly worries me is that people often get very, very low response rates in their surveys, you know, one or two per cent of people replied, and that gives you lots and lots of possibilities of non-response error. You know that large numbers of people haven’t replied, there have been zillions of reasons why they haven’t, and there could be lots and lots of reasons why those people are different to the people who did reply. So those errors are all very much statistical.

And then measurement error, that’s the really easy one. That’s asking the wrong question.

Gerry: Give us an example.

Caroline: Well, a classic measurement error is to ask people to predict their future behaviour, and people are notoriously bad at that. You know: if we add this feature to our website, would you use it? And the reason that’s the wrong question, the reason it provokes measurement error, is that people typically say yes, we’ll use any feature that’s on offer. But they don’t really know what they’ll use until they actually come to it. People offered something will usually say yes to it, but that doesn’t mean they’ll really use it in practice.

Gerry: Okay so having scared people thoroughly off doing surveys at all…

Caroline: [Laughs.] Well, one of the things I’m grappling with at the moment is that I dearly wanted to include a chapter in the book called “Can I talk you out of a survey?”. But I was told that was too negative, so I’m grappling with that point, and I have heard so many interesting stories of people using surveys very well to get great stuff that I’m coming round to the idea that they might be a good idea.

Gerry: But I must say that personally I’ve seen surveys that I would look at in their initial stages and say this is ill-conceived at the least, and then, you know, at the end you get data out of it. It’s kind of like I’m being forced to run a focus group tomorrow because they won’t let us actually do some proper user research, and, you know, we’ll get something useful out of it.

Caroline: Right, right, and I think that’s one of the most important areas: not only where a survey is considered the best method for the particular problem you’re grappling with, but where you’re going to have to do a survey whether you like it or not. How can you actually retrieve value from that? That’s an area I’m really hoping to cover in the book.

Gerry: I don’t often do plugs in the User Experience podcast but I have to mention that UX Hong Kong is coming up, User Experience Hong Kong in February, in Hong Kong obviously, and you’re not going along to that Caroline? You’re not talking at it?

Caroline: Unfortunately not. I’d love to. I mean, the idea of escaping from a British winter in February and going to Hong Kong which I think is a nice time of year, isn’t it? February in Hong Kong?

Gerry: Well, you know the shops are open, that’s all Gina cares about.

Caroline: [Laughs.] The shops are open, and you know it’s more of a sub-tropical climate and it will be the winter months so it should be very nice, and I believe there’s a really great line-up going, so I’d love to, but unfortunately I can’t.

Gerry: Caroline Jarrett, as always a very great pleasure to talk to you and I’m sorry that you’re like only down the road in Melbourne while I’m here in Sydney and that we’re stuck on this phone line instead of catching up in person.

Caroline: Oh well, we’ll have to do it some other time but it’s always been a pleasure to have a chat with you Gerry and thanks very much for letting me join you on the User Experience podcast.