In the chapter on Reports, I look at what you’ve learned from the data, what results you’ll put into your report, and what format that report should take. Our main tentacles in this chapter are ‘the answers you’ll use’ and ‘the reason you’re doing the survey’.
The topics in the chapter are:
- Think about what you learned, numerically
- Decide what news to deliver and when
- Decide what format to use for delivery
- Choose ‘inverted pyramid’ for most presentations
- There are many ways of showing the same results
- The best insights come from using surveys alongside other methods
The error associated with this chapter is Total Survey Error – the consequence of all the individual errors you may have made along the way.
I couldn’t give all the appropriate origins and suggestions for further reading in the chapter, so I’ll be adding them to this page in the coming weeks.
10 principles for reporting calculated statistics
If you prefer a checklist of exactly what to do to report statistics accurately, then this one from Steel, Liermann et al. (‘Beyond Calculations: A Course in Statistical Thinking’, The American Statistician, 2019) is a great help:
1. Plot your data—early and often.
2. Understand your dataset as one of many possible sets of data that could have been observed.
3. Understand the context of your dataset—what is the background science, and how were the measurements taken?
4. Be thoughtful in choosing summary metrics.
5. Decide early which parts of your analysis are exploratory versus confirmatory, and pre-register your hypotheses in your own mind.
6. If you are going to use p-values, which can be useful summaries when testing hypotheses, follow these principles:
   - Report estimates and confidence intervals.
   - Report the number of tests you conduct (formal and informal).
   - Interpret the p-value in light of your sample size (and power).
   - Don’t use p-values to claim that the null hypothesis of no difference is true.
   - Consider the p-value as one source of support for your conclusion, not the conclusion itself.
7. Compute (and display) effect sizes and confidence intervals as an alternative to, or in addition to, statistical testing.
8. Consider creating customized, simulation-based statistical tests for answering your specific question with your particular dataset.
9. Use simulations to understand the performance of your statistical plan on datasets like yours and to test various assumptions.
10. Read with skepticism, remembering that patterns can easily occur by chance (especially with small samples), and that unexpected results based on small sample sizes are often wrong.
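To make a few of these principles concrete—reporting an estimate with a confidence interval rather than a bare p-value, and using a simulation-based test built for your particular dataset—here is a minimal sketch in Python. The two groups of satisfaction scores are invented illustration values, not data from the chapter or the paper; the approach (a bootstrap confidence interval plus a permutation test) is one common way to follow these principles, not the paper’s prescribed method.

```python
import random
import statistics

random.seed(42)

# Hypothetical satisfaction scores (1-7 scale) from two survey variants.
# These numbers are made up for illustration only.
group_a = [5, 6, 4, 7, 5, 6, 5, 4, 6, 7, 5, 6]
group_b = [4, 5, 3, 5, 4, 6, 4, 3, 5, 5, 4, 4]

# Principle 7: report the effect size (difference in means) itself.
observed_diff = statistics.mean(group_a) - statistics.mean(group_b)

def bootstrap_ci(a, b, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for the difference in means."""
    diffs = []
    for _ in range(n_boot):
        resample_a = random.choices(a, k=len(a))
        resample_b = random.choices(b, k=len(b))
        diffs.append(statistics.mean(resample_a) - statistics.mean(resample_b))
    diffs.sort()
    lo = diffs[int((alpha / 2) * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

def permutation_p(a, b, n_perm=10_000):
    """Principle 8: a simulation-based test. How often does shuffling the
    group labels produce a difference at least as large as the observed one?"""
    pooled = a + b
    count = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        d = statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):])
        if abs(d) >= abs(observed_diff):
            count += 1
    return count / n_perm

lo, hi = bootstrap_ci(group_a, group_b)
p = permutation_p(group_a, group_b)

# Principle 6: report the estimate and interval alongside the p-value,
# treating the p-value as one source of support, not the conclusion.
print(f"estimated difference in means: {observed_diff:.2f}")
print(f"95% bootstrap CI: ({lo:.2f}, {hi:.2f})")
print(f"permutation p-value: {p:.3f}")
```

Because the permutation test simulates the null hypothesis directly on your own data, it needs no distributional assumptions—which is exactly the kind of customized, simulation-based check principles 8 and 9 describe.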
Recognise what different stakeholder audiences need
In an ‘Ask UXMatters’ column on creating UX presentations, I shared my thoughts on how you might need to tailor them to different audiences: development teams and executive teams. Both audiences will want to know some of the same things, but each also has its own concerns, and I give examples of both in the article. I also touched on how much evidence you’ll need to present, depending on your audience, your research question, and your findings.
You can read the full article here: Creating presentations for stakeholders.
Further reading on presenting reports
The Craft of Scientific Presentations: Critical Steps to Succeed and Critical Errors to Avoid (Springer, 2013) has plenty of useful tips and advice. Written by Michael Alley, it’s the book that tells the full story of the ‘assertion-evidence’ approach to creating and delivering reports: how to build presentations on succinct message assertions supported by visual evidence.