Error rates and data quality at Agile Manchester 2025

Caroline Jarrett talking at a conference. She is wearing a black top with large pink and blue flowers on it, and waving her arms in front of a slide asking people to chat about errors.
Photo by Imran Hussain

It was a great pleasure to run an in-person workshop on error rates and data quality at the Agile Manchester conference in May 2025.

People at the conference work for a wide variety of organisations and do many different types of work. For example, my workshop included a user researcher, a Jira specialist, and a community management expert – my friend Imran Hussain, who also took some pictures (thanks Imran). I really enjoyed the range of perspectives and the conversations that this helped to create.

We started by thinking about errors

I asked everyone to compare their thoughts about what counted as errors in the service or product that they work on, and whether they knew the error rates. Reflecting back, I wish I had asked each table to nominate someone as a note-taker, and I also wish I had asked more clearly for each error to be put on a separate sticky note.

Nevertheless, there were plenty of comments. In particular, I was struck by this selection of types of errors related to a supermarket:

  • Cashiers know how many customers get frustrated (error rate known to the cashier, but not to management)
  • Voucher code misuse (possibly fraud?) (unknown error rate)
  • Fake sign ups (error rate known previously)
  • Vouchers failing on checkout (unknown error rate)
  • Incomplete registrations (known error rate).

This illustrates to me that sometimes we know errors are out there but we can’t easily measure them. And also that some errors are easier to discover in digital services than in non-digital ones.

Attendees added a new category of errors

I asked attendees to try out five categories of errors that I’d previously developed with the help of a team at HMRC. The five categories are:

  1. Problems along the way, such as accidentally ordering 2 carrots when I expected to buy 2kg in a supermarket shop
  2. Wrong result, such as the supermarket sending lettuce instead of a pork pie
  3. Unnecessary action, such as calling to find out why the delivery didn’t arrive when expected
  4. Delayed-impact problem, such as using a credit card that was valid at the time of order but not when the order was filled and charged
  5. Non-uptake or over-uptake, such as deciding to use another supermarket or accidentally placing the same order twice.
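
To make the categories a little more concrete, here is a minimal sketch in Python of how a team might tally incidents against them to get per-category error rates. The category names come from the list above; the incident log and the order count are invented purely for illustration.

    from collections import Counter
    from enum import Enum

    # The five workshop categories of errors.
    class ErrorCategory(Enum):
        PROBLEM_ALONG_THE_WAY = "Problems along the way"
        WRONG_RESULT = "Wrong result"
        UNNECESSARY_ACTION = "Unnecessary action"
        DELAYED_IMPACT = "Delayed-impact problem"
        NON_OR_OVER_UPTAKE = "Non-uptake or over-uptake"

    # A hypothetical log of incidents observed across 1,000 supermarket orders.
    incidents = [
        ErrorCategory.PROBLEM_ALONG_THE_WAY,
        ErrorCategory.WRONG_RESULT,
        ErrorCategory.UNNECESSARY_ACTION,
        ErrorCategory.UNNECESSARY_ACTION,
        ErrorCategory.DELAYED_IMPACT,
    ]
    total_orders = 1_000

    # Error rate per category = incidents in that category / total orders.
    counts = Counter(incidents)
    for category in ErrorCategory:
        print(f"{category.value}: {counts[category] / total_orders:.1%}")

The technology errors that attendees suggested (described next) could be added as a sixth value in the same way.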

Several people noted a variety of technology fails or glitches that did not easily seem to fit into any of these categories, including:

  • Technical performance of equipment
  • API errors
  • Online GP appointment, “not being able to submit information – page crashes before I can submit form”
  • 3rd party website integrated into an app goes offline.

I’ve updated my blog post on how to think about errors to include technology errors.

We mused on measurement and causes

As I’ve found in previous sessions, many of us do not know our error rates. Here are some other thoughts around errors contributed in the workshop:

  • Who defines the error? For example, does a streaming service such as Netflix care whether someone doesn’t see the video they expected?
  • What is the cause of the error? Where does the error come from?
  • If we are looking at error rates, are we back into the discussions many of us are familiar with about web site visits compared to web site visitors? (See the sketch after this list.)
  • How would error rates work on a product with many user journeys?
  • Do errors even matter? An attendee quoted the phrase “Good enough for government work”.
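
On the visits-versus-visitors point, here is a tiny worked example with invented numbers showing why the choice of denominator matters: the same errors give a different rate depending on whether you divide by visits or by visitors.

    # Invented numbers purely to illustrate the denominator question.
    visits = 1_000               # total web site visits
    visitors = 400               # distinct people making those visits
    visits_with_error = 50       # visits where an error occurred
    visitors_hitting_error = 30  # distinct people who hit an error

    print(f"Error rate per visit:   {visits_with_error / visits:.1%}")        # 5.0%
    print(f"Error rate per visitor: {visitors_hitting_error / visitors:.1%}")  # 7.5%

Neither rate is wrong; they answer different questions.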

I’m still brooding on these ideas. Thank you for the suggestions.

Attendees shared stories of working with errors

There were lots of stories being shared at the tables and I wish I could have got round to all of them. These are three stories that I recall:

  • One person is working on pipe routing within factories. Unnecessary bends are errors; the aim is to have as few bends as possible. They have noticed that AI tends to add in extra ‘twiddly bits’, so to get the best results they now accept that the AI program is best limited to the simpler 90% of the work, with humans checking and resolving the remaining 10%.
  • Booking.com is a sponsor of the conference. A team member from Booking.com noted that they are actively working on close observation to improve the discoverability and management of error rates – especially of technology problems at some of the third-party suppliers that contribute to their overall user journey.
  • Another person worked on the ‘black boxes’ that some car drivers install in their cars in return for a lower insurance premium. They noted that these boxes generate a lot of data, but it turned out that the most useful data item for calculating the correct premium was the simplest: whether or not the insured person agreed to have a black box in their car.

Most of us are not familiar with the Data Quality Framework

I asked attendees to share their experiences, if any, with the UK Government Data Quality Framework, which I only learned about at a session at GovCamp in January 2025.

In some ways, it was reassuring to me that the framework was new to most people in the workshop, too. We were quite limited for time at this stage and I think we all felt that we needed more than a couple of minutes to get to grips with it.

Only one person had tried using it, and they mentioned that they felt it was “best for PII data” (personally identifiable information).

Other comments:

  • Any other data quality frameworks?
  • What does good look like today? Good to get a level for data sharing
  • What have you learned from the data quality framework? How can we learn from you?

So now I’m trying to learn more about who is using the framework and why.

Feel welcome to use my slides

The slides I used are Creative Commons licensed so if you would like to run a workshop like this for your own organisation, please go ahead.

Or even better: contact me and I’ll run one for you. If you are a government organisation, a non-profit, or ask me nicely, then I’ll do it for free as part of my research on my 2025 topic, error rates and data quality.