Today, let’s dig into this article, which describes how the National Portrait Gallery in London ran into trouble when the equipment it used to count visitors failed. It’s a good example of a situation where user research could have helped save jobs and prevent embarrassment for a museum. (Thanks for forwarding the link, Isabel!)

Summary: The National Portrait Gallery’s equipment failed to count 600,000 visitors in one year, counting 1.1 million visitors instead of the 1.7 million who actually visited. Before identifying the error, the museum eliminated 24 positions — a 7% reduction in its overall workforce.

Now, consider this:

The gallery had originally blamed a combination of factors for the fall in visitor numbers in its annual review, including the cost of living and security concerns following a spate of terrorist attacks in London.

I just rewatched No Country for Old Men, so this is when I hear Tommy Lee Jones’s voice:

With all due respect …

There was no significant drop in visitation. Instead of looking at alternative sources of data or conducting any research into why attendance seemed to be declining, the museum apparently took its best guess when writing the annual review.

It’s easy to wind up relying heavily on quantitative data because numbers feel so truthy. Things like visitor counting machines and satisfaction surveys are relatively straightforward to deal with because their results are easier to assimilate and less open to interpretation. There’s no need to search for patterns when you’re dealing with ones and zeros, right? When the software can generate the bar graph for us, all we have to do is forward the PDF to the boss or client. I know how appealing that can be.

But this story shows how risky it is to rely on numbers from a single source of truth.

How could user research have helped?

First, the National Portrait Gallery is a free museum. Of course, it costs something to travel there, but if the high cost of living were really driving a 35% drop (counting 1.1 million against an actual 1.7 million is a shortfall of roughly 35%), wouldn’t every other leisure destination in the area that charges admission be seeing a similar, if not steeper, drop in attendance?

To find that out, someone would have to ask other institutions and businesses: theaters, cinemas, private museums, and concert halls. They would have to stop counting and start talking with people.

And if fear of terrorism were keeping people away, couldn’t they verify that by checking school-group attendance? If people were avoiding the museum for fear of a terrorist attack, wouldn’t parents be putting at least some pressure on administrators to move field trips to destinations outside central London?

Again, to find that out, you might have to talk with people.

Some other things they could have done to investigate the apparent drop in attendance (there’s a rough sketch of this cross-checking after the list):

  • Contact members to find out if they’re spending their time in different ways. (You don’t need to talk with hundreds of people to draw insights in qualitative research.)
  • Study membership enrollment and renewal patterns — wouldn’t there be some corresponding drop there as well?
  • Listen in on call center and visitor services activity to identify any unusual patterns of complaints — or at least see if the volume of inquiries had dropped.
  • Look at digital analytics to see if there were any parallel trends in online behavior.
  • Check retail sales for spending behavior. (The museum is free, but if economic hardship were responsible for a drop in attendance, it could be echoed in retail spending.)
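
To make that cross-checking concrete, here’s a minimal sketch in Python of what the comparison amounts to. All of the figures, signal names, and the threshold are hypothetical; the point is only that when the gate count diverges sharply from every signal it should track, the counter itself becomes the prime suspect.

```python
# A rough sanity check of the triangulation idea above. Every number and
# threshold here is invented for illustration; "gate_count" stands in for
# the museum's automated counter.

yoy_change = {                     # hypothetical year-over-year changes
    "gate_count": -0.35,           # the counter's apparent 35% drop
    "membership_renewals": -0.02,
    "retail_sales": 0.01,
    "web_visits": 0.04,
    "school_bookings": -0.01,
}

# If one signal moves sharply while everything it should correlate with
# stays flat, suspect the signal, not reality.
others = sorted(v for k, v in yoy_change.items() if k != "gate_count")
typical = others[len(others) // 2]  # median change among the other signals

gap = yoy_change["gate_count"] - typical
if abs(gap) > 0.10:  # arbitrary "way out of line" threshold
    print(f"Gate count diverges from the other signals by {gap:+.0%}. "
          "Check the equipment before rewriting the annual report.")
```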

If they had checked some of those other data sources, they could have identified the decline in visitation as the anomaly that it was, and maybe they wouldn’t have written an annual report about terrorism and economic hardship.

Maybe they didn’t think to look at those other sources, or maybe they did investigate further and chose to ignore what they found.

Or maybe they decided against digging deeper because they thought user research would be too expensive.

That’s understandable.

After all, you’ve invested in this technology, right? It’s supposed to solve these problems. How can you justify investing in research when you worked hard to vet and get buy-in for this counting equipment? If you advocate for research, isn’t that an admission that you may have made a mistake?

But the price wound up being 24 jobs and who knows how much damage to the museum’s reputation. I mean, how much faith would you have in its future annual reports?

Really?

Thanks for reading,
Kyle

PS. Thanks to everyone who wrote back in response to Wednesday’s letter. Sounds like there may be more people out there than I realized who believe that user research can provide value to museums, which is very encouraging.

 