Research heresies
This is a lightly-edited transcript of a talk I gave at UX Brighton 2018. The theme of the conference was Advancing Research.
I'm going to talk to you today about research heresies - three ways to think about user research to overcome unhelpful beliefs that get in the way of doing a great job.
Before I get started I want to talk briefly about who I am and where I'm coming from. So you know how to judge what I'm going to say.
The first thing is that I'm pretty much obsessed with impact. I've been quite disappointed at the amount of impact that my research and design has had throughout my career. It irks me. This means that I'm more interested in impact than I am in improving the research we do. I think our research is actually pretty strong. Where we should put our effort is in thinking about how we can be more impactful.
The second thing is that I'm a product team person. I'm a user researcher who likes to work in multidisciplinary teams. I work with other researchers who work in multidisciplinary teams. So that's a focus for how I'm going to talk about stuff today.
The third thing is that I've been leading researchers for a few years. Kate Towsey was talking earlier about research leadership and I'm just going to own this even though it makes me uncomfortable. I'm a research leader. I spend a lot of time thinking about how to train and coach and support user researchers so that they can do a better job.
Speaking of that, these are some of the wonderful user researchers that I've had the pleasure of working with over the past four years.
My job with them is mostly trying to explain what the job of a user researcher is. To do this, I'm not leaning over their shoulder to help them write discussion guides, moderate sessions, or do the analysis. Partly because there are too many of them. But also because that's not a good way to help them learn. Instead, I'm explaining what the point of research is so they can then take that away themselves and flourish on their own. Learn on their own and get better on their own.
The trouble with trying to explain stuff to people is that it exposes how you don't actually know what you thought you knew. Over the last three years these people have challenged me in ways that made me realise that there's a bunch of stuff I thought I knew about research that turns out not to be true.
And so I’ve changed the way I talk about user research. That's what I want to share with you today. Three ways - three research heresies - that I’ve used to explain to these people what it is to do their jobs.
This whole idea of finding the explanations leads me on to this. The Beginning of Infinity is a book by David Deutsch, a theoretical physicist.
He's got this theory that human knowledge doesn't advance by data and facts and evidence and what he would call empiricism. Instead, human knowledge advances through us coming up with better explanations to make sense of the data, the facts and the evidence.
And that the act of explaining things is fundamentally a creative act, not an analytical one. I am going to try and do some of that for us today.
The first unhelpful belief that we hold as user researchers is that ‘user needs’ is an important concept for training user researchers.
I think this is unhelpful. I do not agree with this.
I get where it comes from. One of the places is the first government design principle.
‘Start with user needs’ has been hugely important for government. I love this principle. It’s brought a lot of user centered design into government. It has been a rallying call for researchers and designers to come and work in government. Me included. I work in government because of this.
But when I work with researchers on our teams I've come to the conclusion that ‘user needs’ is actually a harmful concept for training researchers. Or teaching researchers what the job of a researcher is.
I know this is quite a big statement. So I want to explain myself.
This is me joining GDS and leaving cxpartners, a design agency I worked for. You can see I'm excited because there's a nice exclamation mark!
What you can’t see is I'm actually nervous and slightly scared. The reason I'm nervous and scared is that - in the job description, and in the job advert, and on the service manual, and even in my objectives - there’s this concept of ‘user needs’.
The truth is that I don't understand how to relate ‘user needs’ to my work. Even though I've been a user centered designer and a user researcher for five years, I still can't connect these things. I feel like this is my personal failing because everyone's talking about this thing called ‘user needs’ and I'm like, well, it must be me that doesn’t get it.
So I hide it like any good imposter does.
Gradually as I’m working at GDS I realise that it's not just me. Lots of other researchers - some very senior researchers on my team - are also really struggling with what ‘user needs’ means. It's not just them either. It's also designers and product managers.
So this concept of ‘user needs’ is not as obvious and straightforward as it looks. Even at GDS where we talk about it all the time.
This intrigues me. I like to find explanations for things. So I'm digging into what's going on. And I come across two interpretations.
The first interpretation is one that was held by a lot of the senior researchers that came before me at GDS. People like Leisa Reichelt, John Waterworth, Pete Gale, Caroline Jarrett and Tara Land.
These people all see ‘user needs’ as a shorthand. A catch-all phrase for a whole bunch of things that are the legitimate object of user research:
What people are trying to do (goals)
What they’re actually doing (tasks)
Where they are (contexts)
How they act (behaviours)
How they feel (emotions)
What their mental models are (beliefs)
Where their pain points are (problems)
What capacities they have (capabilities)
I strongly agree with this. This is rich stuff that can help us design better services for people. These are the legitimate objects of user research.
Thinking of ‘user needs’ as a shorthand for this stuff is fine for people like Leisa and the others. They’ve been working in the industry for decades. A lot of them were at Flow, a seminal user-centred design agency, back in the early 2000s. They know that their job is to find out about all of these things.
The trouble is there's a whole new generation of researchers and designers coming into government. They didn’t work at Flow back in the early 2000s. They're not clear that their job is to do these things.
They infer a totally different meaning.
These new researchers and designers come into government and they see things like this sticker ‘What is the user need?’. This sticker is all over government. Especially in the research and design community.
New researchers look at this sticker and infer that there's some kind of object out there called a ‘user need’. Almost like a magical object. And that our job is to find these objects. Like they are specimens and we're trying to hunt them down and put them in collections. Like butterflies with pins through them in glass displays.
And then some secondary weird behaviors emerge.
One is that they think, OK, now we’re collecting these objects we need a way to make them feel uniform. A bit like how Linnaeus came up with his binomial nomenclature for plants and animals. This is what collecting things does to you. You yearn for a format.
So they start writing user needs in the same format as user stories. You know the format for user stories. “As a conference speaker, I need to have a provocative title, so that they will accept my talk”.
I have no problem with user stories. But it's deeply problematic when we're using the same format to describe user needs (which are the people side and relate to the problem) as user stories (which are the technology side and relate to our solutions).
It’s problematic because teams end up thinking that they need a user need, written in the format of a user story, for every single design feature. I’ve seen teams chasing user needs for buttons, blocks of text or even labels for individual fields.
This is a waste of time for researchers. It’s not what we're here to do.
At the most extreme you find people creating these huge lists of user needs, written in the format of user stories, and then they start thinking they should make a database of all the user needs! Almost as if we can capture them all and then we can do any design we need to.
Design doesn't work like that.
The other problem is trying to describe the things that are the legitimate object of user research - goals, tasks, contexts, emotions, behaviours, beliefs, problems, capabilities - in the format of a user story. It doesn’t really work.
Maybe goals and tasks work. But how people think and feel, or what their mental models are, these things resist being squashed into this format. You can do it, but you lose a lot of the richness that is there if we let these things breathe on their own terms.
If we are honest, how many of these things are needs anyway? Is a belief a need? Is a context a need? Is a behaviour a need? I don’t think so.
We're doing ourselves a disservice. We're reducing the breadth and the complexity of the things that we're trying to look at by stuffing them into this rigid narrow concept of user needs in the format of user stories. It makes our approach to design a bit reductive and a bit deterministic.
So I don't talk about ‘user needs’ with our researchers any more.
Instead, I ask them what their users’ goals are, what tasks they are doing, how they are behaving, what they are feeling. All of that rich stuff. Because that stuff is a picture of humans using things that allows our product teams and our designers to come up with much better solutions.
And that's what user researchers are here to do.
Okay unhelpful belief number two. This is the idea that releasing things without user research is unacceptable. This one's a lot more personal for me because I've been saying this for my entire career.
I strongly disagree with this now.
To explain that I'm going to have to talk you through a little theory about how user research matures in an organization.
When I started out 10 years ago this is what I would find...
We don't need to do research. We've got this. We've got the requirements. Look at them. We're going to build something. Look at it.
I would shout from the sidelines “you shouldn't have released that without research. It’s a mistake”. I’m convincing at this. So pretty quickly they said “Will, fine, let's test some things that we build.”
Now we’re in a new place. We’re testing things. Great. But pretty soon we realise testing things isn’t enough.
That massive architectural decision we spent six months building? Maybe we could have prototyped that before we built it.
This whole proposition we bet our organisation’s future on? Maybe we could have done some depth interviews to understand whether people even have this problem in the first place.
And I’m convincing at this too.
Now we get to this third place. The organisation starts wanting to use research for everything. This looks like a lovely place to be.
I got to this place at GDS. One of the wonderful things about working at GDS is that if you're convincing as a researcher, you're going to get permission to do research right across the whole product life cycle.
There's a trap here though. And I fell straight into this trap at GDS.
The trap is that if you are used to shouting that everything needs research - and then you're given the opportunity to do research on anything - you end up trying to do research on too many things.
Not just the web pages. Also the help pages, the API documentation, the call centre, the manual you give your staff when they do inductions.
All of those things can be researched. But if you try to research them all then your quality plummets because you're stretched. You're trying to do too many things. You end up blocking your team. Your team lose their faith in you as a researcher and there’s every chance they’ll go back to thinking they don’t need to do any research.
If you’re lucky you realise you can’t research everything before this happens. You decide to research the stuff that really matters.
This is the way we should be thinking about user research in a mature organization. Releasing things without research is highly desirable a lot of the time.
It's desirable because when our teams are releasing things without research it frees our time to focus on what’s most important. Then we can use enough rigour and effort to come back with results that matter.
It means we need to be mature as individuals and not moan about the stuff that our teams are doing without research. This is not something that most researchers are very good at.
So how do we get there?
This is Katie Taylor. We used to work together at GDS. I would spend ages talking at Katie about existential topics. What is research? What is truth? What is a user need?
After a while she said, “Come on Will, it’s simple. User research is just about reducing risk. That's all it is.”
She was basically saying that to shut me up. But it’s kind of profound. This one sentence has completely changed how I think about research.
I’ll talk more about that in a second, but first a tiny diversion about why I’ve become so attached to this way of thinking about research.
When you talk about user research being something that reduces risk - rather than about user needs, rather than about improving lives, rather than about any other way we frame it - it tends to make extremely senior people take us more seriously.
For example, when I went into the Home Office there was a lot of skepticism about the research that we were doing. I was able to bring those senior stakeholders round by describing what we were doing in terms of reducing the risk that their big strategic decisions would go wrong. Talking in terms of risk is something that those people listened to quite easily. We did the same research but now they supported it.
Anyway. Back to Katie’s point.
Thinking about user research in terms of risk starts with finding out what the riskiest assumptions are.
You can look at your product. You can look at the backlog of stuff that you've got coming up to build. You can look at the roadmap for what you think you're going to do over the next year. You can even look at your whole strategy and your value proposition.
When you look in these places you can pull out the assumptions and work out which are going to screw you if they turn out to be untrue.
That's the stuff that we should be spending our valuable, expensive human research time on. This is our way out of thinking that we need to research everything before it's released. We don’t.
The other thing that I have a hunch will help us here is Wardley maps.
A Wardley Map represents a service from top to bottom. It starts with a user need at the top and then breaks the service into a bunch of different technologies cascading down from that. That’s the top-to-bottom axis.
Then it distributes the technologies from left-to-right according to how evolved they are. On the far left is genesis where it's nerds in a room making new technology from scratch. Then it’s custom-built where an agile team builds applications that no one else is building. Then it’s products where you buy something off the shelf like Salesforce or Shopify. And on the far right it’s commodities that are basically solved and where nobody makes a profit unless they’re selling them at scale.
The key is that technologies evolve from left to right. And it’s this movement along the x-axis that helps us think about where to do research.
For example, it’s not very useful doing user research on commodities. I would stick some things in here like checkouts. We know how to do checkouts. Not everyone does it right but we know how to do it. I once spent months doing user research into checkouts with zero impact because user behaviour around checkouts had stabilised. If I’d done it 10 years earlier, when checkouts were less evolved, then the research would have had impact. Researching commodities is usually a waste of time.
At the other extreme, there’s not much point doing user research when technologies are in the genesis stage. Remember, this is nerds in a room playing with bleeding edge technology and making it do cool stuff. The reason it doesn’t make sense to do user research there is that there aren’t any users there! It’s technology before an application is found.
The place we should be focusing our research is the stuff in the middle.
Custom-built applications are a phenomenal place to be doing user research because this is where you’re turning bleeding edge technology into useful stuff for humans.
Product technologies are a great place to be doing user research too because you’re taking the custom-built stuff and making it work for a much larger audience.
I haven’t got time to say any more about Wardley maps. But, given that the theme of the conference is advancing research, we could do a lot worse than understanding a little more about Wardley maps.
What I'm saying is that when I talk to our researchers I make it clear that their job is not to research everything. Their job is to understand what matters and research that well. And be mature enough to let the rest go without moaning and demoralizing our teams.
The final unhelpful belief. This is the belief that our job, as user researchers, is to make clear recommendations.
I did this a lot when I worked for agencies for five years.
If you Google “usability agency” these are some of the sites on the first page. “Robust detailed recommendations”. “Actionable recommendations”. “Recommendations for improvement”.
It's fine for agencies to do this. It’s part of the business model. You’re not going to avoid telling a client what to do after they’ve spent thousands of pounds with you.
We don't do this in product teams though. User researchers don't make recommendations.
The reason that we don't make recommendations in product teams has to do with the way research relates to design.
Let's imagine that you come up with four findings. A, B, C and D.
If your job is to make recommendations you look at this and start thinking of a recommendation for each finding. Four recommendations for four findings. It seems simple and straightforward. I’ve lost count of the reports I’ve seen with matching findings and recommendations.
But does design work like this?
No. Design does not work like this.
One way design doesn’t work like this is there might be three ways to solve A.
You won’t know which one is the right way, or the best way, until you prototype and iterate. Or until you build, measure, learn. Or until you do the kinds of things we do to understand which approach works.
The point is you don’t know what to recommend until you’ve done that work. You can’t know that at the point of making the recommendation.
Another way design doesn’t work like that is that for B, C and D maybe there's one intervention X that solves all of them.
Again, you can’t know at the point of reporting the findings. It takes design work. Researchers are not the people that do design work.
That work falls to another group of specialists. Designers.
This is Stephen McCarthy. He’s a designer I used to work with at GDS. If I give Stephen a bunch of recommendations two things happen.
First, he's not going to do a great job because - as I've just talked about - I’ll have constrained his thinking in unhelpful ways.
Secondly, he’s going to dislike me because I’m stepping on his toes. Because a recommendation is nothing other than a design solution in camouflage.
If, on the other hand, I stop one step short of recommendations and spend my time thinking about the best way to explain what I've seen - the way users behave, or the way users think - well then I’m thinking about the explanation that best communicates what’s going on with our users.
Then Stephen’s going to do a great job because I’m setting him up to understand a key part of the context - the user behaviour part - in which he's doing his design work. That is the job of a user researcher.
This word explanation brings us full circle back to David Deutsch. This gets to the core of what I think researchers are trying to do.
We are trying to provide the explanations that allow the rest of our teams - product managers, designers, developers, tech ops, all the people who love solutions - to take a more informed run at their solutions. Because we know these people love solutions.
So that's the final thing that I say to researchers that I work with. I don't want to see recommendations from you.
I want you to think about the bit beforehand. The bit that's the explanation. Then I want you to use that to free your colleagues to think about solutions.
Let’s wrap this up. I said I’d talk about three useful research heresies:
User needs is a harmful concept for training user researchers
It’s confusing. It leads to bad practice. We should be more specific about what we're talking about. We should talk about things like tasks and goals and behaviours instead.
Releasing things without user research is often desirable
It gives us space to work out what's important and do a good job on that. But it comes with the obligation to be mature enough not to moan about things our teams do without us.
User researchers don’t make recommendations
The simplest one. The one that encourages us to do a better job of explaining what we're seeing. The one that frees our designers, product managers and developers to do their own work better.
Those are my three research heresies. But I want to end with this.
We're here today to talk about advancing research. Sometimes there's a belief among researchers that advancing research is about finding more advanced research methods. That's our happy place.
I don't think advancing research is about that at all.
We already know enough about user research to come back with good results. Advancing research - for us - is about finding ways to make our researchers more powerful and to make our research findings more impactful.
Adopting these three research heresies in your work will help with that.
Thank you.
Let me know what you think on @myddelton. One thing I should say is that these three things are not absolute. There are times to talk about ‘user needs’. There are times when you shouldn’t release without user research. And there are times when user researchers do make recommendations. Nothing is ever neatly black and white. Finally, this turned out to be the last talk I ever did as a specialist user researcher. I’m now working as a product manager for Local Welcome and learning a whole new set of things about when and where to do research…more on that soon…