Good Product Decisions vs Cognitive Biases

Talk

UXDX EMEA 2022

The way we interpret our interactions with products and the people around us is shaped by biases at every moment. This affects our lives, our decisions, and the products we design and build.

This talk will take you through the following steps to challenge your inherent biases:
- Learn to 'live your biases' through real-life situational exercises
- Become aware of how biases affect the way you ask questions, listen, and make product and design decisions
- Gain insight from stories and tips on how to be a more empathetic and objective observer and a better decision-maker

**Romina Chitic**
188. What's up with that number? Why did we put it up there? It's the number of biases catalogued in the Cognitive Bias Codex. Here's a picture of it. And as we keep learning more about these, we can keep adding to the list. It's important to know, however, that not all biases are bad. Biases are shortcuts; they're ways our brains have evolved to make quicker decisions. What we're going to look at today are some stories surrounding our biases, how we can better balance them out, and how we can build better products by learning to do that.
**Alex Lung**
As a disclaimer, this is really hard. Anything that has to do with becoming more aware of ourselves and our reactions, thoughts, and emotions is a really, really hard thing to do. But it's also a very valuable thing to do, especially when working in product and design. To get to know our users really well, the best thing is of course to go talk to them, observe them, and really get to know them. But it's also during these interactions, and during user interviews in particular, that we see a lot of biases kick in.

One bias we see happening a lot in user interviews is the framing effect. The framing effect is based on the fact that the way an inquiry is framed can influence the response: by the way we ask a question, we lead the user into answering in a certain way. Simply put, as human beings we do not make choices in isolation. We are highly dependent on the way a question is asked or information is presented to us.

Here's an example I'm sure you've all heard: 'Do you like this product?' It's one of those questions where you can pretty safely assume the answer will be yes. But most of the time it will be a light yes, given without taking all the different aspects into account. For example, a couple of days ago I was passing by a perfume shop, and someone had me smell a perfume and asked, 'Do you like it?' And I said yes. But I was just passing by. I wasn't thinking: do I really like it? Would I actually buy it? How much does it cost? Is this something I need? I just threw the yes out there. And who knows how they used that information.

It's also very linked to the user's own biases. Most of the time, as human beings, we want to be liked, and we do not want to disappoint or hurt the other person's feelings. So if someone asks whether we liked a product, it's very rare that we actually say no. Something you can do to get a less biased response is to ask things like: 'Can you describe the last time you used this product?' Or: 'Can you describe the last time you did this action?' Or: 'How do you feel when you use this product?' These questions will yield much less biased results.

Another bias that shows up quite loudly during user research is confirmation bias: we latch onto the first piece of information that confirms something we already believe. A lot of the time you'll see people cherry-picking information, or ignoring certain bits of it, in order to confirm a belief. It basically looks like this: you've heard something, you want to do your own research, but you literally just jump on the first thing that already agrees with you.

Confirmation bias can be seen in different forms in user research. Here's an example from moderated testing: 'What do you think of this simple menu?' And the person answers something like, 'Easy to use, I think, and simple, as you said.' Here we're actually looking at a mix of confirmation bias and framing bias: most of the time the person in front of us will answer with 'simple', the word we already used, and confirm what we wanted to hear.
Another example: 'What do you expect when you click here?' The person says, 'I think I'll see more details,' and because that's the answer we were hoping for, we go, 'Great, let's move on.' Once we hear a first answer, especially one that confirms what we're looking for, we tend to cut the person off. The really important thing, especially when we're talking to our users, is to give them time to express themselves, put them at ease, and really listen. The moments where we see them hesitating are also good moments to dig in, to better understand their thoughts and the job they want to get done.

So how can we avoid these biases during user interviews? One thing you can do is dry-run the interview with someone from your team. This helps on the practical side: you'll see more or less how much time the interview takes to run. And because you're doing it with someone you're familiar with, in a low-stakes situation, you can use more of your brain space to observe yourself: the way you ask the questions, the way the other person answers them. If that person is experienced with user research and biases, they can also give you feedback.

Another really important thing is to list all your assumptions before you begin the research. This can seem obvious and logical, but it doesn't happen all the time. I've seen a lot of situations where, if you don't list your assumptions and make sure everyone is clear on them and agrees with them, you end up with a lot of debate and a lot of biases kicking in. You'll see certain results and say, 'Yes, yes, this validates it.' But is it really validating? Or is it your attachment to a certain answer, plus confirmation bias, kicking in?

Another important point is to stop talking and listen. Make sure you don't use up all the space by presenting your brilliant idea and selling it to the user; that's not the point of an interview. Give the person in front of you enough time to really think things through, answer, and speak their mind, and really, really listen to what they have to say.

Another really interesting and important thing is to have note-takers. I've noticed that we don't always have the resources for note-takers to help us out with this, but if you can, it's very valuable. You can brief your note-takers on how to take notes, capturing quotes verbatim, in a less biased way. What's really valuable as well is that you can afterwards debate and synthesize the findings together with them. The times I've seen teams do this, the results were much less biased and the insights were really, really valuable.

And last but not least, you can use methods that go beyond what we hear, that make you think about what the person is doing, how they're behaving, and why they're doing certain things, like the What/How/Why method, or empathy maps, which also make you consider what they're hearing and seeing.
What's the context around them and their job when they're talking to you?
**Romina Chitic**
As we've seen so far, in product design there are so many biases that come up during user interviews, and during the research phase generally. But most of them don't start there. A lot of our biases come in before a user ever sits in on one of our meetings or calls or sees any of our mock-ups. And this affects who we talk to, whose opinion we take into consideration, and, in the end, how well we truly understand our audience.

One example of this is sampling bias: you unintentionally leave out a piece of your research audience. You just leave some people out of the research pool. This happens to us all the time. I have an example from when we were developing a language product. Testing it was difficult, because not everyone speaks four languages, and the people on our research team did not speak all four languages of our audience. So how do we test with a broader audience? Our first idea was to interview people in English, because that would get us past our language barriers. And I guess you know where compromises can sometimes lead. We very quickly noticed that we were speaking to a particular segment of the audience: either people who were more highly educated and could speak English, or foreign nationals, and we were leaving out a big chunk of our national audience.

So we regrouped in the middle of the process to fix it. We spread out the interviewing across many interviewers with different language abilities, so that everyone could speak in their native language. And we had a lot of people who wouldn't normally interview users, like PMs and content managers, and in some cases even the CPO, interviewing users. For us it was actually an enlightening exercise, because people who don't normally get to interact with our users did so, and I think it gave them a new perspective on our product. You can fix sampling bias, of course, but it's important to be aware that it's happening.

Another bias that I find fits really well into this category is membership bias. What I mean by this is that a favoured or popular participant isn't necessarily a representative one. If you choose your participants from a select or specific group of people, the results you get from your research might not apply to the larger general audience.

I have an example here as well, from a banking application I was working on. I got a list of users recruited for the interviews before I'd even started to really think about the process, and I quickly noticed that they all had something in common: the people on this list were what the bank called high-net-worth individuals, basically affluent people. From the bank's view, it made sense, right? You talk to the people who have a big stake in your success and you keep them happy; their opinion matters because they are affluent. But we as researchers and practitioners know that these individuals may not even be the ones interacting with our applications and systems. If they belong to a select group, they might have someone do it for them. So their experience, their needs, and their expectations are not the same as those of the general audience. And this is what I want to highlight: recruiting from a specific group of people can sometimes make sense, as a compromise, from a cost perspective.
But you always have to consider how this affects the results of your overall research. And there are a few things you can do to avoid sampling bias and membership bias.

First of all, have a clear definition of the audience you're addressing, and come back to this definition later in the process, just to make sure you haven't deviated from your original goals. Secondly, have a look at the numbers: use all the data available to you to create an in-depth picture of your customer base and your potential clients. And thirdly, involve a diverse team to bring in more perspectives. I like this one because it's only human that we each have just our own experiences, so we can each contribute only so much to a team; at the end of the day, what we've personally experienced limits the way we approach a project. If there's anything you can do about it in your own team, try to get a more diverse group of people involved. You'll have a wider perspective on the project, and you'll probably be able to discover and address more of your users' needs.
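As a minimal sketch of the "look at the numbers" step, the Python below (with hypothetical field names, data, and a hypothetical 10% threshold, none of which come from the talk) compares the composition of a recruited interview pool against the audience definition written down up front, and flags segments that drifted or were left out of the pool entirely.

```python
from collections import Counter

def segment_shares(people, key):
    """Share of each segment within a group, e.g. by primary language."""
    counts = Counter(person[key] for person in people)
    total = sum(counts.values())
    return {segment: n / total for segment, n in counts.items()}

def sampling_gaps(sample, audience, key, threshold=0.10):
    """Flag segments whose share in the recruited sample drifts from the
    audience definition by more than `threshold` -- including segments
    that were left out of the research pool entirely (share 0.0)."""
    sample_shares = segment_shares(sample, key)
    gaps = {}
    for segment, expected in segment_shares(audience, key).items():
        actual = sample_shares.get(segment, 0.0)
        if abs(actual - expected) > threshold:
            gaps[segment] = {"audience": expected, "sample": actual}
    return gaps

# Hypothetical numbers, loosely inspired by the multilingual product story:
audience = [{"lang": "fr"}] * 50 + [{"lang": "de"}] * 30 + [{"lang": "en"}] * 20
sample = [{"lang": "en"}] * 9 + [{"lang": "fr"}] * 1  # English-only recruiting

print(sampling_gaps(sample, audience, "lang"))
# -> {'fr': {'audience': 0.5, 'sample': 0.1}, 'de': {'audience': 0.3, 'sample': 0.0}, 'en': {'audience': 0.2, 'sample': 0.9}}
```

Here the German-speaking segment never made it into the pool at all, which is exactly the kind of silent gap the up-front audience definition is meant to catch.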
**Alex Lung**
So we've talked about a few different biases, and you might ask: okay, so what? We're human, we're all biased. What happens if we're very biased? So let's talk a bit about the impact our biases can have.

The first impact that comes to mind is financial, and it applies across businesses: when we don't truly listen to our users, or we interpret the information they give us incorrectly, we end up building products, or designing entirely new products or businesses, that don't satisfy our users' needs or don't solve their initial problems. We end up with products that either never find product-market fit or don't grow as expected. So the first impact of our assumptions can be a negative financial one.

But it goes beyond that; it really impacts the lives of our users. One example is Google Maps. Quite a few years back, they launched what was called the cupcake feature: for a walk from point A to point B, they showed not only the route but also how many calories you would burn, expressed as a number of little pink mini cupcakes. As soon as this went out, some of the responses were very, very loud, about shaming people for eating cupcakes, and about the choice of little pink cupcakes in particular. But the impact was also really strong on people with eating disorders; for them, seeing this kind of feature out there can be really dangerous. Food and calorie counts are really not something people expect inside a maps feature. Google took it out very quickly, and I think it's still a very interesting example of how important it is to take into account the needs of different users, and how a feature can affect certain parts of your audience.

Another example is Barbie. Around the same time, a few years back, Mattel launched Hello Barbie. For years kids had been saying, 'I want Barbie to talk to me,' so they created Hello Barbie, a doll that actually talked to kids. Barbie was trained with AI, but at the moment Hello Barbie launched, she had quite a limited vocabulary and limited topics of conversation. A lot of the time a conversation could go something like this. The child says, 'Barbie, let's talk about science.' And Barbie answers something like, 'Yeah, we can talk about science. But let's talk about fashion, that's more interesting.' I guess you can see where I'm going with this. Of course, I've simplified the conversations Barbie could have, but they really did steer toward the limited set of topics she launched with, and fashion was one of the preferred, or chosen, topics. I think this says a lot about how a product can influence, in this case, generations of girls to come, and how big our responsibility is when we build these products.
**Romina Chitic**
Talking about biases and science: there was a recent study that claimed it could determine whether a person would become a criminal by evaluating a single photograph of their face. What don't we do in the name of science, right? More than 2,000 researchers, students, and scholars opposed the publication of this study. They argued that historic forms of discrimination only get amplified by machine learning systems. We also have to wonder who would be impacted by integrating machine learning into the existing institutions of our society. Given our constantly evolving understanding of fairness and justice, and everything we know about how the justice system works in our current world, do we really think it would be possible to train an unbiased AI system with the information we have so far? Someone famously said that first we build the tools, and then the tools build us.

Another example, one I'm actually quite attached to, starts in the 1950s with a research project that aimed to determine the causes of heart disease and stroke. You may have heard of this study, because it was quite famous in the US. There were 22 countries in the original data set, but only seven of them seemed to confirm what the researchers were trying to say. Already you can see bias slipping in, right? What the researchers were trying to show was that fat and cholesterol cause heart disease and stroke. And because they were trying to confirm that, they left the 15 countries that didn't fit that conclusion out of the study and based their conclusions on only seven countries. As a result of this one study, diets changed all over the world, and for the last 70 years we've all been living with its conclusions. It's taken decades to reverse those beliefs. Today we know better: fat isn't the enemy, and the picture of nutrition is far more complex than that. But you can see how easily decisions made in one part of the world, decades ago, can over time grow to affect everyone.

And then we have biases in decision-making. We've looked at recruitment and at asking the right questions, but even when you've done all of that right, I think there's one hurdle left, and that's the human mind. We all like to think our decisions are rational, yet study after study tells us that's hardly the case. Sometimes being wrong is the price we pay for being able to make quick decisions, decisions on the spot. And evolutionarily speaking, it's often better to make a bad decision than no decision at all.

One of these decision-making biases is the sunk cost fallacy. You might have heard of this one, because it's quite universal, but it's still perplexing. You see it in things as simple as going to the movies: you're watching a bad movie, but you stay for all of it because you paid for the ticket. Rationally speaking, it's maybe not worth your time, but the sunk cost effect pushes you into staying until the end. Or you sign up for a paid event where the topic isn't exactly what you were hoping for, and you can't follow what's being said. The smart thing at that point might be to leave, right? But you spent $100, so you stay, and you end up wasting your two hours as well. Even if you leave the event, your money isn't coming back; you've already lost it.
But unless you learn something in the two hours you're there, you're losing not only the money but also your valuable time. We see this a lot in projects, and especially in product decisions, where a lot of money has already been invested in a product that maybe isn't working out the way we wanted it to. We'll spend more time, and sometimes go even deeper into that project, rather than let the original investment turn into a waste. It can be hard to recognize and accept in the moment it's happening, and it can even lead us to further losses: we're likely to continue with an activity we've invested money or effort in, even against evidence showing it's not the best decision for us. We've spent the money, we've put in the effort, and we're not going to get any of it back by doubling down on the decision. The sunk cost fallacy works on us because it's based on more than the alternatives currently in front of us; at the end of the day, it's an emotional decision.
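A toy sketch of that forward-looking comparison, in Python, with made-up numbers standing in for the $100 event above: the point is only that the sunk ticket price appears nowhere in the decision, because it is identical under every option.

```python
def best_next_step(options):
    """Pick the option with the best net value *going forward*.
    Note what is not a parameter here: the money or effort already
    spent. Sunk costs are the same under every option, so a rational
    comparison leaves them out entirely."""
    return max(options, key=lambda o: o["future_value"] - o["future_cost"])

# Hypothetical numbers for the $100 event: the ticket price is gone
# either way, so it appears nowhere below.
options = [
    # Staying costs you two more hours for little expected benefit.
    {"name": "stay at the event", "future_value": 10, "future_cost": 120},
    {"name": "leave and spend the time elsewhere", "future_value": 80, "future_cost": 0},
]

print(best_next_step(options)["name"])  # -> leave and spend the time elsewhere
```

The emotional pull of the fallacy is precisely the urge to smuggle the ticket price back into a comparison where it changes nothing.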
**Alex Lung**
And another bias we encounter when making decisions is ownership bias: we prefer our own ideas over other people's, and we get really attached to them. One example that comes to mind is workshops, brainstorming sessions for instance. During workshops, depending on how seasoned they are in brainstorming and how self-aware they are, participants will often get attached to their ideas; a lot of the time you'll see participants who really cling to their ideas and push for them to be chosen in the end. And this happens not only with ideas but also with other things you do at work, with all the effort you put in, and more generally when you're building a product, a feature, or a company: you become more and more attached to it, and it's not easy to take a step back and make more detached, objective decisions.

So we're getting to the end of our presentation. I think it's really interesting to see all the different facets, all the different biases that can kick in, and we've only talked about a few of them here. The truth is that we are all human, right? So it's normal to have these biases. We cannot get rid of them completely, and we especially cannot get rid of them overnight. But what we can all do is be conscious that we have blind spots, and change our behaviour to keep them in check: by avoiding assumptions, by listening to others, and by being really, really mindful. That helps you strike the balance, and in the end it helps you become a better professional, and maybe even a better person. Thank you very much.