Why Failing Fast is Not Good Enough



Product Direction

Product development failure is a common learning experience for most teams, but do we always use it to its maximum potential? How can we look at failures more objectively to improve future development? At OLX Group, Praful Poddar uses failure to learn whether his team is getting smarter and whether it is generating an ROI. To fully utilise failure learnings and create next-level user experiences, failure needs to be viewed differently. In this talk, Praful will discuss:

  • Problem definition and its close link to failing correctly
  • Failing fast with the goal of reducing the cost of failures
  • Including failures in your planning process across the product lifecycle to learn better
Praful Poddar

Praful Poddar, Director of Product Management, OLX Group

Thanks everyone for joining today. Good morning, good afternoon, good evening. The reason I'm starting off with this picture is that I am truly excited, mainly for two reasons. The first: it's 2021, and I'm hoping and wishing that this is a much better year for all of us across the globe, from a variety of perspectives. The second: the opportunity to talk to you and share some of my humble learnings from my experience. I'm very excited to be talking to people who are looking to learn something new today. I also want to thank UXDX for putting this forum together. I think it's truly an interesting forum to network with like-minded people, and to learn something new myself today as well. A little bit about me: I'm a product guy. I like using logic a lot, and I'm learning how to keep things more and more simple. Ironically, that's really difficult. On the personal front, I do enjoy reading and playing a bit of squash, although COVID did put a bit of a roadblock to that, but I'm getting back to it slow and steady. One of the advantages of being at home is that I'm building new skills, so my four-year-old is keeping me up to date on the latest in Play-Doh and Lego. For those curious about how this will span out, I've put this together as a spoiler, taking a risk here so that people don't jump ahead. If I have timed myself right, I will spend around 20-22 minutes discussing the topic with you. Right now we're just setting things up at the beginning. I have a few learnings that I've tried to condense into three broad points, which I want to talk you through via a very interesting story of what we did on the ground, and I'll conclude everything towards the end with a quick summary. I would urge you not to jump there already, because it will be a better experience to listen to all the learnings: they're fairly well connected and will make more sense together.
So, let's get right into it. This is the topic I wanted to discuss: "Why failing fast is not good enough." A provocative statement, and purposefully so. Don't get me wrong at all: I am a big fan of failures. In some of the projects and teams I've worked with before, we've celebrated failure, learned so much from failing fast, and applied those learnings to take more impactful products to users. However, when we reflect and really analyse those failures in a little bit of depth, there is almost another dimension, a different kind of care, which I feel needs to be managed and understood about failures, and what I want to take you through is how we uncovered some of that. When we went back and started to reflect as a group, as a team, on how the last few quarters had gone, on what we had released as successes and failures, it really showed. I'll condense what we took away from all that analysis into three points. The first: while projects were going on, successes were happening and failures were happening, but the broad sense of "are we getting smarter?" just didn't seem to be there, and that was reflected in the fact that people were not talking about learnings as much as we would have liked. The second: when we tried to do a high-level analysis of the different kinds of failures, in terms of size, nature and team, and laid them out over a timeline, the frequency of failures now versus a few quarters back was broadly the same. So we were not really improving on that front either. And the third, to our surprise: we actually had no objective view of what this was costing us. How much were we pumping into these failures, and what net ROI were we able to generate? Were we really contributing back to business and consumer value in the end? We were armed with these three insights.
What we actually decided to do was take a different path this time around: pick a team, pick a project, and try something different, with the perspective of optimising our failures based on what we'd learned from this reflection and retrospective. What I'm going to do next is take you through the story of how we implemented some of these learnings on the ground, what we did and what worked, and condense it into three points that you can take away as practical applications for yourself. Before I get into the story itself, let me set it up well. This is the organisation I work for, OLX Autos. Essentially, what we're trying to create is a differentiated used-car buying and selling experience for users. Very simply put: a person who wants to sell their car comes to us and sells the car; we refurbish it, shine it up, and then sell it on to a buyer. In a simplified view, it's a C2B2C model. Our platform also allows people who are interested in handling the selling tasks themselves to sell directly to end consumers. We facilitate that too, not just by helping buyers and sellers connect, but also by providing a lot of services on the ground to make the transaction much more seamless and a better consumer experience. So, that's the setting. This is a key component of the current used-car process: inspection. I'm sure most of us are quite familiar with what happens in a car inspection. A car is a fairly technical product to buy and sell, and the inspection is used to understand its objective condition and arrive at an accurate price. This process is super important to the journey. Although a lot of advances have happened in this area over the years, it's still quite manual. You still need a person who is a fair expert to go through a checklist, look at both digital and physical data, and really make a judgement call. But what's happening?
Things are changing as well. Across the board, a lot of companies and people are trying to change this process itself. We're almost trying to say: "Hey, can we solve this better through technology, not just making it a better consumer experience, but also making it much more sustainable and scalable?" So, picture yourself as a consumer, doing this entire process just by walking around your car with your phone. We wanted to make that happen. That was the hypothesis, the problem we were trying to solve, in the area we took up to apply the new learnings we had come across with respect to failures. Let's dig into what we learned. The first learning was that, by nature, we as product, technology and design folks are in a rush to build. We want to take new solutions to consumers very, very fast. What often happens because of that is that we don't spend enough time really defining what we're solving for first. Let me quote Theodore Levitt, a professor from Harvard, in the context of people wanting to buy quarter-inch drills: "People are not actually looking for quarter-inch drills. They're looking for quarter-inch holes." It's really the core need that we're going after. Let's extend that further: it's actually not the quarter-inch hole; they're actually looking for the painting on the wall. Take it a step further: they're actually looking for the aesthetics it brings to the room, the feeling they get when they wake up to that painting. If you keep doing this exercise of digging deeper, you figure out the real underlying user need you're trying to solve for, and can really deliver an impactful solution. Now let's bring back our example of building this state-of-the-art inspection process.
What we did before we jumped into building something was really try to ascertain where we should focus before building the solution. Should we focus on the seller, in which case we would probably be solving for convenience and bringing transparent prices to this user? Or should we focus on the buyer, in which case we're solving for trust: with this car, am I getting a good deal or not? Will this car break down six months later? After enough deliberation and thought, we were able to arrive at the conclusion that what we were actually trying to solve for, in the end, was the buyer, while our initial bent had been, "Hey, this is probably a seller-oriented solution." And how did that help us? It really helped us design the experiment around what people are actually going after. So, this is an inspection report, and what we're essentially trying to do is gather these data points through the new experience we've built. There are two ways to approach it: we can either collect all the data points that the seller can give very conveniently, or we can focus on the data points that the buyer needs to build trust, almost to the point of having the confidence to put down an advance amount online. Armed with the insight that this is a buyer-focused experience, we were able to build the right experiment to solve for buyer trust, even though it could lead to some seller inconvenience in the end. Basically, by focusing on the problem we avoided a bad experiment design, which helped us not go down the path of a false negative or a false positive, essentially non-usable learnings. That was learning one. Learning two is really to understand the cost that comes along with failure, and how these costs shoot up as we get more and more invested in a particular solution.
Let me bring back our example again. For the solution we were building, the initial thought, the initial idea, was: "Hey, this will probably be a video-oriented solution." We'd get people to just walk around the car with their phones, both interior and exterior, and give us a view through a video; then we could slice and dice that video, and we'd get all the answers we needed for the decision making we wanted to do. There were a lot of reasons contributing to this kind of thinking. A lot of platforms had come up with video-oriented solutions in the consumer space in the markets we were operating in, and a lot of acceleration was happening there. People were getting very comfortable with videos in general, across the board. Network and data speeds were also improving, so it was becoming easier to support video on mobile networks. To add to that, we had already built somewhat of a solution in another category on the platform. Our platform also allows users to sell anything used, and in the example of an electrical appliance, a mixer-grinder, we had built a solution where, if a seller wants to showcase the working condition of the product, they can make a quick video which the buyer can use for decision making. The thought was: "Hey, why don't we just repurpose this for the car category, and we'll be up and running with the solution?" You're very tempted to go down this path and get invested in the idea very soon. But going back to our reflection, we said: "Hey, we should first figure out whether there is enough evidence for us to move along this path," and really reduce our chances of failure. We did two things. The first one: "Hey, can we be really frugal and try this out today without even building anything?"
We have our stores, we have inspection engineers, we have consumers, and we have video capabilities. They just might not be in our product, but people are fairly comfortable using WhatsApp video calls or their native camera apps, so could we just utilise that? We started to do some on-the-ground pilots. We got people who were not interested in bringing their car to a store to get on video calls with our inspection engineers, almost like a guided experience where the inspection engineer ran them through the process. And if people were not ready to do that, we gave them a little guidance and said: "Can you make a video and just send it to us?" That gave us a bit of learning: could we actually utilise these information points? The second thing: we were not the only ones in the market trying to solve this. There are a lot of SaaS organisations, insurance companies, OEMs and other competitors trying to build something similar, and a lot is happening in the market. So we went out and spoke to a lot of people, and we came across a bunch of solutions, from AR to VR, hardware, software and video. Piecing these two together, and very critically examining what would work for our consumers, what benefit we would get, and what happens in the markets we operate in, we came to a completely different conclusion: video is not the path to go down. Let me try to explain why. First of all, it's not a great consumer experience. Picture yourself trying to make a two- or three-minute video around your car. It's so awkward: you don't know whether you should be talking while you're making the video or not, and if you're trying to open the car or show us the engine, the camera goes all haywire. It's not a good experience.
Secondly, the moment you switch your camera from photo to video, the quality dips significantly; the quality of the video is actually much lower than what we would expect. And third, the size of the video that gets created, which runs to multiple hundreds of MBs in the end, makes it just not practically feasible to upload and share in an optimised manner. So video was clearly not the path for us to go down. In fact, what we realised was that a very simple, smart image-capture experience, guided on how the images should be taken, where it's more or less a tap-tap experience for the user, works beautifully to give us all the information about the car that we were trying to get through video. So, you see, looking at the cycle of product development: rather than failing at the gates of a launch, where we would have invested a lot in this idea, built it, launched it and figured out, "Hey, it's just not working," we actually failed really frugally. We failed at the concept stage itself, which required far fewer resources and much less cost. That's learning number two. Learning number three is very interesting, so bear with me on this. I'm sure most of us are familiar with OKRs. I think they're a fantastic tool for figuring out strategy and giving teams really actionable goals that they can own, but I do feel there are some challenges when they get implemented on the ground. What really happens, practically speaking, at least in the digital environment I work in? Think of a quarter-on-quarter OKR cycle. In a quarter, all of the discovery work, meaning "Hey, what are we supposed to build?", gets front-loaded into the quarter, so everybody's scrambling to figure out what gets built, and all of the launches and other deliveries happen towards the back end of the quarter.
The trouble is that this is not the most optimal use of resources. The kinds of resources we need for discovery work and for delivery work are very different, and if all our initiatives and projects are at the same stage of discovery or delivery, we are putting a lot of pressure on those resources. That brings the quality down significantly, and we increase our chances of failure. So we looked at this and said: "Hey, how can we systematically, and on purpose, change this?" Just to give you an idea, this is a very oversimplified flow: in a quarter, the problem, opportunity or hypothesis work will probably sit at the front of the quarter, and the launch and analysis towards the back, with a bunch of these things happening for all the initiatives in the OKR. So resources like design, research, prototype testing and product are all constrained, doing problem and opportunity analysis for every initiative at the front of the quarter, which does not deliver as much quality as you would expect, again raising the chances of failure. We systematically tried to change that. Take the same example: let's assume we had these five or six initiatives we wanted to pick up. We wanted to integrate with an OEM to get cars' service history. We wanted to change a bunch of things in the UI. We wanted to do some data-science work to extract text from images. And we wanted to play around with the form we had, in terms of configuration and ordering, so that we could capture the best possible information. What we did was, rather than having everything happen at the same time, we mixed it up: we said that, at any particular point in time, these projects need to be systematically driven at different stages of the lifecycle. Some will be more in discovery; some will be more in delivery.
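The staggering idea can be sketched in a few lines. This is a hypothetical illustration only: the stage names, stage lengths and round-robin offsetting are my own invented simplification of the planning approach described, not a tool the team used. Each initiative enters the lifecycle one stage later than the previous one, so in any given week the portfolio is spread across stages instead of everyone doing discovery at the front of the quarter and launching at the back.

```python
# Illustrative sketch of staggered lifecycle planning (hypothetical names/stages).
# Instead of all initiatives entering the same stage at once, each one is
# offset by one stage, spreading the load on discovery vs. delivery resources.

STAGES = ["discovery", "design", "build", "launch", "analysis"]


def stage_for(initiative_index: int, week: int) -> str:
    """Return the lifecycle stage a given initiative is in during a given week.

    Each initiative starts one stage further along than the previous one,
    and every initiative advances one stage per week, wrapping around.
    """
    return STAGES[(week + initiative_index) % len(STAGES)]


def portfolio_view(n_initiatives: int, week: int) -> dict:
    """Snapshot of which stage each initiative occupies in a given week."""
    return {
        f"initiative-{i + 1}": stage_for(i, week)
        for i in range(n_initiatives)
    }


if __name__ == "__main__":
    # With five initiatives and five stages, every week covers all stages once,
    # so no single resource (e.g. research, or release engineering) is swamped.
    for week in range(3):
        print(f"week {week}: {portfolio_view(5, week)}")
```

The point of the sketch is the invariant, not the schedule itself: at any snapshot, the initiatives occupy distinct stages, which is exactly the "some in discovery, some in delivery" balance the talk describes.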
What that effectively did was ease the pressure on the resources so they could really work on this with quality, lessening our chances of failure going forward. And if I extend this a little further: picture that you have a lot of initiatives that need work in a particular timeframe; how would you organise or plan for this? Think of having different kinds of work happening along the product lifecycle at any particular point in time; that lessens the pressure on the kinds of resources that do the work, raising quality. One more thing: we could use this in multiple capacities. Picture a product manager or design manager who has to do this for one project with multiple tasks; we can zoom in to that project and ask, "Can I plan my tasks so that they are all at different stages at a particular point in time?" And we can zoom out a level, to a manager who needs to plan for multiple teams: "Can I get teams to be working at different stages as well?" This also does two things, particularly in view of stakeholder management and expectations. One, it allows a lot of room for us to spend time on the larger initiatives. In this particular example, look at initiative four, where we needed to give a lot of time to discovery work: while all of the other initiatives were constantly delivering something, we had the cushion to work on this in a little more depth. Two, it gives stakeholders a lot of visibility, not just into the progress but also into the process that product follows, and manages expectations a lot better. That's learning number three. Those are the three learnings I had, and I do want to bring all of this together in a quick summary. The first one: do spend significant time defining what your quarter-inch hole is.
What we're essentially trying to do there is avoid bad experiment design and going down the path of false positives or false negatives. The second one: think of the costs you incur by failing at a later stage, and try to validate at each stage to save on that cost later in the product development journey. And the last: to ease the pressure on the teams that need to work in a particular timeframe of the lifecycle, try to spread the work systematically across all your initiatives at any particular point in time. You've been a great audience. Thank you so much for listening and spending the time today. I do want to leave you all with one quote and quality that I value a lot for myself, and that has shaped who I am and what I do today: continue to stay curious, keep learning and keep an open mind. Take care. Cheers.