Experts at Scale: Solving Problems with Process, Professionalism, and Politics
Kevin, Manager of UX Research, and Brandi, UX Research Program Manager at LinkedIn, have been solving problems of scale for the past two years. By identifying why problems exist, understanding the dynamics within the organization, and ultimately developing a series of toolkits to enable faster research across all areas of the organization, they have been able to create sustainable, rapid programs of research. Their programs include enabling designers to run research sessions with employees, conducting three weeks of back-to-back research with full stakeholder engagement, and collecting quantitative usability stats. They will offer examples from their journey to create a systematic research flow enabled by clear process and templates, which helped designers and product managers gather quick feedback, as well as gave Design a seat at the “numbers table.”
In this talk, Kevin and Brandi will talk through:
- Scaling research at LinkedIn and how they built programmatic research approaches
- How the introduction of templates enabled better stakeholder buy-in
- Evangelizing UX across the organization to enable other teams like talent acquisition to participate in research and amplify their voice earlier in the development of a LinkedIn product they use every day
- How they created a streamlined process that enables qualitative researchers to produce valid and reliable quantitative output
Kevin Newton: Hi everyone, and thank you for tuning into our talk. My name is Kevin Newton, and I lead the user experience research team at LinkedIn, charged with scaling the impact of UXR while maintaining rigor.
Brandi Amm: I'm Brandi Amm. I'm a Research Program Manager here at LinkedIn who works with our user experience research team. I work with Kevin's team to identify problems and produce solutions, processes, and guidelines at scale. We are here to present our simple process for scaling research to achieve continuous learning about products.
Kevin Newton: Thanks, Brandi. First, we will explain the framework we use to scale, and then, most importantly, walk through two case studies to highlight how the framework comes to life in the real world. Now, if you're watching this before the UXDX conference, I hope you'll join me on Wednesday, June 16th, 2021 at 10:55 AM Pacific time to discuss Continuous Discovery and Practice live with Teresa Torres, facilitated by Flavia Neves. With that, let's get started. This framework is our guiding compass for what, why, when, and how we create successful programs of research at LinkedIn, and it's quite simple: three principles. You'll notice they're very similar to how user experience is approached more broadly. This is intentional. When we consider scaling research, we think of it like building a product that will be used by teams to create continuous discovery. So you can see: you've got to identify a problem, design a solution, and then make it easy and consistent. Let's dive a little deeper into each principle. Identify a real problem. Just like when a company creates a new product, it's crucial that whatever is created solves a real problem; only this problem exists within the company or organization that you work for, instead of out in the public. To deepen the analogy with startup culture: this step, when done correctly, ensures that the process built to scale research is an aspirin, something that someone needs to relieve a pain, as opposed to a process that is a vitamin, something that could provide benefits sometime in the future but doesn't feel necessary in the present moment. A simple way to do this when it comes to scaling research is to ask yourself two simple questions: What needs, if any, aren't being met? And why aren't they being met? You may need to do a little research; talk to your stakeholders and colleagues to discover what the actual problem is.
However, there is a danger here: don't scale research just because you can, or because you were asked to by your boss. It will be exponentially more difficult if you are not solving an actual problem. So, once you have identified an actual problem, it's time to ideate on how you might solve it, similar to the design thinking process. These ideas should not be limited to how you would need to solve this problem. This step really is about answering the question on the screen: What must be true for this problem to be solved? If the problem is, for example, that you have no dedicated researchers, then what would need to be true is that you have someone who can conduct research on a consistent basis. Like the last principle, this has a trap too. It would be easy to say, "We need to hire a researcher, since we don't have anyone to do research," or "We need to secure headcount for a researcher." The trap here is that there are many ways to solve the problem when the criteria do not include the solution. In this example, having someone to conduct research on a consistent basis is not the same as needing to hire a full-time researcher. Once you have solution-agnostic criteria, then you can find solutions that are both easy and consistent. At this point, the question and the answer are all about efficiency, keeping in mind, of course, that the research needs to be rigorous and valid. Let's take the same example: needing someone to do consistent research. What's the most efficient way to make that true? Do you have headcount? Great, then you can hire a full-time researcher. Do you have budget instead? Maybe you need to get a contractor or hire an agency to start doing some research for you. Do you have a research team within marketing? Could you maybe partner with them to run consistent surveys? It's not the deepest learning in user experience, but it could help show the value that then leads to more investment.
Like the other two principles, there's one thing that you really want to stay away from during this step: do not over-complicate it for those who are going to carry out the process. That second part is a very important qualifier, because the process or system or program that you design to solve the problem could in fact be super complicated in the background; for those running it and participating in it, the process and experience should feel easy and consistent. So, there it is: three simple principles. Now, Brandi's going to walk you through our first example of how these principles come to life in the real world. Brandi?
Brandi Amm: Thank you, Kevin. The first case study we will be speaking about is Lemonade Stands. For context: in the beginning of creating programmatic work at LinkedIn, we loved to tie programs together with fun food names that helped represent the type of research being done. Not only was this a way for us to add humor, but it was also a way for those who were not aware of UXR to gain some idea of how the programs work. So, what is a lemonade stand? Lemonade stands are quick 30-minute sessions where internal LinkedIn employees can sign up and provide feedback on about-to-launch design flows and interactions. These evaluative sessions revolve around concept testing or usability testing with our internal employees. Following the framework Kevin spoke about before, our first step was to identify a real problem. In this case, the problem came to us while working with our partner teams: we still often found ourselves in the position where we were not meeting the needs of our design and product teams. Our first question was: what needs are not being met? First, we needed to be able to do more evaluative work on designs that were getting ready to launch. While we were certainly doing some of this work, it was often lengthy, and there was always the ever-concerning looming timeline; with more products being launched, we needed a system that worked in a timely fashion at scale. Second, we had internal employees who were not only using the product but were some of our biggest evangelizers to our external customers. While there were some internal avenues to give feedback, we wanted to create a more robust system, as well as a way for our employees to feel that their insights were being heard and implemented. As we all know, part of solving the problem is being able to identify the root of that problem. Our next step was to ask why. Why did these problems exist to begin with?
Well, ultimately it came down to the most basic of problems: a lack of resources and a lack of established processes. We did not believe we had the research and research operations resources to support doing the evaluative work at a larger scale and a quicker pace. We had no processes for the kind of quick evaluative research at scale we'd been hoping for. We needed to reduce the friction, create and streamline our processes to use fewer resources, and bring our stakeholders along for the ride. Now that we had defined the true problem, we next had to ask ourselves a few questions to start designing a solution. What must be true for us to successfully design a solution? Well, our first thought was that we had to reduce the bandwidth required from the researcher. This led to the idea that the lemonade stands must be designer-led. With the designer as the researcher, we were reducing the time researchers needed to be involved in sessions and increasing our partners' involvement in the research process. This was an ingenious idea, because it was a way for our partners to be fully present in the research process while also creating empathy and deepening the collaboration between our teams. We next had to identify how the researcher would be involved. We have found that outlining roles and responsibilities is a key way to reduce friction. So, we put the researcher in the role of the consultant: they were a key partner in the process but leaned on the designer to lead, while providing the foundation of knowledge needed to be successful in doing this type of research. They could also be a resource for the designer, so the designer felt supported and knew we were setting them up for success. Our next, and in my humble opinion most crucial, step was streamlining operations. While "streamline processes" is in every operations professional's handbook, this definitely wasn't an easy task.
We had to consider all the touch points in the research process for our research operations team, and then either eliminate the touch point with tools and templates or reduce the bandwidth required from our research program managers, lab operations, and research coordination team. Well, I'll tell you, this could keep me up at night, and I'm really only partially joking. Lucky for me, I had a great partner to work through these problems with. The last solution we had to design was around closing the loop with our internal employees. At LinkedIn, we have a saying, "Relationships matter," and we take this very seriously. Our relationship with our internal customers matters; their voice is important, and it's well worth hearing. So, when we were designing a solution for the problem of having no internal customer voice, we wanted to make sure that this was a relationship that not only helped us but also helped the customers who came into work every day at LinkedIn. We wanted to be able to gather these valuable insights while also connecting with our internal voice and making sure they felt heard. Once we had identified the problem and designed a solution, it was time for us to consider how we would make the solution easy and consistent. What was the most efficient way to make those things true? Well, our first step was to create a self-sign-up sheet through our internal project tracking tool that designers could use to sign up for a lemonade stand. We then attached this to an easily remembered internal site. This site allowed designers to see upcoming dates and sign up at their own convenience. Once they signed up, a researcher and a research operations person would be notified, and designers could be assured that the dates were confirmed for sessions. Research operations then worked with our researchers to create planning templates for designers. These templates would then be reviewed by the researcher.
This offloaded the burden of planning from the researcher to the designer and created consistency between different groups of sessions. The consistency streamlined the process and allowed us to go from multiple planning meetings down to, if needed, one. Oftentimes it could still be done asynchronously. Through outreach and evangelizing the program, we were able to create an internal database of all eligible LinkedIn employees who would volunteer to participate. This probably required the most time from both teams. We spent multiple meetings working with our internal employees' leadership to align on how this program could help both teams. Over time, we expanded the database as more internal employees heard of the program and wanted to be a part of it. And lastly, we created a spreadsheet template for the results, which was then reviewed by the user experience researcher. We would leverage the spreadsheet to close the loop. This included sending comms about insights not only to the relevant design and research teams, but also to our internal employees, so they could see their feedback was being used and moving us forward in the process. This is how we used the framework to create lemonade stands and solve the problems we were seeing within this area of the business. Now, I'll pass it off to Kevin, and he'll speak about another program we created, Project Ripple.
Kevin Newton: Thanks, Brandi. Project Ripple is the code name for our efforts to quantify the user experience. It's called Project Ripple for two reasons. First, we knew we would want to start small and spread across the company, very much like a ripple in still water. And second, "Ripple" is the title of a song by the Grateful Dead, a legendary San Francisco jam band. There's a line I will mention in just a minute that fits this initiative perfectly. So, let's take a look at the framework. Step one is to identify a real problem. When we asked ourselves what needs weren't being met, it was obvious to us in the design organization: there was no consistent assessment of the user experience at scale. Sure, we tested things in the lab, we uncovered problems to be solved, and we gave directional feedback on design choices, but there was no way to measure whether or not the experience was getting better. We have a lot of metrics that guide decision making at LinkedIn, but none of them capture what it's like, what it feels like, to use the product, and they certainly don't do so in a continuous and consistent way. Another important problem to solve was design not really having a seat at the numbers table. When a designer made an argument for why an experience should be like X, Y, or Z, and a PM or a data scientist or a market researcher had numbers to suggest that the experience was doing fine as it was, there wasn't much design could do to convince them or to argue against that. We needed a guiding number. When thinking about why these needs weren't being met, there was a process reason: we didn't have a process within design to measure the user experience. There was a professional reason: we had built the UXR team at LinkedIn to exclusively conduct qualitative research. That's what we were known for, right?
We were a team that dove deep into the lives of our customers and members to understand their experience, and then presented a case for why LinkedIn should care about a problem, with outcomes. Lastly, there was a political reason. We already had so many metrics that functions outside of design weren't particularly interested in just another number. So, now it comes to that Grateful Dead lyric that I promised. It goes: "Ripple in still water, when there is no pebble tossed, nor wind to blow." The company wasn't asking for this; there was no catalyst, like a pebble or wind. But we knew that this was a real problem within design, and we believed that if we could solve it for them, for us, the value would spread from there. So, that's what we did. We set out thinking about what would have to be true to solve these problems. We knew that to have a clear process, the methodology would need to be airtight: rigorous, valid, repeatable. This would increase both our credibility in a new space, quantified UX, and the longevity of the program, by allowing any new hire to learn how to conduct Ripple studies with ease. We also needed a way to consistently run studies with 300 participants. We typically recruited 8 to 20 participants for our UXR studies, so this was a departure for us, and we needed to figure it out. In terms of the team, we needed to ensure that qual researchers could conduct quant analysis. Thinking back to when I introduced the framework, remember that the "how" doesn't necessarily need to be settled at this stage; it can be, but it doesn't have to be. All we knew was that qual needed to be able to do quant. And lastly, as far as the political reason goes, the value offered to the product organization needed to be blindingly obvious. Simple, right? Not only did we need to make those things true, but we needed to make it easy and consistent as well. So, we asked ourselves: what was the most efficient way to make those things true?
Well, first: developing the process. We immediately partnered with an expert in the field to develop the methodology. This saved us the time of trial and error and prevented us from reinventing the wheel. Although we did make LinkedIn-specific modifications when necessary, we leaned on the vendor, on the agency, to help us develop this process and this methodology. Next, we once again worked with the operations team to create templates that could be leveraged by any UXR to create a Ripple study that would look almost identical, with the only difference being the content tested; the metrics collected, the questions asked, when they were asked, where they were asked, and how they were asked would all be the same every time. We leveraged a piece of software named UserZoom to collect the larger amounts of data, and worked with the operations team to create a straight-to-participant recruiting strategy. Typically, there's a step where we recruit people who indicate they are interested in participating in a UXR study, and then from that list we invite them to the actual study. Well, this new strategy cut out the middle step: we simply invited everyone who clicked straight into the actual UXR study. As far as the qual researchers doing quant, that was a challenge in two ways. First, it wasn't their skillset, and it wasn't why we hired them in the first place, so we didn't expect it to be their skillset. And second, it wasn't necessarily an area they were excited to learn or explore. Having a background in anthropology, I know a lot of qualitative researchers, and I have a feeling there aren't too many of them who just get super excited about studying hypothesis-testing statistics on a Friday night. So, we took the same approach we took with the lemonade stand: how can we make this easy and consistent? The answer was to build a self-updating calculator in Excel, which, let me tell you, is super complicated, but it feels easy.
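The talk describes the Excel calculator only at a high level, but the core computation, flagging a statistically significant difference between two groups and reporting it in plain language, can be sketched in a few lines. This is an illustrative Python approximation, not LinkedIn's actual calculator: it assumes a Welch-style two-sample t-test with a normal-approximation p-value (reasonable at the ~300-participant sample sizes mentioned), and all function names and data are made up.

```python
import math
import statistics

def welch_t_test(group_a, group_b):
    """Two-sample Welch-style t-test; returns (t, approximate two-sided p).

    The p-value uses a standard-normal approximation via math.erfc,
    which is close enough for samples of a few hundred participants."""
    mean_a, mean_b = statistics.fmean(group_a), statistics.fmean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    se = math.sqrt(var_a / len(group_a) + var_b / len(group_b))
    t = (mean_a - mean_b) / se
    p = math.erfc(abs(t) / math.sqrt(2))  # two-sided tail probability
    return t, p

def plain_language(name_a, name_b, group_a, group_b, alpha=0.05):
    """Turn the test result into the kind of dashboard sentence described."""
    t, p = welch_t_test(group_a, group_b)
    if p < alpha:
        direction = "higher" if t > 0 else "lower"
        return f"{name_a} scored meaningfully {direction} than {name_b} (p={p:.3f})"
    return f"No meaningful difference between {name_a} and {name_b} (p={p:.3f})"

# Fabricated usability scores for two design variants, ~300 participants each
new_design = [72, 80, 85, 78, 90, 84, 76, 88, 82, 79] * 30
old_design = [65, 70, 68, 72, 66, 74, 69, 71, 67, 73] * 30
print(plain_language("New design", "Old design", new_design, old_design))
```

In practice this is the "paste the raw data, read the sentence" experience: the researcher never touches the formula, only the output, which is what makes a complicated back end feel easy.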
The calculator allowed any UXR to follow the study templates that we had developed with operations, create a Ripple study in the same way, download the raw data from UserZoom, essentially paste it into the Excel calculator, and then instantly have a dashboard that tells them in plain language which groups, if any, have a meaningful, statistically significant difference in how they experienced the product being tested. It was pretty cool. Lastly, we have had an overwhelmingly positive response from our product partners. They often ask: When are we running another Ripple study? What is the future of the program? When will it be run consistently on all of the products? Additionally, we are working with other research functions to connect our metric to the popular metrics used within LinkedIn; you can think NPS or CSAT (Customer Satisfaction), or even data science mappings of how people use the product. Our goal is to be able to connect our metric with those metrics. To use NPS as an example, we might be able to say: if you move our score, the Ripple score, by X points, you can move NPS by Y points. Eventually, we hope to have a program that is measuring our experiences in a continuous way, complete with a self-service dashboard to explore product scores and the issues causing those scores, so that really any team member at LinkedIn could just go there and understand how their product is doing and what the experience feels like for their users. And that is Project Ripple. So, there it is: three simple principles, each with a fairly strict warning. We've found that if you solve a real problem for someone by understanding what needs to be true in the design of the solution, and then find the most efficient way to make those things true through creating ease and consistency for everyone involved, research will scale itself. When it does, the value of UX goes up in the company, and the team learns together more continuously and more consistently. So, thank you for listening. If you have any questions, feel free to come to the Q&A on the 16th or find us on LinkedIn, and let's start a conversation. Thanks.