Ramu Kallepalli
If you are a product manager, designer, lead engineer, or product leader, your job is to solve customer problems and drive business value. How do you do product discovery? In this book, Teresa Torres lays out continuous discovery habits: how to set outcomes, uncover customer problems, prioritize, come up with creative solutions, test assumptions quickly, and a lot more. You will develop a deeper understanding of your customers' needs, problems, and desires through regular contact and simple research methods. You will shift away from delivering outputs toward fewer goals, more clarity, and a focus on solving customers' problems in ways that drive business value. You will involve most of the team in customer interviews, mapping the customer journey, ideating on solutions, and discussing results. You will have tools to show your stakeholders your thinking and to have better conversations about where to go next. You will improve customer and business outcomes. Your team will gain confidence.
What Is Continuous Discovery?
How do you know you are making a product that your customers want? How do you improve it over time? How do you guarantee that your team is creating value for your customers in a way that creates value for your business? All product teams do a set of activities (discovery) to decide what to build and then do a different set of activities (delivery) to build and deliver it. Companies put the emphasis on delivery while underinvesting in discovery. A digital product is never done; it will continue to evolve. We need a continuous discovery framework both to discover new products and to iterate on existing ones, continuously uncovering unmet customer needs and the solutions that address them.
In 2001, a group of engineers came up with the Agile Manifesto. They advocated for a) shorter cycles with more frequent feedback, b) a constant, sustainable pace, c) maximum flexibility, and d) simplicity. They were concerned with how much of what they built was never used or offered limited value, and they advocated for teams to ruthlessly limit what they built.
More frequent releases meant we could measure the impact of what we were building sooner. We got better at instrumenting our products, usability-testing our solutions, and starting small and iterating toward bigger solutions. We still struggled with deciding what to build. We still learned after shipping code that we’d built the wrong stuff. We started to question how we made discovery decisions. Instead of making them in conference rooms with just our own thoughts, we started engaging customers throughout the discovery process; instead of just validating our ideas at the end of discovery, we started co-creating with customers from the very beginning. Our discovery cadence started to change. As our delivery cycles got shorter, so did our discovery cycles. Many teams are adopting, developing, and iterating on their own continuous discovery practices. They are engaging with customers on a regular basis and testing their assumptions. Rather than validating their ideas, they are co-creating with customers — combining the team’s knowledge of what’s technically possible with the customer’s knowledge of their own needs, pain points, and desires to build better products. They are co-creating on a continuous cadence, supporting the continuous development of their products. They are adapting to changes in customer needs and in technology in real time.
Digital products today are conceived, designed, built, and delivered by a cross-functional team composed of product managers, designers, and software engineers. Product managers make sure that what they are building is viable for the business and valuable to customers. Designers bring visual, interaction, and systems-design chops that help ensure customers will understand how to best use a product and delight in that use. Software engineers ensure that the product is reliable and stable and delivers on its promise. Collectively (the product trio), they are responsible for ensuring that their products create value for the customer in a way that creates value for the business.
The prerequisite mindsets are 1) outcome-oriented, 2) customer-centric, 3) collaborative, 4) visual, 5) experimental, and 6) continuous. As part of continuous discovery, customer interviews, usability testing, and A/B testing are pervasive. Continuous discovery means: a) at a minimum, weekly touchpoints with customers, b) by the team building the product, c) where they conduct small research activities, d) in pursuit of a desired outcome. Product teams make decisions every day. Our goal with continuous discovery is to infuse those daily decisions with as much customer input as possible. The purpose of these customer touchpoints is to conduct research in pursuit of a desired outcome. We are doing research so that we can serve our customers in a way that creates value for our business.
Managers must convert society’s needs into opportunities for profitable business.
Peter Drucker
If I had an hour to solve a problem, I’d spend 55 minutes thinking about the problem and 5 minutes thinking about solutions.
Albert Einstein
Pursue business needs by addressing your customers’ needs. Starting with outcomes (customer and business), rather than outputs (features), is what lays the foundation for product success. If a product trio tasked with delivering a desired outcome wants to pursue business value by creating customer value, they’ll need to frame the problem in a customer-centric way. They’ll need to discover customer needs, pain points, and desires (collectively known as opportunities) that, if addressed, would drive their business outcome. The opportunity space represents the problem space as well as the desire space. The product trio must discover and explore the opportunity space. The opportunity space is infinite, and this is precisely what makes reaching our desired outcome an ill-structured problem. How we frame an ill-structured problem impacts how we might solve it. The two most important steps for reaching the desired outcome are, first, how we map out and structure the opportunity space and, second, how we select which opportunities to pursue. The right problem framing will ensure that we explore and ultimately ship better solutions.
Start by defining a clear outcome — one that sets the scope for discovery. From there, we must discover and map out the opportunity space. Finally, we need to discover the solutions that will address those opportunities and thus drive our desired outcome. Visualize it using an Opportunity Solution Tree (OST).
Opportunity solution tree helps you resolve the tension between business needs and customer needs. The key here is filtering the opportunity space by considering only the opportunities that have the potential to drive the business outcome. By mapping the opportunity space, the team is adopting a customer-centric framing for how they might reach their outcome. The outcome and the opportunity space constrain the types of solutions the product trio might consider. When a team takes the time to visualize their options, they build a shared understanding of how they might reach their desired outcome. If they maintain this visual as they learn week after week, they maintain that shared understanding, allowing them to collaborate over time. This collaboration is critical to product success. A continuous mindset requires that we deliver value every sprint. We create customer value by addressing unmet needs, resolving pain points, and satisfying desires (addressing opportunities). The opportunity solution tree helps teams take large, project-sized opportunities and break them down into a series of smaller opportunities. As you work your way vertically down the tree, opportunities get smaller and smaller. Teams can then focus on solving one opportunity at a time. As they address a series of smaller opportunities, these solutions start to address the bigger opportunity. The team learns to solve project-sized opportunities by solving smaller opportunities continuously.
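To make the shape of the tree concrete, here is a minimal, hypothetical sketch (not from the book; class and field names are my own) of how an opportunity solution tree could be modeled: the outcome sits at the root, opportunities hang off it and can nest into smaller child opportunities, solutions hang off opportunities, and assumption tests hang off solutions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AssumptionTest:
    assumption: str           # e.g., "Subscribers want to watch sports on our platform"
    result: str = "untested"  # "supported", "refuted", or "untested"

@dataclass
class Solution:
    idea: str
    tests: List[AssumptionTest] = field(default_factory=list)

@dataclass
class Opportunity:
    need: str  # framed as a customer need, pain point, or desire
    children: List["Opportunity"] = field(default_factory=list)  # smaller sibling opportunities
    solutions: List[Solution] = field(default_factory=list)

@dataclass
class OpportunitySolutionTree:
    outcome: str  # the desired product outcome at the root
    opportunities: List[Opportunity] = field(default_factory=list)

# Tiny example: as you move down the tree, opportunities get smaller and more addressable.
tree = OpportunitySolutionTree(
    outcome="Increase average weekly viewing minutes",
    opportunities=[
        Opportunity(
            need="I can't find something to watch",
            children=[
                Opportunity(need="I don't know what's new"),
                Opportunity(need="Search doesn't understand what I type"),
            ],
        )
    ],
)
```

The nesting of opportunities within opportunities is what lets a team break a project-sized opportunity into smaller siblings and address them one at a time.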
Four villains of decision-making that lead to poor decisions are a) looking too narrowly at a problem, b) looking for evidence that confirms our beliefs (confirmation bias), c) letting our short-term emotions affect our decisions (we fall in love with our ideas), and d) overconfidence. Instead of framing our decisions as “whether or not” decisions, develop a “compare and contrast” mindset. Instead of asking “Should we solve this customer need?” we’ll ask, “Which of these customer needs is most important for us to address right now?” Visualizing your options on an opportunity solution tree will help you catch when you are asking a “whether or not” question and encourage you to shift to a compare-and-contrast question. Product trios need to move forward and act on what they know today, while also being prepared to be wrong. Balance having confidence in what you know with doubting what you know, so that you can take action while still recognizing when you are on a risky path. Most of the decisions that we make in discovery are reversible decisions. Tackle the analysis-paralysis problem: if we do the necessary work to test our decisions, we can course-correct when we find that we made the wrong decision. Learn how to make fast decisions and then quickly test to understand the consequences of those decisions. Visualizing each decision point and the options that you considered on the opportunity solution tree will help you revisit past decisions when needed, and that will give you the context you need to course-correct.
The best designers evolve the problem space and solution space together. As they explore potential solutions, they learn more about the problem, and as they learn more about the problem, new solutions become possible. These two activities are intrinsically intertwined. The problem space and solution space evolve together. “Based on my current understanding of my customer, I thought this solution would work. It didn’t. What did I misunderstand about my customer?” We then need to revise our understanding of the opportunity space before moving on to new solutions. The product trio should be responsible for both the problem space and the solution space. By visually mapping out the opportunity space on an opportunity solution tree, a product trio makes their understanding of their customer explicit. When a solution fails, they can revisit this mapping to quickly revise that understanding.
The shape of their tree will help guide their discovery work. The depth and breadth of the opportunity space reflect the team’s current understanding of their target customer. If the opportunity space is too shallow, it can guide us to do more customer interviews. A sprawling opportunity space, on the other hand, reminds us to narrow our focus. If we aren’t considering enough solutions for our target opportunity, we can hold an ideation session. If we don’t have enough assumption tests in flight, we can ramp up our testing. While many teams work top-down, starting by defining a clear desired outcome, then mapping out the opportunity space, then considering solutions, and finally running assumption tests to evaluate those solutions, the best teams also work bottom-up. They use their assumption tests to help them evaluate their solutions and evolve the opportunity space. As they learn more about the opportunity space, their understanding of how they might reach their outcome (and how to best measure that outcome) will evolve. These teams work continuously, evolving the entire tree at once. They interview week after week, continuing to explore the opportunity space, even after they’ve selected a target opportunity. They consider multiple solutions for their target opportunity, setting up good “compare and contrast” decisions. They run assumption tests across their solution set so that they don’t overcommit to less-than-optimal solutions. They visualize their work on their opportunity solution tree, so that they can best assess what to do next.
The key to bringing stakeholders along is to show your work. Summarize what you are learning in a way that is easy to understand, that highlights your key decision points and the options that you considered, and creates space for them to give constructive feedback. A well-constructed opportunity solution tree does exactly this. You can remind them of the desired outcome. You can share what you’ve learned about the customer, by walking them through the opportunity space. Your tree should visually show what solutions you are considering and what tests you are running to evaluate those solutions. You are showing the thinking and learning that got you there. This allows your stakeholders to truly evaluate your work and to weigh in with information you may not have. Your tree will act as your roadmap, helping you find the best path to your desired outcome.
Continuous Discovery Habits
- Shift from an output to an outcome mindset
- Frame, refine, and prioritize the opportunity space
- Generate and evaluate targeted solutions
- Measure the impact of your work all the way to delivery so that delivery fuels discovery
- Manage cycles of discovery, keeping you on track, even when you learn something surprising
- Show your work, bring your stakeholders along throughout the discovery process
Too often we have many compelling goals that all seem equally important.
Christina Wodtke
Improving retention, measured at the 90-day mark, was a core business outcome for tails.com. During customer interviews, they realized that they could prevent churn if they focused on increasing the perceived value of their tailor-made dog food and increasing the number of dogs that liked their food, two product outcomes that were actionable. Product teams have to do discovery work to identify the connections between product outcomes (the metrics they can influence) and business outcomes (the metrics that drive the business). Translate business outcomes into product outcomes you can deliver, negotiate appropriate product outcomes with your leadership, and determine when to set learning goals vs. performance goals. Objectives and Key Results (OKRs) is one flavor of managing by outcomes. A fixed roadmap communicates false certainty. An outcome communicates uncertainty: we know we need this problem solved, but we don’t know the best way to solve it. It gives the product trio latitude to explore and pivot when needed. The best teams are adopting an outcome-focused mindset.
A business outcome measures how well the business is progressing (e.g., retention). A product outcome measures how well the product is moving the business forward (dogs who like the food). A traction metric measures usage of a specific feature or workflow in the product (owners who use the transition calendar). Business outcomes often start with financial metrics (e.g., grow revenue, reduce costs); they are lagging indicators. 90-day retention was a lagging indicator. We want to identify leading indicators that predict the direction of the lagging indicator. Sonja’s team believed that increasing the perceived value of tailor-made dog food and increasing the number of dogs who liked the food were leading indicators of customer retention. Assigning a team a leading indicator is always better than assigning a lagging indicator. Product trios will make more progress on a product outcome than on a business outcome. Assigning product outcomes to product trios increases their sense of responsibility and ownership; if a product team is assigned a business outcome, it is easy for the trio to blame marketing or customer support for not hitting their goal. If Sonja’s team believed more dogs would like the food if their owners had a better transition plan, they could launch a transition calendar and measure engagement with that calendar as their traction metric. But this strategy assumes that the transition calendar is the right output. Product outcomes give product trios far more latitude to explore and enable them to make the decisions they need to ultimately drive business outcomes. Assigning traction metrics to more junior product trios is a great way for a junior team to get some experience with discovery methods before being given more responsibility. For other teams, stick with product outcomes. Use traction metrics when you are optimizing a solution, not when the intent is to discover new solutions; there, a product outcome is a better fit.
Setting a team’s outcome is a two-way negotiation between the product leader and the product trio. The product leader brings an across-the-business view, should communicate what’s most important for the business at this time, and should identify an appropriate product outcome for the trio to focus on. Outcomes are a good way for the leader to communicate strategic intent. The product leader can encourage her to keep focusing broadly on the number of dogs who like the food. The key is that the leader should not narrow the scope so much that the team is tasked with a traction metric — engagement with the transition calendar. The product trio brings customer and technology knowledge to the conversation and should communicate how much the team can move the metric in the designated period of time. The trio should not be required to communicate what solutions they will build at this time, as this should emerge from discovery. “We can increase the number of dogs in that segment who like the food by 10% in the next three months.” The product leader and product trio can negotiate resources (e.g., adding engineers to the team) and/or remove competing tasks from the team’s backlog, giving them more time to focus on delivering their outcome. A product trio will need some time to learn what might move the metric. This is why a stable product trio focused on the same outcome over time is so critical. Research shows that teams who participated in setting their own outcomes took more initiative and performed better than colleagues who were not involved in setting their outcomes. Teams that set specific, challenging goals outperform teams who don’t. Challenging goals create focus, inspire effort and persistence, and help to surface relevant organizational knowledge. Set SMART (specific, measurable, achievable, relevant, and time-bound) goals. The team has to believe that they can achieve the goal, supporting the idea that teams need to be involved in defining their own outcomes. Challenging goals can decrease performance if the team doesn’t have strategies for how to achieve that goal. In the research, setting an initial learning goal (discover strategies that might work) was more effective than setting a performance goal. Product trios, when faced with a new outcome, should first start with a learning goal (e.g., discover the opportunities that will drive engagement) before being tasked with a performance goal (e.g., increase engagement by 10%). Start with a learning goal and work your way toward a SMART performance goal.
Product trios typically find themselves in one of four situations:
- The trio is asked to deliver outputs, not outcomes (most common)
- The product leader sets their outcomes with little input from the team
- The product trio sets their outcomes with little input from their product leader
- The product trio negotiates their outcomes with their leaders (the ideal)
Connect the dots between the business outcome and potential product outcomes. Can you clearly define how this new initiative will impact a product outcome? Is that product outcome a leading indicator of the lagging business outcome? What business outcome are we trying to drive with this product outcome? Clearly communicate how far you think you can get in the allotted time. Ask your product leader for more business context (Who is the target customer for this initiative? What business outcome are we trying to drive with this initiative? Why do we think this initiative will drive that outcome? What’s most important to the business right now?). Use this information to map out the most important business outcomes and the product outcomes that might drive them. Choose a product outcome that your team has the most influence over.
- Is your team being tasked with a product outcome and not a business outcome or a traction metric?
- Is the traction metric well understood? Have you already confirmed that your customers want to exhibit the behavior being tracked?
- Are you starting with a learning goal (discover relevant opportunities: needs, pain points, and desires) before committing to a challenging performance goal?
- Have you set a specific and challenging goal?
Focus on one outcome at a time. Set an outcome for your team and focus on it for a few quarters; it takes time to learn how to impact a new outcome. Make sure your outcome represents a number, even if you aren’t sure yet how to measure it (e.g., “Increase the number of course views that include reviews”). To shift a goal from an output toward an outcome, question the impact it will have. In addition to your primary outcome, a team needs to monitor health metrics to make sure that they aren’t causing detrimental effects elsewhere; for example, the goal might be to increase acquisition without negatively impacting satisfaction.
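As a small illustration of “your outcome represents a number,” here is a hypothetical sketch (the event names and fields are invented, not from the book) that computes a primary outcome metric alongside a health metric from a list of product events, so a team can watch both at once.

```python
from typing import Dict, List

def outcome_and_health(events: List[Dict]) -> Dict[str, float]:
    """Compute a primary outcome and a health metric from raw product events.

    Assumed event shapes (illustrative only):
      {"type": "course_view", "has_reviews": True}
      {"type": "satisfaction_survey", "score": 4}   # 1-5 scale
    """
    course_views = [e for e in events if e["type"] == "course_view"]
    surveys = [e for e in events if e["type"] == "satisfaction_survey"]

    # Primary outcome: course views that include reviews (a number the team can move).
    views_with_reviews = sum(1 for e in course_views if e.get("has_reviews"))
    # Health metric: average satisfaction, watched so gains on the primary aren't hurting it.
    avg_satisfaction = (sum(e["score"] for e in surveys) / len(surveys)) if surveys else 0.0

    return {"course_views_with_reviews": float(views_with_reviews),
            "avg_satisfaction": avg_satisfaction}

print(outcome_and_health([
    {"type": "course_view", "has_reviews": True},
    {"type": "course_view", "has_reviews": False},
    {"type": "satisfaction_survey", "score": 4},
]))
```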
Discover, structure, and prioritize the opportunity space. Build a shared understanding as you discover the best path to your desired outcome. The experience map, interview snapshots, and opportunity solution tree (OST) are not one-time artifacts. They’ll continue to evolve as your understanding of your customers’ context (your experience map) and their needs, pain points, and desires (the opportunity space on your opportunity solution tree) evolves. Start by building an experience map that reflects what you currently know about your customer. Your experience map will guide you as you interview customers to discover specific opportunities. You’ll capture what you are learning from each interview on an interview snapshot. You’ll map out and structure those opportunities on an opportunity solution tree and use the tree structure to help you assess and prioritize the opportunity space.
If we give each other time to explain ourselves using words and pictures, we build shared understanding.
Jeff Patton, User Story Mapping
To chart the best path to a desired outcome, you need to discover and map out the opportunity space. To make sense of the opportunity space, we need to take an inventory of what we already know. This is critical on cross-functional teams, where each member brings a diverse set of knowledge and experiences. When working with an outcome, it can feel overwhelming to know where to start. First, map out your customers’ experience as it exists today. Each product-trio member maps out their own perspective, doing the best they can. Once they have each created their individual experience map, they take the time to explore each other’s perspectives. The trio then works to merge their unique perspectives into a shared experience map that better reflects what they collectively know. It contains hunches and possibilities, not truth, but it gives them a clear starting point: they have made explicit what they think they know, where they have open questions, and what they need to vet in upcoming customer interviews. This shared experience map will guide your customer interviews, and it will help give structure to the opportunity space. It will evolve as your team learns more about your customers. Start with your desired outcome (e.g., “What’s preventing our customers from completing their application today?”). We might set our scope broadly: “How do customers entertain themselves today?” For most teams, this scope is too broad. We don’t want to define our scope too narrowly either. With “How do customers entertain themselves using our service?” we rule out any inspiration we might get from how they use other streaming-entertainment services, how they entertain themselves through their cable or satellite-dish packages, or YouTube. With “How do customers entertain themselves with video?” we constrain the scope, but not too much. The key is to have a conversation as a team about the scope that gives room to explore while staying focused on your outcome. Once you’ve defined the scope of your experience map, you are ready to take an inventory of your individual knowledge before working to develop a shared understanding of what you collectively know. Start individually to avoid groupthink. It’s critical that each member of the trio start by developing their own perspective before the trio works together to develop a shared perspective. Experience maps are visual, not verbal. When we draw an experience map, rather than verbalize it, it’s easier to see gaps in our thinking, to catch what’s missing, and to correct what’s not quite right. The goal is not to create a piece of art but to visualize your thinking so that you can examine it. Draw the experience of your customer. As we learn about our customers, we will add far more detail to the map. Do NOT describe context with words: language is vague, drawing is more specific. You can’t draw something specific if you haven’t taken the time to get clear on what those specifics are. Your goal is to do the work to understand what you know. Once each member of your trio has taken the time to inventory what they know, it’s time to explore the diverse perspectives on your team.
Be curious. Take turns sharing your drawings among your trio. Ask questions to make sure you fully understand each other’s point of view. Don’t worry about what they got right or wrong (from your perspective). Pay particular attention to the differences. Don’t advocate for your drawing; share your point of view, answer questions, and clarify your thinking. The trio’s shared map is stronger because they synthesize the unique perspectives on the team into a richer experience map than any of them could have created individually. Focus on synthesizing your work together rather than choosing the best drawing to move forward with. Start by turning each of your individual maps into a collection of nodes and links. Links help show relationships between the nodes: I might loop back several times (I choose Netflix but can’t find anything to watch, so I switch to YouTube). Links can show the movement through the nodes. Arrange the nodes from all of your individual maps into a new, comprehensive map. Feel free to collapse similar nodes, but be careful not to generalize so much that you lose key detail. Use arrows to show the flow through the nodes. Don’t just map out the happy path; capture where steps need to be redone, where people might give up out of frustration, or where steps might loop back on themselves. Once you have a map that represents the nodes and links of your customer’s journey, add context (visually) to each step. Do not get bogged down in endless debate. Drawing really is a magic tool in your toolbox; use it often. Use boxes and arrows. Stick figures and smiley faces are perfectly okay. Drawing engages a different part of your brain than language does. The more you draw, the more you’ll realize drawing is a superpower. Remember, this is your first draft, intended to capture what you think you know about your customer. We’ll test this understanding in our customer interviews and again when we start to explore solutions. As you discover more about your customer, continue to hone and refine this map as a team. Otherwise, your individual perspectives will quickly start to diverge even when you are working with the same set of source data.
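Since the map is just nodes and links, a trio comfortable with code could represent it as a tiny graph. The sketch below is a hypothetical illustration (not from the book; the names and example maps are invented): each teammate builds their own map, and a naive merge collapses duplicate nodes by label while keeping every distinct link, including loops back.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Link:
    source: str    # node the customer moves from
    target: str    # node the customer moves to
    note: str = "" # e.g., "loops back when nothing looks good"

@dataclass
class ExperienceMap:
    nodes: List[str] = field(default_factory=list)  # moments in the customer's experience
    links: List[Link] = field(default_factory=list)

    def merge(self, other: "ExperienceMap") -> "ExperienceMap":
        """Naively merge two individual maps into a shared map:
        collapse duplicate nodes by label, keep every distinct link."""
        nodes = sorted(set(self.nodes) | set(other.nodes))
        links = sorted({(l.source, l.target, l.note) for l in self.links + other.links})
        return ExperienceMap(nodes, [Link(s, t, n) for s, t, n in links])

pm = ExperienceMap(
    nodes=["Decide to watch something", "Choose something to watch", "Watch"],
    links=[Link("Decide to watch something", "Choose something to watch"),
           Link("Choose something to watch", "Choose something to watch",
                "loops back when nothing looks good")],
)
designer = ExperienceMap(
    nodes=["Choose something to watch", "Watch", "Finish watching"],
    links=[Link("Watch", "Finish watching")],
)
shared = pm.merge(designer)
print(shared.nodes)
```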
Some people say, “Give the customers what they want.” But that’s not my approach. Our job is to figure out what they’re going to want before they do. I think Henry Ford once said, “If I’d asked customers what they wanted, they would have told me, ‘A faster horse!'” People don’t know what they want until you show it to them. That’s why I never rely on market research. Our task is to read things that are not yet on the page.
Steve Jobs
Confidence is a feeling, which reflects the coherence of the information and the cognitive ease of processing it. It is wise to take admissions of uncertainty seriously, but declarations of high confidence mainly tell you that an individual has constructed a coherent story in his mind, not necessarily that the story is true.
Daniel Kahneman, Thinking, Fast and Slow
Interviewing customers on a regular cadence is critical to the success of any product trio; the goal is to build a habit of interviewing weekly. The purpose of an interview is to discover and explore opportunities (needs, pain points, and desires) so you can intervene in your customers’ lives in a positive way. Use your customers’ own stories to discover their unmet needs. Participants are not lying; we just are not very good at understanding our own behavior. Many of these biases come into play because our interview subjects are trying to be helpful. Michael Gazzaniga named this tendency to rationalize our behavior, even when we can’t possibly know the cause, the “left-brain interpreter.” When information is missing, our brains simply fill in details that make the story coherent. Confidence is not a good indicator of truth or reality. Every customer I talked to told me they wanted to source passive candidates. So, we built a passive-candidate-recruiting solution. It flopped. Recruiters are often measured by how fast they can fill an open role, and active candidates are the fastest way to do that. It’s the equivalent of opening a salad bar across from a McDonald’s because customers said they wanted to eat healthier; we can’t be too surprised when our salads lose out to the Big Mac. We asked the wrong questions. We built a product based on a coherent story, but it wasn’t a story that was based in reality. If you want to build a successful product, you need to understand your customers’ actual behavior — their reality — not the story they tell themselves.
You’ll need to translate your research questions into interview questions that elicit these stories. Memories about recent instances are more reliable: “Tell me about the last time you purchased a pair of jeans.” It will reflect their actual behavior, not their perceived behavior. Compare “Tell me about the last time you watched our streaming entertainment service” with “Tell me about the last time you watched any streaming entertainment” or “Tell me about the last time you were entertained.” This type of question is a great way to uncover what your product category competes with. A narrow scope will help you optimize your existing product. Broader questions will help you uncover new opportunities. The broadest questions might help you uncover new markets. The appropriate scope will depend on the scope you set when creating your experience map. Excavate the story. Set the expectation for a 50/50 back-and-forth pattern. Inform your participant that you would like them to share their full story with you, to share as many details as possible, to leave nothing out, and that, when they are done with their story, you’ll ask for missing details. A good story has a protagonist who encounters experiences on a timeline. Use temporal prompts: “Start at the beginning. What happened first?” You can use the experience map you created to help guide your participant. Prompt for the beginning of the story: “Where were you? Set the scene for me.” “What happened next?” “What happened before that?” Thinking of a story as having a beginning, a middle, and an end can help you guide the participant. Listen for specific nodes (from your experience map). Ask about nodes that were left out of the story. Stories also take place in specific locations; protagonists encounter challenges, and they receive help from supporting characters. Other characters might present obstacles or interfere with the protagonist’s progress. “Who was with you?” “What challenges did you encounter?” “How did you overcome that challenge?” “Did anyone help you?” Your participant will bounce back and forth between the story and generalizing about their behavior (“I usually …”). You’ll want to gently guide them back to telling you about this specific instance: “In this specific example, did you face that challenge?” Excavating the story takes practice. When your participant jumps to a generalization, it’s going to tempt you to conclude that it is the real need, pain point, or desire. Keep the interview grounded in specific stories to ensure that you collect data about your participants’ actual behavior, not their perceived behavior. Don’t get discouraged. Keep at it. You will get better with time. The golden rule of interviewing is to let the participant talk about what they care about most. You can steer the conversation in two ways. First, you decide which type of story to collect: “Tell me about the last time you watched streaming entertainment” vs. “Tell me about the last time you watched streaming entertainment on a mobile device.” Second, you decide which details to dig into: if you are concerned with how they chose what to watch, dig into that part of the story; if you aren’t particularly interested in what device they watch on, don’t ask for that detail if they leave it out of their story. Let your research questions guide your story prompts. Some participants might want to share their feature ideas or gripe about how your product works. Capture the value the participant is willing to share, but don’t force it. With continuous interviewing, you’ll be interviewing another customer soon.
A disappointing interview is easily forgotten. Synthesize as you go. Use the interview snapshot.
An interview snapshot is a one-pager designed to help you synthesize what you learned in a single interview. Your collection of snapshots will act as an index to the customer knowledge bank you are building through continuous interviewing. Interview snapshots help you identify opportunities and insights from each and every interview. Include a quote or a distinct behavior that stood out (“I’m old school. Agile doesn’t work for me.”). Quick facts should help you identify what type of customer you were talking to. Be sure to represent opportunities as needs and not solutions. If the participant requests a specific feature or solution, ask why they need it, and capture the opportunity: “If you had that feature, what would that do for you?” “I wish I could just say the name of the movie I’m searching for.” “What would that do for you?” “I don’t want to have to type out a long movie title.” That’s the underlying need, and the need opens up more of the solution space: we could add voice search to address it, but we could also auto-complete movie titles as they type. Frame opportunities using your customer’s words. Throughout the interview, you might hear interesting insights that don’t represent needs, pain points, or desires. Capture these insights on your interview snapshot. Over time, insights often turn into opportunities. You’ll be surprised how often an opportunity that seems unique to one customer becomes a common pattern heard in several interviews.
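For teams that keep their snapshots in a shared tool or repo, here is one hypothetical way to structure them. The fields mirror the elements described above, but the shape and names are my own, not the book’s:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InterviewSnapshot:
    participant: str                                        # who you talked to
    memorable_quote: str                                    # a quote or distinct behavior that stood out
    quick_facts: List[str] = field(default_factory=list)    # what type of customer this is
    opportunities: List[str] = field(default_factory=list)  # needs, pain points, desires, in the customer's words
    insights: List[str] = field(default_factory=list)       # interesting notes that aren't (yet) opportunities
    story_drawing: str = ""                                  # link or path to the drawn map of this specific story

snapshot = InterviewSnapshot(
    participant="Streaming subscriber, parent of two",
    memorable_quote="I wish I could just say the name of the movie I'm searching for.",
    quick_facts=["watches mostly on a TV", "shares the account with family"],
    opportunities=["I don't want to have to type out a long movie title"],
    insights=["Browses recommendations only after search fails"],
)
```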
As you collect each customer’s unique story, you’ll want to actively listen for how their story is similar to or different from your generalized experience map. One of the most important elements to capture on the interview snapshot is an experience map that captures each participant’s unique story. These stories give us the knowledge we need to design for the right person, in the right context, at the right time. Drawing the nodes and the links that make up the story will help you remember the story and understand it better. Drawing stories will help you find patterns across seemingly unique stories, which will be critical for making your body of research actionable. Drawing is a superpower that will help you unlock valuable insights from each interview. Taking the time to capture visually what you learned from each interview will help you stay aligned as you learn more about your customers. Weekly interviewing is foundational to a strong discovery practice. Interviewing helps us explore an ever-evolving opportunity space. A digital product is never done, and the opportunity space is never finite or complete.
We recently had to pivot from one opportunity to another when we learned that the need we were exploring wasn’t that important to our customers. Because we were continuously interviewing, we killed an opportunity on Tuesday, chose a new one on Wednesday, and used our already scheduled interview on Thursday to learn about the new opportunity.
Raya Raycheva, senior user researcher, Simply Business
If you schedule an interview every week, you’ll likely be interviewing every week. Interview at least one customer every week. When a customer interview is automatically added to your calendar every week, it becomes easier to interview than not to interview. This is your goal. The most common and easiest way to find interview participants is to recruit them while they are using your product or service: “Do you have 20 minutes to talk with us about your experience in exchange for $20?” If the visitor answers “Yes,” take them to Calendly to pick a date and time from slots you’ve set aside for customer interviews, and ask for their phone number too. Ask your customer-facing colleagues (sales teams, account managers, customer success teams, and customer-support teams). The easiest place to start is to ask if you can join their existing meetings; start by asking for five minutes at the end of a call and collect a specific story from the customer. You might want to define triggers to help your customer-facing colleagues identify whom to reach out to, e.g., if a customer calls to cancel their subscription, schedule an interview. Give them a script to follow: “I’d love for you to share your feedback with our product team. Can we schedule 20 minutes for you to talk with them?” If the customer says “Yes,” have your colleague schedule the interview. We dramatically underestimate how much our customers want to help. Setting up a customer advisory board will help if customers are extremely hard to reach. You can scale the size of your customer advisory board to reflect the number of interviews that your product teams need each month. If you have three product teams that each want to do one interview per week, you would invite 12 customers to participate on your advisory board (roughly 12 interviews a month, so each board member is interviewed about once a month). Product trios interview together to collaborate in a way that leverages everyone’s expertise. The goal is for all team members to be the voice of the customer. A product manager will hear things that an engineer might not pick up on, and vice versa. Each perspective is valid and can lead to an important product improvement. The more diversity in the room, the more value you’ll get from each interview. Everyone has to spend at least a few minutes (as little as five minutes) each week with a customer.
Make sure everyone on your team is well versed in recruiting and interviewing. Generate a list of questions (what you need to learn), and identify one or two story-based interview questions (what you’ll ask): “Tell me about a specific time when …” Continue a weekly habit of customer interviewing. The product trio should share what they are learning with their product peers and with key stakeholders. Synthesize your customer interviews as you go, using interview snapshots.
During customer interviews, they had uncovered that Istrahas (resting places, in Arabic) were used as day rentals to host social gatherings near home. During COVID, customers were no longer booking hotel accommodations, and demand for Istrahas grew. Their marketplace had little Istraha inventory, and the team needed to figure out how to entice hosts to share their properties and how to get guests to consider their platform as the place to book. They started interviewing Istraha hosts and guests and mapping out the needs of each group in their respective opportunity solution trees. They looked for overlap between what hosts and guests each needed. They considered what their competition did well and where they saw gaps where they could compete. As you collect customers’ stories, you are going to hear about countless needs, pain points, and desires. Digital products are never complete. First, take an inventory of the opportunity space. If you interview continuously, your opportunity space will always be evolving — expanding as you learn about new needs, contracting as you address known problems, and gaining clarity as you learn more about specific pain points. Mapping the opportunity space is a critical activity: we must frame the problem space before we can dive into solving it. Mapping the opportunity space is how we give structure to the ill-structured problem of reaching our desired outcome. Our job is to address customer opportunities that drive our desired outcome. This is how we create value for our business while creating value for our customers. Our goal is to address the customer opportunity that will have the biggest impact on our outcome first. Compare and contrast the impact of addressing one opportunity against the impact of addressing another opportunity. Be deliberate and systematic in your search for the highest-impact opportunity. As the opportunity space grows and evolves, we’ll have to give structure to it again and again. An opportunity backlog is a prioritized list of customer needs, pain points, and desires, prioritized the same way teams prioritize user stories in a development backlog.
Sibling opportunities should be similar to each other but distinct, in that you can address one without addressing another. The benefit of breaking up big opportunities into a series of smaller opportunities is twofold. First, it allows us to solve problems that otherwise might seem unsolvable. Second, it allows us to deliver value over time: as we continuously ship value, we’ll chip away at the larger opportunity. We need each opportunity to be distinct from every other opportunity. There are two ways to uncover the underlying structure of your opportunity space. First, use the steps of the experience map that you created; oftentimes this is as simple as mapping each node in your experience map to the top level of opportunities on your opportunity solution tree. Second, use your interview drawings to identify key moments in time. Draw the stories that you heard during customer interviews, identifying the key moments in time that occurred during each story. If you take all these drawings and start to label each key moment (node), you’ll notice patterns across your unique stories. You can then map these nodes to your top-level opportunities, e.g., deciding to watch something, choosing something to watch, watching something, and ending the watching experience.
If you’ve been creating interview snapshots for each interview, you can simply review each interview snapshot. For each opportunity, ask the following questions:
- Is this opportunity framed as a customer need, pain point, or desire, and not a solution?
- Is this opportunity unique to this customer, or have we seen it in more than one interview?
- If we address this opportunity, will it drive our desired outcome?
Don’t fret if you need to add parent opportunities. Keep iterating through these steps until you’ve identified a set of siblings that ladder up to the top opportunity (one that reflects a key moment) from which all other opportunities descend. “Structure is done, undone, and redone.” Do just enough to capture what you currently know, and trust that it will continue to grow and evolve over time. Frame each opportunity from the customer’s perspective, not from the company’s perspective: “Can I imagine a customer saying this?” rather than “Are we just wishing a customer would say this?” Avoid vertical opportunities; simply reframe one opportunity to encompass the broader need, and remove the rest. If your top-level opportunities represent distinct moments in time, then no opportunity should have two parents. Sometimes during an interview, your customer will ask for solutions, and sometimes those requests will even sound like opportunities. Ask yourself: is there more than one way to address this opportunity? If not, it is a solution in disguise. Don’t capture a feeling itself as an opportunity; look for the cause of the feeling. Do note when a customer expresses a feeling, but consider it a signpost, and let it direct you to the underlying opportunity.
“You are never one feature away from success … and you never will be.”
Teresa Torres, Continuous Discovery Habits
The build trap is when organizations become stuck measuring their success by outputs rather than outcomes. It’s when they focus more on shipping and developing features rather than on the actual value those things produce.
Melissa Perri
Our customers care about solving their needs, pain points, and desires. Product strategy happens in the opportunity space. Strategy emerges from the decisions we make about which outcomes to pursue, which customers to serve, and which opportunities to address. With a well-structured opportunity space, a product trio is well positioned to make strategic decisions about which opportunities to address, which customers to serve, and which path to take toward their desired outcome. Focus on one target opportunity at a time. Addressing only one opportunity at a time unlocks the ability to deliver value iteratively over time, and it also allows the trio to explore multiple solutions, setting up good compare-and-contrast decisions. It’s also consistent with the kanban concept of limiting work in progress. You’ll compare and contrast the set of parent opportunities against each other. If your chosen parent is the highest priority, the highest-impact opportunity to address next will live under that branch. Keep iterating until you identify a target opportunity that has no children.
Assess opportunities using the following criteria: opportunity sizing, market factors, company factors, and customer factors. Ask “Which of these opportunities affects the most customers?” and “Which affects customers the most often?” You can use behavioral data (e.g., site analytics, sales-funnel analytics), support tickets, sample surveys, or even your interview snapshots to quickly evaluate which opportunities are impacting the most customers. A missing table stake in the market could torpedo sales, while a strategic differentiator could open up new customer segments. Company factors help us evaluate the strategic impact of each opportunity for our company: we want to prioritize opportunities that support our company vision, mission, and strategic objectives over opportunities that don’t. For customer factors, we want to prioritize important opportunities where satisfaction with the current solution is low over opportunities that are less important or where satisfaction with current solutions is high. Consider the different dimensions (opportunity sizing, market factors, company factors, and customer factors), and make the best decision that you can for this moment in time. Wisdom is finding the right balance between having confidence in what you know and leaving enough room for doubt in case you are wrong. Jeff Bezos describes a Type 1 decision as one that is hard to reverse, whereas a Type 2 decision is easy to reverse. We should be slow and cautious when making Type 1 decisions, but we should move fast when making Type 2 decisions. With a two-way-door decision, we’ll learn more by acting — walking through the door and seeing what’s on the other side. Frame discovery decisions as two-way, reversible decisions. If we want to stay open to being wrong and avoid confirmation bias, it’s critical that we think of our prioritization decisions as reversible decisions. Do not delay the decision until there is more data; we’ll learn more from testing our decisions than we will from trying to make perfect decisions. Do not over-rely on one set of factors at the cost of the others. The four sets of factors (opportunity sizing, market factors, company factors, and customer factors) are designed to be lenses that give you different perspectives on the decision. Use them all. Avoid working backwards from your expected conclusion. Go into this exercise with an open mind. You’ll be surprised by how often you come away from it with a new perspective.
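One lightweight way to make the compare-and-contrast explicit is to score each candidate opportunity against all four lenses and look at the spread. The sketch below is purely illustrative (the 1-5 scale, the opportunity names, and the scores are invented, and a real decision should remain a team conversation rather than a formula):

```python
from typing import Dict, List, Tuple

FACTORS = ("opportunity_sizing", "market_factors", "company_factors", "customer_factors")

def compare_opportunities(scores: Dict[str, Dict[str, int]]) -> List[Tuple[str, int]]:
    """Rank opportunities by their total score across all four factor sets (1-5 each).
    Summing every lens keeps any single factor from dominating the decision."""
    totals = {name: sum(factor_scores.get(f, 0) for f in FACTORS)
              for name, factor_scores in scores.items()}
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

# Example scores are made up for illustration only.
print(compare_opportunities({
    "I can't find something to watch":  {"opportunity_sizing": 5, "market_factors": 3,
                                         "company_factors": 4, "customer_factors": 4},
    "I forget to cancel my free trial": {"opportunity_sizing": 2, "market_factors": 2,
                                         "company_factors": 3, "customer_factors": 5},
}))
```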
Embrace a “compare and contrast” mindset and work with sets of solutions rather than fixating on your favorite solution. Identify the hidden assumptions that are lurking behind each of your ideas, helping you catch blind spots before they can negatively impact your solutions. Test assumptions in a way that helps you quickly throw out what’s not working and iterate on what is.
Creative teams know that quantity is the best predictor of quality.
Leigh Thompson, Making the Team
You’ll never stumble upon the unexpected if you stick only to the familiar.
Ed Catmull, Creativity Inc.
Our first idea is rarely our best idea. Researchers measure creativity using three primary criteria: fluency (the number of ideas we generate), flexibility (how diverse the ideas are), and originality (how novel an idea is). Fluency is correlated with both flexibility and originality: as we generate more ideas, the diversity and novelty of those ideas increase. The most original ideas tend to be generated toward the end of an ideation session. Push beyond your first mediocre and obvious ideas, and delve into the realm of more diverse, original ideas. Not all opportunities need an innovative solution; you don’t need to reinvent the “forgot password” workflow (though you should still test it). For the strategic opportunities where you want to differentiate from your competitors, you’ll want to take the time to generate several ideas to ensure that you uncover the best ones. The problems with brainstorming are many. Alex Osborn, who popularized brainstorming, outlined four rules for it. One, focus on quantity: generate as many ideas as you can. Two, defer judgment, and separate idea generation from idea evaluation. Three, welcome unusual ideas. And four, combine and improve ideas. Yet study after study found that individuals generating ideas alone outperformed brainstorming groups: individuals generated more ideas, more diverse ideas, and more original ideas. Why? First, when we are in a group, we can rely on the efforts of others (social loafing). Second, evaluation apprehension: people hold back ideas for fear of how they will be judged. Third, brainstorming groups run into production blocking: people lose ideas amid the chaos of everyone sharing ideas in rapid succession. Fourth, downward norm setting — rather than the strongest member raising everyone else up, the opposite happens. These factors inhibit the performance of brainstorming groups compared to individuals who generate ideas alone. There is also “the illusion of group productivity,” a phenomenon in which groups overestimate their performance; they report high levels of satisfaction with their work despite their lesser performance. When you are brainstorming alone, you run into more cognitive failures (getting stuck) than when you are brainstorming in a group; ideas from the group help other group members get unstuck. Even with this group advantage, individuals generating ideas alone still generated more ideas, more diverse ideas, and more original ideas than brainstorming groups. Alternating between individual ideation and group sharing of ideas can improve the quality of ideas generated in subsequent individual ideation sessions. Exposure to other people’s ideas did inspire new ideas. Participants started by ideating on their own. Then they shared their ideas with the group. They then went back to ideating on their own. They never ideated as a group, but they received the benefit of hearing each other’s ideas.
- Review your target opportunity.
- Generate ideas alone. Jot down as many ideas as you can. When you get stuck, take a break, and come back to it. If you are still stuck, try to find inspiration from your competitors and analogous products. For analogous products, think broadly.
- Share ideas across your team, face to face or through a Slack channel. The key is to describe each of your ideas, allow people to ask questions, and riff on the ideas.
- Repeat steps 2 and 3. Hearing other people’s ideas will inspire even more ideas. Repeat until you’ve generated between 15 and 20 ideas for your target opportunity.
Once you’ve generated 15 to 20 ideas for the same target opportunity, it’s time to start evaluating them. Ask of each idea, “Does this idea solve the target opportunity?” Now is the time to weed out the ones that don’t. You’ll dot-vote as a team to whittle your set down from lots of ideas to three. We are better at evaluating ideas as a group. To dot-vote, allot three votes per member. As you vote, the only criterion should be how well the idea addresses the target opportunity. Let each person pitch the ideas they voted for. During the pitch, be sure to highlight why each idea best addresses the target opportunity. Continue dot-voting until you have set aside three ideas. Take a quick poll and make sure everyone on the team is excited about the set you are moving forward with. Each idea should have a strong advocate in the group, and everyone should be excited about at least one idea. We will use prototyping and assumption testing to whittle our set down from three to one: which of these three ideas best delivers on our target opportunity?
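If you track ideas in a shared doc or tool, tallying the dots is trivial. Here is a tiny hypothetical sketch (the team members, ideas, and votes are made up) of counting three dots per person and keeping the top three ideas:

```python
from collections import Counter
from typing import Dict, List

def dot_vote(votes: Dict[str, List[str]], keep: int = 3) -> List[str]:
    """Tally dot votes (each member gets three) and return the top ideas.

    `votes` maps a team member to the ideas they spent their dots on.
    Ties are not resolved here; in practice the team pitches and re-votes.
    """
    tally = Counter(idea for member_votes in votes.values() for idea in member_votes)
    return [idea for idea, _ in tally.most_common(keep)]

# Example: a trio with three dots each, whittling many ideas down to three.
print(dot_vote({
    "pm":       ["voice search", "auto-complete titles", "watch party"],
    "designer": ["auto-complete titles", "curated sports row", "voice search"],
    "engineer": ["auto-complete titles", "voice search", "curated sports row"],
}))
```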
You want to make sure everyone has a chance to contribute their ideas. Invite key stakeholders who bring a different perspective. Share your target opportunity and the customer context in which the opportunity occurs. When selecting the three ideas for your consideration set, you want as much diversity as possible. If you are stuck, ask, “Who else has to solve a problem like this?” and then investigate how they solve it. Let your ideators consider ideas over time; take advantage of the brain’s ability to incubate a problem. Before dot-voting, remove any ideas that don’t address your target opportunity.
We loosely define an iteration in discovery as trying out at least one new idea or approach. To set your expectations, teams competent in modern discovery techniques can generally test on the order of 10-20 iterations per week.
Marty Cagan, INSPIRED
Assume that you are being overconfident, and give yourself a healthy margin of error.
Chip and Dan Heath
Every product team has found themselves facing the hard reality that they spent time, energy, and money building the wrong product. Why? And how do you reduce the chance it will happen to you? We are seeing an interplay of two cognitive biases: confirmation bias and the escalation of commitment. We are more likely to seek out confirming evidence, and we forget the data that undermines our perspective. The escalation of commitment is a bias in which the more we invest in an idea, the more committed we become to that idea. We often have to defend our ideas to stakeholders, further entrenching our commitment to them. We tend to seek out why our ideas will work and forget to explore why they might not work. We are often overconfident about the success of our ideas. You learned to compare and contrast opportunities so that you aren’t overcommitting to one. You whittled your ideas down to three so that you don’t overcommit to one. Working with a set of ideas helps us compare and contrast the ideas against each other, helping us to avoid confirmation bias and the escalation of commitment. A pace of 10-20 iterations every week is possible when we step away from the concept of testing ideas and instead focus on testing the assumptions that need to be true in order for our ideas to succeed. By explicitly enumerating our assumptions, we can start to look for both confirming and disconfirming evidence to either support or refute each assumption. Assumption testing is quicker than idea testing, and the faster pace helps us to guard against the escalation of commitment. The less time we invest in an idea, the less likely we are to fall in love with it.
Desirability assumptions: Does anyone want it?
Viability assumptions: Should we build it? The idea must create enough value for the business to be worth the effort to create and maintain.
Feasibility assumptions: Can we build it with the time, skills, and technology we have?
Usability assumptions: Is it usable?
Ethical assumptions: If our customers had full transparency, would they be ok with it?
One of the best ways to align as a team around what our ideas mean is to story map them. Map out each step end users have to take to get value from a product. Story mapping forces you to get specific about how any idea will work and what you expect your end users will do. Story mapping will align the team around product requirements. It’s also a great technique for surfacing underlying assumptions.
Start by assuming the solution already exists. You are story mapping what end users will do to get value from the solution once it exists. Identify the key actors. In a two-sided marketplace, you might have different types of end users (buyers and sellers). If your solution includes a chatbot, the chatbot should be listed as an actor in the story map. Map out the steps each actor has to take for anyone to get value from the solution. Be specific: what does each actor need to do in order for someone to get value from the solution? Sequence the steps horizontally over time. You may need to jump back and forth between actors if they take turns acting. Map out the successful path; if there are multiple successful paths, map them out sequentially.
Can we offer the ability to search for a specific sport or a specific sporting event on our platform? Can the local provider share their listings with us in a format that we can integrate into our own search results? Does the subscriber need to know the game they want to watch is on NBC, and that they need to search for NBC? We want to story map what we think would be the best solution based on what we know today. We have to make assumptions before we can test assumptions, and we’ll have plenty of time to iterate and refine our ideas as we test them. Use your story map to uncover hidden assumptions. Every time you assume that an end-user will do something, you are making desirability, usability, and feasibility assumptions. You can literally go step by step through your story map and generate dozens of assumptions. From the step “Our subscriber comes to our platform to watch sports,” we generate the following assumptions:
- Desirability: Our subscriber wants to watch sports.
- Desirability: Our subscriber wants to watch sports on our platform.
- Usability: Our subscriber knows they can watch sports on our platform.
- Our subscriber thinks of our platform when it’s time to watch sports.
- Feasibility: Our platform is available when our subscriber wants to watch sports.
Story maps can help us uncover viability and ethical assumptions too. If we’ve done our discovery homework (e.g. continuous interviewing, opportunity mapping, etc.), we’ll understand our customers’ context well, and most of our assumptions will be true enough. We won’t bother testing most of them. By taking the time to generate many assumptions, we’ll increase the likelihood that we’ll uncover the risky ones. Prioritize and identify the riskiest assumptions.
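As a rough illustration of this step-by-step walk (the class and field names below are hypothetical, not from the book), a story-map step and the assumptions it surfaces could be captured like this:

```python
# A minimal sketch (hypothetical names) of capturing story-map steps and the
# assumptions each step surfaces, using the streaming example above.
from dataclasses import dataclass, field

@dataclass
class Assumption:
    category: str   # "desirability", "usability", "feasibility", "viability", "ethical"
    statement: str

@dataclass
class StoryStep:
    actor: str
    action: str
    assumptions: list[Assumption] = field(default_factory=list)

step = StoryStep(
    actor="subscriber",
    action="comes to our platform to watch sports",
    assumptions=[
        Assumption("desirability", "Our subscriber wants to watch sports."),
        Assumption("desirability", "Our subscriber wants to watch sports on our platform."),
        Assumption("usability", "Our subscriber knows they can watch sports on our platform."),
        Assumption("feasibility", "Our platform is available when our subscriber wants to watch sports."),
    ],
)

# Walking step by step through the story map quickly yields dozens of assumptions.
story_map = [step]  # ...add the remaining steps for each actor here
all_assumptions = [a for s in story_map for a in s.assumptions]
print(f"{len(all_assumptions)} assumptions generated from {len(story_map)} step(s)")
```

The value is not the data structure itself but the discipline of walking every step of the map and writing each assumption down.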
Conduct a pre-mortem. Pre-mortems happen at the start of a project and are designed to suss out what could go wrong in the future, which makes them a great way to generate assumptions. “Imagine it’s six months in the future; your product launched, and it was a complete failure. What went wrong?” You are exposing assumptions your idea depends upon that may not be true. It is critical to phrase the question as if the outcome is certain: we have to consider that the product did fail, not that it might fail.
Use the opportunity solution tree to work backwards from your solution to your outcome to generate viability assumptions: this solution will address the target opportunity because…; addressing the target opportunity will drive the desired outcome because… Be specific. Why will your solution address the target opportunity? Why does addressing the opportunity “I want to watch live sports” drive the outcome “Increase weekly viewer minutes”? “People will watch sports in addition to what they already watch.” “Even if people cut out other shows, sporting events are long, and their individual viewing sessions will be longer.” “If individual viewing sessions are longer, weekly viewing minutes will go up.” “People who watch more minutes are more likely to renew.” “The cost of adding local channels will be offset by the gain from more renewals.” The goal is to capture the logical inferences behind why you think this solution will address your target opportunity in a way that drives your product outcome and, ultimately, your business outcome. Each inference is an assumption you can test. To surface ethical assumptions, question potential harm: “What’s the potential harm in offering this solution?” “What data do we plan to collect?” “Does our product have the potential to become addictive?” “Are there people who are being left out?” “Are we exposing someone’s identity who might need anonymity for their own safety?” “Are we spending time building the wrong stuff, therefore losing out on more compelling opportunities?” “If the Wall Street Journal ran a front-page story about this solution that included your internal conversations about how the solution would work, what data you collected, how you used it, and how different players in the ecosystem benefited or didn’t, would that be a good thing? If not, why not?”
Story mapping, pre-mortems, walking the lines of your opportunity solution tree, and questioning potential harm will help you start to see your own assumptions.
Leap-of-faith assumptions are the assumptions that carry the most risk and thus need to be tested. You are mapping assumptions relative to each other, so go fast. Start with the 2 or 3 assumptions that fall in the upper-right corner. Story map, generate assumptions, and assumption map for each of your three ideas, then learn how to quickly test the leap-of-faith assumptions for each idea so that you can compare and contrast the ideas against each other. A common mistake is not generating enough assumptions. You won’t need to test all of them; you’ll use the assumption-mapping exercise to quickly find the riskiest ones. Framing your assumptions positively makes them easier to test: “Customers will remember their passwords” rather than “Customers won’t forget their passwords.” Be specific: “Customers will take the time to browse all the options on our getting-started page,” “Customers will know how to select the right option based on their situation,” and “Our engineers can identify the right subset of options to show the customer based on the customer’s profile data.” Some teams conflate desirability and usability and forget that just because a product is usable doesn’t mean it’s desirable. It can be hard to remember to first test whether customers even want the solution. Use the categories (desirable, usable, feasible, viable, and ethical) to catch your blind spots.
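As a loose sketch of assumption mapping, assuming the two axes are importance and available evidence (the scores below are made up for illustration), the upper-right “leap of faith” assumptions are the important ones we know the least about:

```python
# A rough sketch (hypothetical scoring) of an assumption map: rank each
# assumption by importance and by how little evidence we have, then pick the
# 2-3 that land in the "upper right" -- important and unknown -- to test first.
assumptions = [
    # (statement, importance 1-5, evidence 1-5 where 5 = strong evidence)
    ("Our subscriber wants to watch sports.",                          5, 2),
    ("Our subscriber wants to watch sports on our platform.",          5, 1),
    ("Our subscriber knows they can watch sports on our platform.",    3, 2),
    ("Our platform is available when our subscriber wants to watch.",  2, 4),
]

# Riskiest first: high importance, low evidence.
ranked = sorted(assumptions, key=lambda a: (-a[1], a[2]))
leap_of_faith = ranked[:3]

for statement, importance, evidence in leap_of_faith:
    print(f"[importance={importance}, evidence={evidence}] {statement}")
```

In practice the map is a whiteboard exercise done quickly as a trio; the point is relative placement, not precise scores.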
Each answer a team collects — positive or negative — is a unit of progress.
Jeff Gothelf and Josh Seiden, Sense & Respond
Teams learn to identify assumptions one week and then learn to test assumptions the following week. Test assumptions, not ideas. It’s easy to rush into experimenting before we are ready; learn how to slow down to make sure you get more value from each and every assumption test. We want to systematically collect evidence about the assumptions underlying all three ideas. The more we learn about each idea, the better we can compare and contrast the ideas against each other. We are looking for a clear front-runner. We are NOT testing one idea at a time; we are testing assumptions from a set of ideas. Our goal is to collect data that will help us move an assumption from right to left on our assumption map. Our goal is to collect more evidence.

Focus on collecting data about what people actually do in a particular context. Suppose our target opportunity is “I want to watch sports” and we’ve brainstormed a set of solutions: adding local channels, licensing events directly from sports leagues, and bundling our service with a sports provider. To test “Our subscribers want to watch sports,” we might simulate the moment when someone is browsing their viewing options, trying to decide what to watch; we can mock the home screen users see when they turn on the streaming-entertainment service. To test “Our subscriber wants to watch sports on our platform,” we might simulate the moment when the big game is about to start: present participants with three subscription services including ours, tell them the game is available on all three, and ask them to choose a service to stream the game. “Our subscriber wants to watch sports” is an assumption that is core to the target opportunity; if it is false, we can abandon our whole set of ideas.

Define what success looks like. If our assumption is true, what would we expect the participant to do? We would expect at least some participants to choose a sporting event in our simulation, and some to choose our service over the competitors: for example, “At least 3 out of 10 people choose sports.” By defining these criteria upfront (before we see the results), you are, first, aligning as a team around what success looks like so that you all know how to interpret the results, and, second, helping to guard against confirmation bias. Test your assumption with as few people as possible while still giving your team the information they need to act on the data. You are NOT trying to prove that the assumption is true; you are simply trying to reduce risk. Your goal is to move the assumption from right to left. We don’t want to invest time and energy in an experiment if we don’t even have an early signal that we are on the right track. Start small: you can learn a lot from getting feedback from a handful of customers. If another assumption on our assumption map is now riskier, we switch gears and test that assumption. But if this assumption continues to be our riskiest, and it carries more risk than our organization can stomach, we need to continue testing it and define the next-level experiment that will allow us to collect more data, for example a smoke-screen test: our first test was designed to be completed in a day or two, while this one might take up to two weeks. Most of our learning comes from failed assumption tests. Small tests give us a chance to fail sooner.
Failing faster is what allows us to quickly move to the next assumption. Good tests kill flawed theories, giving us another chance to get it right. We want to recruit for variation in geographic location, demographics, TV-watching behavior, etc., as best we can. More likely, we’ll get conflicting results: we’ll see one assumption fail and another one pass, and we’ll run additional experiments to evaluate our assumption. This isn’t a very costly false negative, as long as we keep our iterations and our future tests small. Ideas and opportunities are abundant. A false positive is when our test gives us data suggesting that our assumption is true when it isn’t. Suppose we run our small test, learn that everyone wants to watch sports, call the test a success, and move forward. We aren’t making a go/no-go decision based on one assumption test; we are either moving on to test another assumption related to the same idea, or we are running a bigger, more reliable test on the same assumption. False positives get surfaced in successive rounds of testing. There is a cost to false negatives and false positives, but the cost is not so great that we should start with large-scale quantitative experiments every time. Product teams are NOT scientists and are NOT creating new knowledge; we are trying to create products that improve our customers’ lives. When we launch, we get to see how our customers interact with the product, which is a fantastic feedback loop, and we work on much faster cycles than science does. Our goal as a product team is not to seek truth but to mitigate risk.
The best teams conduct 15-20 discovery iterations a week. With the right mindset, tools, and methods, this can quickly become a reality. Two workhorses are unmoderated user testing and one-question surveys. Unmoderated user-testing services allow you to post a stimulus (e.g., a prototype) and define tasks to complete and questions to answer. Participants then complete the tasks and answer the questions on their own time, and you get a video of their work. These types of tools are game changers: you can post your task, go home for the night, and come back the next day to a set of videos ready for you to watch. Once the results come in, we simply have to watch the videos and record how many chose sports in the first assumption test and how many chose our subscription service in the second. Some unmoderated testing tools also allow you to upload your own list of participants. When using one-question surveys, ask about specific instances; we are asking about the last week and the last month.

Unmoderated testing and one-question surveys are NOT the only ways to test assumptions. We might look at how many of our current subscribers have searched for sports on our platform and use this as an indicator of interest in sports. Be sure to define your evaluation criteria upfront. How many search queries will you sample? How many need to be related to sports? How will you determine “related to sports”? Remember, aligning around success criteria upfront guards against confirmation bias and ensures that your team agrees on what the results mean. Product teams can typically test most of their assumptions with a combination of prototype tests (either unmoderated or in person), one-question surveys, or data mining. If you keep the simple assumption-simulate-evaluate framework in mind, you’ll be well on your way to becoming a strong assumption tester.

Avoid overly complex simulations. You are looking to design fast tests that will help you gather quick signals; design your tests to be completed in a day or two, or a week at most. Use specific numbers instead of percentages when defining evaluation criteria: when testing with small numbers, we can’t conclude that 7 out of 10 will continue to mean 70% as our participant size grows, so be explicit from the get-go about how many people you will test when defining your success criteria. Define enough evaluation criteria; complex actions may require multiple measurements (e.g., opens the email, clicks on the link, takes an action). Avoid testing with the wrong audience. Make sure that your participants experience the need, pain point, or desire represented by your target opportunity, and recruit for variation: don’t just test with the easiest audience to reach or the most vocal one. Design your assumption tests such that they are likely to pass. If your assumption test passes with the most likely audience, then you can expand your reach to tougher audiences. You’ll be surprised how often your assumption tests still fail; if you fail in the best-case scenario, your results will be less ambiguous.
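For the data-mining approach described above, a hedged sketch might look like the following; the log format, sample size, threshold, and keyword list are all hypothetical and would need to be agreed on by the trio before looking at any results:

```python
# A sketch of mining existing search logs as an assumption test: sample recent
# queries, count how many look sports-related, and compare against evaluation
# criteria defined upfront (all names and numbers here are illustrative).
import random

SAMPLE_SIZE = 500          # decided upfront: how many queries we will sample
SUCCESS_THRESHOLD = 25     # decided upfront: how many must be sports-related
SPORTS_TERMS = {"nba", "nfl", "soccer", "football", "basketball", "world cup"}

def is_sports_related(query: str) -> bool:
    """Crude keyword check; 'related to sports' must be defined before seeing results."""
    return any(term in query.lower() for term in SPORTS_TERMS)

def evaluate(search_queries: list[str]) -> bool:
    sample = random.sample(search_queries, min(SAMPLE_SIZE, len(search_queries)))
    sports_count = sum(is_sports_related(q) for q in sample)
    print(f"{sports_count} of {len(sample)} sampled queries were sports-related")
    return sports_count >= SUCCESS_THRESHOLD
```

The keyword check is deliberately crude; the point is a fast signal against pre-registered criteria, not a rigorous classifier.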
Your delusions, no matter how convincing, will wither under the harsh light of data.
Alistair Croll and Benjamin Yoskovitz, Lean Analytics
Would college students trust our recommendations? Would they be confused by our unique interface? Could we collect enough feedback to continue to refine our algorithm? We wanted to start small, as this idea was full of risk. We started by delivering a small percentage of our traffic to a new search page. Students entered their area of study, and we ran the relevant “saved search” behind the scenes. We were able to get this working prototype live in just a few days, and then we watched what happened. In our traditional interface, only 36% of students started a search. With our new “Tell us what you studied” interface, 83% of visitors started their search. Our new questions were much easier to answer, so more students were able to start their search, and our overall performance was much better in the new interface. We knew right away that it was safe to keep investing in this idea.

But did our new idea drive our desired outcome? Our desired outcome was to increase the number of students getting jobs through our platform; we thought that if we could get more students starting their search, we would increase the number of students who found jobs on our platform. We decided we needed to continue to split our traffic until we could confirm that the new interface supported our desired outcome. First, it’s easy to get caught up in successful assumption tests: an outcome-focused product trio needs to stay focused on the end goal of driving the desired outcome. Second, remember to measure not just what you need to evaluate your assumption tests, but also what you need to measure the impact on your outcome. Our discovery required that we start delivery, and measuring the impact of that delivery resulted in us needing to do more discovery. Discovery feeds delivery, and delivery feeds discovery. Inevitably, as your experiments grow, you are going to need to test with a real audience, in a real context, with real data; testing in your production environment is a natural progression for your discovery work. If you instrument your delivery work, discovery will not only feed delivery, but delivery will feed discovery. Learn how to instrument your product so that you can evaluate assumption tests using live prototypes, how to measure the impact of your delivery work using your desired outcome as your North Star, and how to keep your discovery and delivery tightly coupled so that you never have to wonder if you are ready for delivery.
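One common way to deliver a small, stable percentage of traffic to a new page is deterministic bucketing on a visitor id, sketched below with hypothetical names; this is an illustration of the general pattern, not AfterCollege’s actual implementation:

```python
# A sketch of routing a small percentage of traffic to a new page by hashing a
# visitor id, so the same visitor always sees the same interface (hypothetical).
import hashlib

NEW_INTERFACE_PERCENT = 10  # start small; grow the split as confidence grows

def bucket(visitor_id: str) -> str:
    """Deterministic assignment: each visitor consistently lands in one bucket."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    if int(digest, 16) % 100 < NEW_INTERFACE_PERCENT:
        return "new_search_page"
    return "traditional_search"
```

Instrumenting search starts per bucket is what then allows the kind of 36% versus 83% comparison described above.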
Do NOT measure everything. Trust that you’ll learn as you go. Start small, and experiment your way to the best instrumentation. Instrument your evaluation criteria. Start by instrumenting what you need to collect to evaluate your assumption tests. As you build your live prototypes, consider what you need to measure to support your evaluation criteria. We had several assumptions we needed to test.
- Students will start more searches if we ask them easier questions.
- Students will view jobs that we recommend.
- Students will apply to jobs that we recommend.
We defined evaluation criteria for each assumption:
- 250 out of 500 visitors will start their search using our new interface.
- A minimum of 63 out of 500 students will view at least one job.
- A minimum of 7 out of 500 students will apply for a job.
With these evaluation criteria, here’s what we measured (a sketch of evaluating them follows this list):
- # of people who visited the search start page
- # of people who started a search
- # of people who viewed at least one job
- # of people who applied for at least one job
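Putting the criteria and measurements together, a minimal sketch of the evaluation might look like this; the event names are hypothetical, and the counts would come from your own instrumentation:

```python
# A minimal sketch (hypothetical event names) of evaluating each assumption's
# criteria from instrumented funnel counts, rather than tracking every click.
funnel_counts = {
    "visited_search_start": 500,
    "started_search": 0,        # filled in from instrumentation
    "viewed_job": 0,
    "applied_to_job": 0,
}

# Evaluation criteria, defined upfront, one per assumption.
criteria = [
    ("Students will start more searches if we ask easier questions", "started_search", 250),
    ("Students will view jobs that we recommend",                    "viewed_job",      63),
    ("Students will apply to jobs that we recommend",                "applied_to_job",   7),
]

for assumption, event, threshold in criteria:
    passed = funnel_counts[event] >= threshold
    print(f"{'PASS' if passed else 'FAIL'}: {assumption} ({funnel_counts[event]}/{threshold})")
```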
We didn’t track every click on every page. We started with our assumptions, and we measured exactly what we needed to test our assumptions. Measure the impact on your desired outcome; measure what you need to evaluate your progress toward it. We lost track of students after they applied for a job: the post-apply steps like interviewing, receiving an offer, and accepting an offer all happened off of our platform. We needed to measure when a student got a job, so we needed to find a way to incentivize students to tell us when they got a job, or employers to tell us when they made a hire. Twenty-one days after a student applied for a job, we sent the student an email and asked them what happened. At first, only 5% of job applications netted a reply to the email. We grew that to 14%, and by the time I left, we were at a 37% response rate. We knew that, if we were relentless, we would find a way to track our desired outcome; we weren’t afraid to measure hard things. Our product outcome was to improve search starts, and we succeeded in doing that. Our business outcome was to increase the number of students who found jobs on our platform. We had to continue to instrument our product to evaluate whether driving our product outcome had the intended impact on our business outcome. Our discovery continued through to delivery.

To test whether adding sports will drive our product outcome (increase average minutes watched) and our business outcome (increase subscriber retention), we’ll need to find small ways to experiment with real data, in our production environment. We could partner with a local channel to stream one sporting event on one day and evaluate the impact on viewing minutes for the subscribers who watched that event. Starting with one event might allow you to circumvent a good chunk of that work, allowing both parties to test their assumptions before they commit to a longer-term agreement. Remember to track the long-term connection between your product outcome and your business outcome. Our goal is to satisfy customer needs while creating value for our business; we are constrained by driving our desired outcome. Desirability is not enough; viability is the key to long-term success. The connection between our product outcome and our business outcome is a theory that needs to be tested.
Trusting the process can give you the confidence to take risks.
Chip and Dan Heath, Decisive
We started by defining a clear desired outcome, we interviewed to discover opportunities, we visually captured and synthesized what we were learning with interview snapshots, experience maps, and opportunity solution trees, we prioritized a target opportunity, we brainstormed solutions, we identified our hidden assumptions, we rapidly tested those assumptions, and we continued to measure impact all the way through delivery. Most of the work in discovery is not following the process; it’s managing the cycles. Throughout the course of their discovery work, each of these teams learned something surprising along the way. The surprise required that they stop charging forward and instead loop back to a previous step. Pay particular attention to how and when the teams had to loop back to an earlier habit to get unstuck or to work around a new constraint, and notice that it was the same discovery habits that helped them find their way.
Though they heard late payments come up as a pain point interview after interview and they had market research reinforcing what they had heard, the results of their assumption tests were clear. Their customers did not want Simply Business to help them with this problem. They invested only a week into an opportunity that wasn’t going to be fruitful. These course corrections should be celebrated. The fruit of discovery work is often the time we save when we decide not to build something.
Iteratively tackling small opportunities can add up to a large impact on an outcome. When first getting started, it can be hard to see how starting with a small problem will ever amount to anything. If you keep at it and work the cycles, small changes start to snowball, and you start to see the collective impact of working across your tree. Addressing sub-opportunities over time eventually addresses parent opportunities, and addressing parent opportunities is your path to consistently driving product outcomes. Avoid overcommitting to an opportunity. The hardest challenge with opportunity selection is identifying the right opportunity for right now. We want to run quick tests rather than overinvest in the best tests.
Stop avoiding hard opportunities. If we can deliver impact this week, we should, but many of the opportunities we uncover will take time to address adequately. Don’t confuse quick testing and iterative delivery with easy solutions. At AfterCollege, we were able to find a quick test of a hard solution: before we invested months into building a robust machine-learning solution, we started with a crude approximation that we could prototype in a few days.
Discovery requires strong critical-thinking skills. It’s easy to draw fast conclusions from shallow learnings. On hearing that customers valued picking up the phone to talk to their loan officer, Carl’s team could have abandoned their digital-engagement strategy. Instead, they asked the harder question: “How can we reconcile our business need with our customers’ needs?” As a result, they found an opportunity where their customers did want to engage digitally, and they used that opportunity to grow their digital relationship with their customers. They did the work to uncover the depth behind their shallow learning.
Do not give up before small changes have time to add up. While you do want to measure the impact of your product changes, don’t expect to see large step-function results from every change. It takes a series of changes to move the needle on our outcome.
The more leaders can understand where teams are, the more they will step back and let teams execute.
Melissa Perri, Escaping the Build Trap
What customers ask for isn’t always what they need. The product team didn’t want to spend time, money, and energy building the wrong feature, so instead, they turned to their discovery habits to help them out. Airship helps its customers send the right message to the right user at the right time on the right channel. When Lisa’s team was asked to build a customer-journey builder to help Airship win deals against competitors, they knew they had a lot to learn. A journey builder allows a marketer to sequence marketing messages over time and across channels. They started by interviewing their own customers who also used a competitor’s journey builder. It became clear that building what the competitors offered wasn’t going to be good enough; this was a huge opportunity to differentiate their offering from the competitors. Lisa’s team interviewed customers, mapped out the opportunity space, explored multiple solutions, prototyped to test their assumptions, and landed on a solution that they were excited about. The sales team pushed back. Lisa and her team were able to convince their leadership to let them run a one-month beta with a limited set of customers to test the new feature. After a successful beta release, the Airship Journeys product launched and has seen great success. It’s not enough to do good discovery if you aren’t bringing your stakeholders along with you. Use the same visual artifacts to help you manage and bring stakeholders along, so that, when you land on a better solution, the organization is ready and eager to adopt it.
It’s our job to do discovery, not our stakeholders’. Do not jump straight to conclusions; slow down and show your work. The opportunity solution tree can help you share your work with your stakeholders and set the context for how product decisions are made. Just as the tree helped you and your team build confidence in your decisions, it will do the same for your stakeholders. Start at the top of your tree: remind your stakeholders what your desired outcome is, and ask them if anything has changed since you last agreed on it. Share how you mapped out the opportunity space. Highlight the top-level opportunities, and drill into the detail only when and where they ask for it. Capture their suggestions; you can always vet them in your future customer interviews. Help them understand the customer need or pain point you intend to address, and use your interview snapshots to help them empathize with your customers. Answer their questions. Your stakeholders need to fully understand the opportunity before you share solutions with them. Ask them if they have any ideas of their own; capture and consider them. Share the set of three solutions you plan to move forward with, and ask them if they would have chosen a different set. Share your story maps and your assumption lists. Make sure your stakeholders fully understand how each solution might work, and remind them what your target opportunity is. Ask your stakeholders to add to your assumption lists. Share your assumption map, add any assumptions your stakeholders identified, and ask them if they would have prioritized the assumptions differently. Make adjustments as needed. Share your assumption tests and the data or execution plans, and consider and integrate their feedback. Repeat. Share your work along the way. When we take time to show experience maps, opportunity solution trees, and story maps, we are inviting our stakeholders along for the journey with us. We are presenting the potential paths we might take to get to the desired outcome. We are inviting them to co-create with us, which leads to buy-in and long-term success.
Avoid telling instead of showing. Show stakeholders your work so that they can draw their own conclusions. The key to avoiding the “curse of knowledge” is to slow down. Start at the beginning. Walk your stakeholders through what you learned and what decisions you made. Give them space to follow your logic, and give them time to reach the same conclusion. Don’t give stakeholders all the messy details; ask yourself what this stakeholder needs to know. Even with a busy CEO, you still want to start with the outcome you are driving, highlight the top 2 or 3 opportunities, give a quick explanation of why you chose the one you did, highlight your top solutions, and share the results of 1 or 2 assumption tests that support your final decision. “Our goal is to reduce the number of lost sales from not having a journey builder (outcome). We interviewed customers and learned that existing journey builders are too complex; marketers don’t know how to get started (opportunity #1), their journeys are hard to maintain (opportunity #2), and they often create redundant journeys (opportunity #3). We decided to focus on reducing the complexity by helping marketers zoom out from the messages they are sending to focus on the goals they are trying to achieve. We explored a few different ways to do this, but the most promising one is our life-cycle maps idea. In testing, we found that marketers had no problem getting started and that they loved the high-level view of their work.” The CEO needs to know that the team is finding a solution that customers love and that drives the outcome he cares about.
When a stakeholder pushes an idea that you suspect won’t work, help them reach that conclusion on their own. You can do this by story mapping their idea together and generating assumptions together. When your stakeholder sees what assumptions their idea is based upon, you can share what you’ve learned about those assumptions in your past assumption tests. This helps your stakeholder reach their own conclusions about their own ideas. Take stock of the decision that needs to be made and focus on the best outcome, given what you have to work with. Show, don’t tell.
Developing Your Continuous Discovery Habits
If I’m going to do good design work, I need to get close to my customer. Start small. Iterate from there. Build your trio. If you are a product manager, find a designer and an engineer to partner with. Consult them on key decisions. Work together to decide what to build. If your company doesn’t hire designers, find someone who is design-minded. Look for people who are good at simplifying complex concepts, have firsthand experience with your customers, and have an abundance of empathy for your customers’ challenges. Ask yourself: how can I include all three disciplines in as many discovery decisions as I can? Make next week look better than last week. Repeat. The keystone habit of continuous discovery is to start talking to customers. The keystone habit builds motivation for the subsequent habits. This exact pattern emerges among product teams who develop a weekly habit of customer interviews. When product teams engage with their customers week over week, they don’t just get the benefit of interviewing more often; they also start rapid prototyping and experimenting more often. They remember to doubt what they know and to test their assumptions. They do a better job of connecting what they are learning from their research activities with the product decisions they are making. Continuous interviewing is the keystone habit for continuous discovery. This is it.
Keystone habits start a process that, over time, transforms everything.
Charles Duhigg, The Power of Habit: Why We Do What We Do in Life and Business
Find a single customer to talk to, or start by talking with someone who is similar to your customers. Use each conversation to get introduced to another person to talk to. Make next week better than last week, and you’ll find yourself on a path to continuous interviewing. No matter your situation, this is the habit to start with. The best time to advocate for discovery is when a feature falls short of expectations. You can make great strides yourself by focusing on how you work.
Use your retrospectives to reflect on your discovery process. Add a couple of reflective questions to this meeting. “What did we learn during this sprint that surprised us?” Perhaps we learned a new insight in a customer interview, or we ran into a feasibility hurdle that required us to redesign a solution. Make a list. Then ask, “How could we have learned that sooner?” The answers to these questions will help you improve your discovery process. Was there a faulty assumption that you neglected to uncover? Did it not get prioritized as one of your leap-of-faith assumptions for testing? Surprises help us improve; take the time to learn from them.
Avoid focusing on why a given strategy won’t work (“That will never work here”) instead of on what is within your control. Don’t be the annoying champion for the “right way” of working; there is no one right way to do discovery. Don’t let perfect be the enemy of good. Adopt a continuous-improvement mindset: if next week looks better than last week, you are on the right track. Start with what is in your control. Get started by talking to anyone who is like your customer, and iterate from there.