Structured Research 101
Every designer knows that designing on assumptions gets you a decent solution at best and a failed one at worst. User research is how we validate those assumptions and build solutions backed by evidence.
We know research is necessary, but what exactly is it? It goes beyond just interviewing users for feedback or running an A/B test. While those are valid research methods, having access to a wide range of methods will empower you to get the data you need to take your product design to the next level.
As a product designer, you may not always be the one performing user research. However, having a firm grasp of the research methods used will enable you to collaborate with your product strategist or researcher and allow you to jump in to help when needed.
When is user research required?
User research will be deeply integrated into every phase of the design process — especially if your team is following the practices of Lean UX. Research and testing throughout the design process allow us to make better designs faster. However, the methods you use will vary depending on the phase of the design process you’re in.
Understanding
Whether you’re trying to create an entirely new product or just a new feature — taking your time to research during this phase will be essential to the success of the following stages.
While not always the case, you will primarily focus on qualitative and attitudinal research methods during this phase.
Definition
The definition phase synthesizes everything you learned in the understanding phase. Therefore, you will generally not need to employ any research methods during this phase.
Ideation
During the ideation phase of design, you will start moving into more behavioral research methods. Getting wireframes and conceptual work in front of users will help you rapidly test assumptions to determine what you should ultimately spend time building out into a prototype.
Prototype
During the prototyping phase, you will be moving more heavily into behavioral and quantitative methods. You’ve researched the problem, validated your assumptions, and refined your tested concepts into a working prototype. Now it’s time to fine-tune what you’ve made and ensure users understand how it works and what it’s for.
Implementation
This is where the rubber meets the road, and you get to see your solution in action. Research must continue into this phase. Testing after launch can take a variety of forms: A/B testing, eye tracking, heat maps, KPIs, and more. This is where testing gets truly quantitative, and you will use data to determine overall success and identify areas that need improvement.
How to get buy-in to research and test
So we all get it — user research is essential, and not taking the time to perform it will waste both time and money on designing a solution that might not even work. It should be easy to make sure research is accounted for, right? Not always.
In a perfect world, all of our projects would have plenty of time and resources allocated for user research. Unfortunately, we don’t live in one. If you’ve spent any time on a product team, you’ve probably run into scenarios where testing was needed but not invested in because of time, resources, or a lack of understanding of why it’s needed.
So how do you make sure you get buy-in? It boils down to the right timing and communicating both the need and value of research.
Timing your ask
It’s important to think a few steps ahead. If you can get in the room before design and development deadlines are hard set, do it, and advocate for allocating time to research. Once those timelines are established, your chances of getting buy-in for research are slim to none.
The same goes for if you’re in the middle of a project. Do you know you want to test prototypes before handing them to developers? Make sure you advocate for that before you finish your prototypes. Even when you begin to wrap up a project — make sure you’re advocating for testing post-launch. Early communication is critical.
Communicating the value of research
Communicating the value of research is just as important as timing. No one is going to allocate time and resources to something that doesn’t add value. Therefore, it’s crucial that you explain the importance of user research. Make it clear that proceeding without research would almost certainly waste time and money. One of my favorite phrases summarizes this nicely.
Slow down to go fast.
What does that mean? It means that when we take a few steps back and don’t rush toward the finish line, we can make better decisions earlier on. As a result, we avoid costly, time-consuming rework, we keep customers we would otherwise lose by failing to meet their needs, and we get to the right solution far faster.
User research methods
Now to answer the big question: how do you perform effective user research? There’s not a prescriptive answer. However, I can tell you the different methods and their general use. Every problem is unique. The techniques you employ at the various stages of the design process will vary depending on the problem you’re trying to solve, how quickly you need to solve it, what resources you have available, and even what kinds of users you’re working with.
Understanding research intent and outcome
The key to determining what methods you will employ is understanding your research intent and your intended outcome.
Attitudinal vs. Behavioral
Your research intent can generally fall under two categories. First, ask yourself, do you want to understand what users think, or do you want to understand what users do?
As the name suggests, attitudinal research gives you insight into how users think and feel about something. You can use attitudinal research at any stage of the design process, but it is most helpful when you are attempting to understand and define the problem space.
Behavioral research allows you to see what users do. It can be leveraged to test concepts and prototypes and to gauge the success of designs in production. Behavioral research is also fantastic for testing usability and identifying friction points.
Qualitative vs. Quantitative
All methods produce data; however, the format and use of that data are divided into either quantitative or qualitative. Understanding your desired outcome will help you determine which of these method types you should employ.
The outcome of qualitative research focuses on concepts, feelings, and explanations as to why a user feels the way they feel or does the things they do. Thus, qualitative data will help you put together the user’s story and uncover their needs.
Quantitative data is far more objective. It is the documentation of what a user does. It is generally numerical data collected during a user’s real interactions with a product. Thus, quantitative data is fantastic for usability testing and validating design decisions.
Both of these research types provide outcomes that inform and validate design decisions. They are often used in conjunction to get the full story about users and truly validate assumptions. While both certainly inform and validate, qualitative data is generally employed to inform design decisions, and quantitative is usually used to validate in-production design decisions.
Qualitative
While this is not an exhaustive list, I’ve listed the more common qualitative research methods below. Remember, combining research methods and adapting them to your needs will help you get the most value from your research. As with everything, these are just guidelines, not prescriptive rules.
Interviews — Attitudinal
Interviews are arguably one of the most common research methods. They can be as short or as long as you need and can provide many insights to inform your designs. Interviews are when a researcher meets with a user and asks them questions about the problem they’re trying to solve.
You should use interviews if:
- You want to build or validate user personas
- You want to get data to inform user journeys
- You want to get data to inform ideas
- You want to better understand the problem space
Contextual interviews — Attitudinal & Behavioral
Contextual interviews, otherwise known as participant observation, are a more hands-off type of testing. This method involves watching a user interact with a product and then following up with questions and observations about their product use. Contextual interviews are a fantastic way to test a product’s usability and gain a wide variety of insights in a short amount of time.
Focus groups — Attitudinal
Focus groups are best used in conjunction with other research methods. They involve discussing a problem or product with a group of 6–12 people. The moderator will either show a product demo or ask questions regarding a specific topic. Participants can respond to the moderator as well as engage in dialogue with other participants.
This research method is a great way to inform your designs during the beginning phases of exploration.
Field Studies — Attitudinal & Behavioral
A field study involves observing users in the environment where they would typically interact with your product, rather than in a lab or over a remote test. Field studies can vary significantly in implementation depending on the problem you’re trying to solve. They can be purely observational or involve some level of interviewing users about the task at hand.
While field studies have many uses, they can be especially beneficial when considering “phygital” interactions. A great example of this is when I worked for a large retailer. Field studies were used to understand how people interact with the app/website in-store vs. at home.
Diary Studies — Attitudinal & Behavioral
Diary studies require a high level of engagement from participants over an extended period. Users document real-time thoughts, needs, and actions while performing a task or going about their day-to-day activities. Diary studies are great when you want to understand users’ behaviors or habits over time, which can be challenging to simulate in a lab or other controlled environment.
Quantitative
This is by no means the complete list of quantitative research methods; however, it covers the more commonly used methods. Remember to adapt these to your unique problem and approach.
Product analytics — Behavioral
This method generally requires a product that has made it into production. Once you have real users interacting with your product, you can use analytics tools such as FullStory, Google Analytics, and Hotjar, to name a few. The metrics you track will vary depending on your goal and the product.
Eye-tracking — Behavioral
This method generally requires an in-production product; however, you can also employ this method for prototypes depending on the tools/resources you have available. Eye-tracking is a great way to gain quantitative insights into what users pay the most attention to on the page.
Heat mapping — Behavioral
A heat map generates a visual picture as to where and how your users interact with the product. Similar to eye tracking, you will most likely use this on features and products already in production.
Multi-variant testing (A/B testing) — Behavioral
Once again, you will most likely be employing multi-variant tests on products already in production, although this method is sometimes used during the prototyping phase. Running multi-variant tests is a great way to test slight variations of a design to determine which will bring you to the desired outcome. Multi-variant tests are often used on marketing sites to determine which variants drive more conversions.
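To make the “desired outcome” comparison concrete, here is a minimal sketch of how you might check whether a difference in conversion rates between two variants is statistically meaningful. The numbers are invented, and in practice your testing platform will usually do this for you; this is a standard two-proportion z-test using only Python’s standard library.

```python
from math import sqrt

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B conversion comparison.

    Returns the z-statistic; |z| > 1.96 suggests the difference
    is significant at roughly the 95% confidence level.
    """
    p_a = conv_a / n_a                        # variant A conversion rate
    p_b = conv_b / n_b                        # variant B conversion rate
    p = (conv_a + conv_b) / (n_a + n_b)       # pooled conversion rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # standard error of the difference
    return (p_b - p_a) / se

# Hypothetical result: 120/2000 conversions on A vs. 165/2000 on B
z = ab_significance(120, 2000, 165, 2000)
print(f"z = {z:.2f}")  # here |z| > 1.96, so B's lift looks significant
```

Without a check like this, a small difference between variants can easily be noise rather than a real win.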
Usability benchmarking — Behavioral
Usability benchmarking is slightly different from the more qualitative version of usability testing. Instead of gauging overall usability and validating task flows, the goal is to establish benchmark statistics for key tasks. Benchmarking is key to tracking progress across iterations. Some examples of usability benchmarks include:
- The average time to purchase
- Customer retention rate
- The number of task abandonments
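As an illustration of how benchmarks like these might be computed, here is a small Python sketch over hypothetical task logs. The data shape and values are invented for the example; real numbers would come from your analytics or testing tool.

```python
from statistics import mean

# Hypothetical task-completion records from one benchmarking session:
# each entry is (seconds_to_complete, completed). Abandoned tasks have
# no completion time.
sessions = [
    (42.0, True),
    (75.5, True),
    (None, False),   # abandoned task
    (58.2, True),
    (None, False),   # abandoned task
]

completed = [t for t, done in sessions if done]
avg_time = mean(completed)                              # e.g. average time to purchase
abandonments = sum(1 for _, done in sessions if not done)
completion_rate = len(completed) / len(sessions)

print(f"avg time: {avg_time:.1f}s, abandoned: {abandonments}, "
      f"completion rate: {completion_rate:.0%}")
```

Recording the same metrics after each design iteration is what turns these numbers into a benchmark you can track over time.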
Card sorting — Behavioral
Card sorting is used to understand how users group information. This method is generally utilized to determine page and site architecture. A typical card sorting session will involve a moderator asking participants to organize and group cards in the way they think makes sense.
Tree testing — Behavioral
Tree testing goes hand-in-hand with card sorting. The key difference is the outcome. The outcome of card sorting is used to inform your information architecture, while tree testing is used to validate your information architecture.
A tree test will involve a tree (a chart of your information hierarchy) and a task. A moderator will ask participants how they would go about approaching the task using the tree. You do not need wireframes or sketches for this test.
Surveys — Attitudinal
Surveys can be a great way to get data about your users’ experience with your product. The most common quantitative surveys are customer satisfaction surveys. For quantitative results, it’s important to avoid asking open-ended questions.
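For example, a closed-ended satisfaction question can be scored numerically. The sketch below computes a CSAT-style score, assuming a 1–5 rating question and made-up responses; CSAT is commonly reported as the percentage of respondents who answer 4 or 5.

```python
# Hypothetical responses to "How satisfied are you?" on a 1-5 scale.
responses = [5, 4, 3, 5, 2, 4, 5, 1, 4, 5]

# CSAT: share of "satisfied" responses (ratings of 4 or 5).
satisfied = sum(1 for r in responses if r >= 4)
csat = satisfied / len(responses) * 100
print(f"CSAT: {csat:.0f}%")
```

Because every answer maps to a number, scores like this can be compared across releases, which is exactly what open-ended questions can’t give you.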
First-click testing — Behavioral
First-click testing can be used to validate the structure of your experience. For example, if a user lands on a page, they should immediately find the task/information that matches their intent. Thus, first-click testing is a great way to test efficacy by determining if the user initiates the right task or the wrong task depending on what you intend users to do on that page. You can perform this test on wireframes, prototypes, or live products.
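The core metric here is simply the share of participants whose first click landed on the intended target. A minimal sketch with invented element names and data:

```python
# Hypothetical first-click test results: the element each participant
# clicked first on a landing page, vs. the intended target.
intended = "start-checkout"
first_clicks = ["start-checkout", "nav-menu", "start-checkout",
                "search-bar", "start-checkout", "start-checkout"]

success_rate = first_clicks.count(intended) / len(first_clicks)
print(f"first-click success: {success_rate:.0%}")
```

A low success rate is a strong signal that the page’s structure doesn’t match user intent, even before anyone completes a full task.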
How to be a good moderator
Being a good moderator starts with being able to determine whether or not active moderation is needed. Not all tests require moderation. Whether or not you moderate will depend on your time, resources, and desired outcomes.
Unmoderated testing
There are upsides and downsides to unmoderated testing. The key thing to consider here is the phenomenon known as the Hawthorne effect: individuals modify aspects of their behavior when they become aware of being observed. Because no moderator is watching in real time, unmoderated tests can reduce this bias.
Unmoderated testing does well when you need quick and affordable insights into users’ actions when interacting with your product. However, your research should not be solely unmoderated as you cannot gain deeper insights. Unmoderated testing does best in conjunction with moderated testing.
Moderated testing
Moderated testing is a great way to gain in-depth insights about your users and how they interact with your product. It allows you to ask follow-up questions, gather both attitudinal and behavioral data, and get a fuller picture of the user journey.
Moderated testing is excellent to use during all phases of the design process; however, it particularly suits the pre-implementation stages.
Moderating best practices
The best way to become a good moderator is by actually moderating tests. Moderation is an art and a science. It requires adapting to new information, keeping the participant on track, and developing trust with the participant in a very short amount of time. Listed below are some moderation best practices. Remember, best practices are only meant to inform your moderation style; as you practice, you’ll develop your own rhythm.
Building rapport
One of the most important things you can do as a moderator is, first and foremost, to be friendly. It’s easy to jump straight into the test or interview, especially when you’re on a time crunch. However, you’ll get more honest and natural responses if you take the time to get to know the participant and establish a level of trust upfront. When creating your discussion guide, make sure to set aside at least five minutes for informal introductions and casual conversation.
Respond with curiosity
When your participant makes observations and statements about the product you’re testing or the topic you’re discussing, it’s crucial to keep a neutral response. For example, instead of reacting with a comment about their observation, respond with a question. Then, ask follow-up questions to probe for more detail.
Only speak when necessary
There’s a reason why silence is an interrogation tactic. While interviewing a participant is far from an interrogation, remember that people dislike silence. If you leave space for the participant to speak, they generally will.
Answer questions with questions
You may encounter times when a participant asks you questions in response to the product they’re interacting with. It’s essential to flip these questions back around to gain insights into their experience. For example, if a user asks you, “Is this the right thing to do?” instead of validating or explaining, try asking something like, “What do you think is the right thing to do?”
How to write a discussion guide
Writing a discussion guide is a critical element of researching. If you walk into a test or interview unprepared, you won’t get the outcomes you’re looking for, and both you and your participant will feel confused and stressed out.
To write a discussion guide, you need to ask the following questions:
- What research method will you be using?
- What is your objective?
- How long do you have?
Once you answer those questions, you’ll be ready to start writing your discussion guide. You can typically break your guide into five main parts:
- The introduction
- Warm-up questions
- Task performance (if applicable)
- Research questions
- Conclusions
These may vary from method to method, but you will generally follow this structure for most of your moderated testing.
Tips for writing good research questions
Knowing what to ask is hard. Knowing how to ask it can be even more challenging. It’s essential to keep these tips in mind when asking questions to get the most out of your research.
Avoid leading questions
Leading questions are arguably one of the more challenging things to avoid when conducting a test. It is, however, one of the most important things to remember. Asking leading questions is a fast way to skew your test results and end up with unreliable data.
A leading question is a question that pushes the participant to respond in a certain way. Leading questions are embedded with assumptions, implications, and coercion.
🚫 Avoid questions like:
- “Our app is pretty user-friendly, isn’t it?”
- “How much did you enjoy that experience?”
- “Do you always do X?”
✅ Instead, ask:
- “How would you rate your experience with this product?”
- “How would you describe your experience?”
- “How often do you do X?”
Ask open-ended questions
While not a hard rule, it’s beneficial to try and ask open-ended questions. This will allow your participant to share their thoughts and tell their story.
How to recruit users
All of the world’s research methods and discussion guides won’t get you anywhere without actual participants. While there are various ways to recruit users, they generally fall into two categories (guerrilla research aside).
Existing Users
If you have them, existing users can be a great place to start. You can send an email out specifically asking for volunteers to participate. I always recommend incentivizing participation, as you’ll most likely have more success if there’s a reward.
Recruitment Tools
The next and arguably most common method is by using recruitment and research tools. The benefit to using these is that you’ll be able to screen, test, and incentivize all in the same platform. While there are many tools you can use, the most common are:
- Lookback
- UserTesting
- Maze
Synthesizing your data in a way that matters
Last and certainly not least is synthesizing your data. Depending on how many tests you run, you’ll be surprised at the amount of data you’ll collect over time. Synthesizing and presenting your results is key to showing the value of user testing, using the data to inform and validate design decisions, and communicating findings to stakeholders.