Creating Surveys for UX Research
Learn how to create effective surveys for UX research that enable you to gather valuable user feedback and uncover insights
A survey is a set of questions you use to collect data from your target audience. It is one of many research methods that you can use for user research. This tool is very versatile — it can be used at any project stage and yield a combination of rich insights and structured, consistent data. Another benefit is that surveys can gather data from many participants with relatively low effort.
Surveys can be a quick, easy, and inexpensive way to get answers to your questions. Put effort into creating your survey, though: poorly designed surveys won't provide valuable insights.
Surveys can help you gather both qualitative and quantitative data. Remember, however, that they only provide attitudinal data: data about what people think and how they feel. This may differ from what people actually do.
One of the reasons why surveys are so popular is that they allow you to get a lot of data at little cost. There are many platforms — including SurveyMonkey, Typeform, and Google Forms — where you can host surveys.
Before you create your survey, you need to know what you want to learn from survey participants. Your goals will determine the questions you ask. Knowing your goals will also let you combine surveys with other research methods effectively.
Surveys are typically used as an evaluative research method, but they also have a place in generative research.
A well-constructed questionnaire can be reused across product iterations to make like-for-like comparisons between rounds of responses.
Bias is a tendency to favor one option over another. It is an inevitable part of the human psyche.
Bias is often built into the language we use. For example, you might ask, "How difficult is this product to use?" without a second thought. However, this phrasing subtly pushes the reader towards the idea of difficulty and is considered a leading question. Some other things that can create bias are question and answer order, survey sampling, and unbalanced scales.
Asking leading questions is the most obvious form of confirmation bias. These questions steer people toward a certain answer. Compare these two questions:
- "Our previous feedback survey showed that most people prefer breakfast as their favorite meal. Do you agree?"
- "If you had to choose just one, which meal do you prefer: breakfast, lunch, or dinner?"
The first question introduces an assumptive statement and then asks the respondent for feedback. As a result, respondents are more likely to agree with the statement.
Leading questions tend to collect bad data that can skew your results.
Sampling bias is an error related to selecting the survey respondents. It happens when a survey sample is not completely random.
If certain types of survey takers are more or less likely to participate in your survey, the sample won't accurately represent your target audience, and your results will be skewed.
To avoid sampling bias, make sure your survey is distributed in a way that all types of respondents get a chance to respond to it. This might mean using several distribution channels and collection methods. Look for ways to encourage diversity in the sample by asking yourself, “Who haven’t we talked to yet?” Also, be careful of the conclusions drawn from any one study.[2]
The order of both questions and answers could cause your survey respondents to provide biased answers.
In some cases, the initial questions of your survey could influence the answers to later questions. For example, if your first question asks how satisfied customers are with a highly successful feature, and your second asks about their overall satisfaction, respondents are more likely to give positive answers.
As for answers, in online and print surveys, respondents tend to select from the first few answer options. During phone surveys, respondents are more likely to choose from the later options.
A way to avoid order bias is to randomize the order. For example, you can group related questions into blocks, mix up these blocks, and randomize the order of answers — that's what we do here at Uxcel.
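Most survey platforms handle randomization for you, but the logic is simple enough to sketch. Here's a minimal Python example using hypothetical question data; note that ordinal scales (like 1 to 5) should keep their natural order and shouldn't be shuffled:

```python
import random

# A minimal sketch of order randomization (hypothetical question data).
# Related questions are grouped into blocks; the block order and the
# order of nominal answer options are shuffled for each respondent.
# Ordinal scales (e.g., 1 to 5) keep their natural order.
blocks = [
    [{"q": "Which meal do you prefer?",
      "options": ["Breakfast", "Lunch", "Dinner"], "shuffle": True}],
    [{"q": "How easy was the new feature to use?",
      "options": ["1", "2", "3", "4", "5"], "shuffle": False}],
]

random.shuffle(blocks)  # mix up the question blocks
for block in blocks:
    for question in block:
        if question["shuffle"]:
            random.shuffle(question["options"])  # shuffle nominal answers only
        print(question["q"], question["options"])
```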
Like confirmation bias, unbalanced scales sway answers by limiting respondents' choices. This type of survey question offers an unequal number of positive and negative options to choose from, which weights the scale toward one direction.
For example, "How much did you enjoy your experience on a scale of 1 (enjoyed it a little) to 5 (enjoyed it a lot)?" doesn't have a negative option. Another example is a question with a scale where the midpoint isn't neutral: "How would you rate your most recent experience at our restaurant?" with options “Great,” “Very good,” “Good,” “Okay,” and “Poor.”
A way to combat this problem is to give respondents clearly labeled extremes on a point scale (e.g., 1 to 5). Labeling 5 as high and 1 as low lets respondents understand that 3 is the midpoint.[3]
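For illustration, here's what a balanced five-point satisfaction scale might look like as data (the labels are hypothetical):

```python
# A balanced five-point scale (hypothetical labels): equal numbers of
# negative and positive options, with a truly neutral midpoint.
balanced_scale = [
    "Very dissatisfied",  # 1
    "Dissatisfied",       # 2
    "Neutral",            # 3 (the midpoint)
    "Satisfied",          # 4
    "Very satisfied",     # 5
]
```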
To get the user insights you need, focus each question on what you actually want to learn.
A good strategy is to ask closed-ended questions early on, then follow up with open-ended questions that explore the subject deeper.[4]
For example, let's say you want to find out how users feel about a new tool or feature.
Some closed-ended questions you could ask are:
- Have you used the new feature?
- How easy or difficult was it to use on a scale of 1 (very difficult) to 5 (very easy)?
And you could follow up with open-ended questions like:
- What are your first impressions of this feature?
- What is one thing you would change about this feature?
Make sure each question focuses on only one concept. Users can have different impressions about different aspects of your product; mixing them up won't give accurate results.
For example, a bad question would be, "How would you rate the usability and design of our app?"
To get accurate findings, split it into two separate questions: one about usability and the other about design.
Closed-ended questions are narrow in focus and usually answered with a single word or from a small selection of options. For example, "Are you satisfied with this product?" — Yes/No/Mostly/Not quite.
Closed-ended questions give limited insight but can easily be analyzed for quantitative data. For example, one of the most popular closed questions in marketing is the Net Promoter Score (NPS) question, which asks people, "How likely are you to recommend this product/service on a scale from 0 to 10?" and uses numerical answers to calculate overall score trends.
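The NPS arithmetic itself is straightforward: respondents who answer 9 or 10 count as promoters, 0 through 6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. A minimal Python sketch:

```python
def net_promoter_score(ratings):
    """NPS: % of promoters (9-10) minus % of detractors (0-6).

    Respondents answering 7 or 8 are passives and don't affect the score.
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# 4 promoters, 3 passives, 3 detractors out of 10 responses:
print(net_promoter_score([10, 9, 9, 10, 8, 7, 7, 6, 5, 3]))  # 10.0
```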
Closed questions work best when used early in the survey, with more open-ended questions following.
Open-ended questions are broad and can be answered in detail. For example, "What do you think about this product?" Open-ended questions help you see things from a customer's perspective as you get feedback in their own words instead of stock answers.
Data from these questions is usually qualitative, although themes may arise that provide quantitative data too — for example, by using a semantic analysis tool to bring out the recurring themes. Other ways to analyze open-ended questions are using spreadsheets, viewing qualitative trends, and spotting elements that stand out with word cloud visualizations.
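Before reaching for a dedicated semantic analysis tool, you can get a rough sense of recurring themes with a simple word count. Here is a minimal Python sketch over hypothetical responses:

```python
from collections import Counter
import re

# Hypothetical free-text answers to "What would you change about this feature?"
responses = [
    "The onboarding was confusing and slow.",
    "Loading felt slow on mobile.",
    "I found the navigation confusing.",
]

stopwords = {"the", "and", "was", "on", "i", "a", "felt", "found"}
words = [
    w
    for text in responses
    for w in re.findall(r"[a-z']+", text.lower())
    if w not in stopwords
]
print(Counter(words).most_common(5))
# [('confusing', 2), ('slow', 2), ('onboarding', 1), ('loading', 1), ('mobile', 1)]
```

The counts won't replace a careful qualitative read, but they point you toward themes (here, confusion and speed) worth digging into.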
Conventionally, the longer a survey takes to complete, the less likely it is to be completed. It's not only about the number of questions but also about the overall completion time and how long the survey feels.
To increase the engagement of your surveys:
- Ask only relevant questions
- Limit the number of questions
- Be mindful of how much time it takes to complete the survey
- Allow respondents to skip questions they don't want to answer
- Keep participants informed about how long the survey will take to complete and how many questions are left
Incentivizing users to take a survey is generally a good way to increase the number of responses. However, there are a couple of caveats.
Offering the wrong incentives could deter some users. For example, if a user isn't happy with your product, offering a free month may not motivate them to take the survey. Another concern is introducing bias into your survey: respondents might feel obliged to answer positively because of the guaranteed incentive.
Here are some best practices for providing incentives:
- Separate cohorts of users into tiers. You may have a tier for general users, where you're just looking for demographic or behavioral information, and more specific tiers for users with specialized knowledge in a particular area.
- Provide financial incentives for each tier. It may be more valuable for you to dig deeper into a specific cohort of users and offer them a greater financial incentive; many companies offer much higher incentives to secure specific user participants.
- Offer compensation in a variety of ways. Direct payments via PayPal or gift cards from retailers like Amazon are generally popular.