A survey is a set of questions you use to collect data from your target audience. It is one of many methods available for user research. Surveys are versatile: they can be used at any project stage and yield both rich insights and structured, consistent data. They can also gather data from many participants with relatively little effort.

Surveys can be a quick, easy, and inexpensive way to get answers, but only if you put effort into the design: poorly designed surveys won't provide valuable insights.

Exercise #1

Why surveys are useful

Surveys are a versatile research tool that can be used at any stage of the design process. They can help gather both qualitative and quantitative data. Remember that they can only provide attitudinal data — data about what people think and how they feel. This may differ from what people actually do.

One of the reasons why surveys are so popular is that they allow you to get a lot of data at little cost. There are many platforms — including SurveyMonkey, Typeform, and Google Forms — where you can host surveys.

Exercise #2

Understand your goals

Before you create your survey, you need to know what you want to learn from survey participants. Your goals will determine the questions you ask. Knowing your goals will also let you combine surveys with other research methods for more accurate results.

Surveys are typically used as an evaluative research method, but they also have a place in generative research and as a continuous research method. UX survey data supports and complements website analytics and UX metrics collected through A/B testing, heatmaps, usability testing, and session recordings.[1]

A well-constructed questionnaire can be reused repeatedly to make like-for-like comparisons between the responses and your product iterations.

Exercise #3

Avoid bias

Bias is a tendency to favor one option over another. It is an inevitable part of the human psyche.

Bias is often built into the language we use. For example, you might ask, "How difficult is this product to use?" without a second thought. However, this phrasing subtly pushes the reader towards the idea of difficulty and is considered a leading question. Some other things that can create bias are question and answer order, survey sampling, and unbalanced scales.

As UX researchers, our goal is to minimize bias so that the information we collect is as objective as possible.

Exercise #4

Avoid leading questions

Leading questions are the most obvious manifestation of confirmation bias: they steer people toward a certain answer. Compare these two questions:

  1. "Our previous feedback survey showed that most people prefer breakfast as their favorite meal. Do you agree?"
  2. "If you had to choose just one, which meal do you prefer: breakfast, lunch, or dinner?"

The first question introduces an assumptive statement and then asks the respondent for feedback. As a result, respondents are more likely to agree with the statement.

Leading questions tend to produce bad data that can skew your research findings and lead you to incorrect conclusions.

Exercise #5

Sampling bias

Sampling bias is an error related to selecting the survey respondents. It happens when a survey sample is not completely random.

If certain types of survey takers are more or less likely to participate in your research, you might be introducing sample selection bias into your research. For example, creating an online poll with the question "Do you have internet access?" will lead to almost 100% of participants answering "Yes."

To avoid sampling bias, distribute your survey so that all types of respondents have a chance to take it. This might mean using several distribution channels and collection methods. Encourage diversity in the sample by asking yourself, “Who haven’t we talked to yet?” And be careful about the conclusions you draw from any one study.[2]
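
As a simple illustration, here is a minimal Python sketch of simple random sampling from a full user list, so that every user has an equal chance of being invited regardless of how they reached you. The user records and channels are hypothetical:

    import random

    # Hypothetical user records drawn from several channels,
    # not just the people who happen to visit your website.
    users = [
        {"id": 1, "channel": "email"},
        {"id": 2, "channel": "in-app"},
        {"id": 3, "channel": "phone"},
        # ...the rest of your user base
    ]

    # Simple random sampling: every user has an equal chance of selection.
    invitees = random.sample(users, k=2)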

Exercise #6

Question order bias

The order of both questions and answers could cause your survey respondents to provide biased answers.

In some cases, the initial questions of your survey could influence the answers to later questions. For example, if your first question asks how satisfied customers are with a highly successful feature, and your second asks about their overall satisfaction, respondents are more likely to give positive answers.

As for answers, in online and print surveys, respondents are more likely to choose from the first few answer options. During phone surveys, respondents are more likely to opt for the later options.

A way to avoid order bias is to randomize the order. For example, you can group related questions into blocks, mix up these blocks, and randomize the order of answers — that's what we do here at Uxcel.
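
For instance, here is a minimal Python sketch of this kind of randomization (the questions and answer options are made up):

    import random

    # Hypothetical blocks of related questions; each question lists its answer options.
    blocks = [
        [{"q": "Which meal do you prefer?", "answers": ["Breakfast", "Lunch", "Dinner"]}],
        [{"q": "Which drink do you prefer?", "answers": ["Coffee", "Tea", "Juice"]}],
    ]

    random.shuffle(blocks)  # mix up the order of the blocks...
    for block in blocks:
        for question in block:
            random.shuffle(question["answers"])  # ...and the order of the answers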

Exercise #7

Unbalanced survey scales

Like confirmation bias, unbalanced scales sway answers by limiting users' choices. This type of survey question offers an unequal number of positive and negative options, so the scale is weighted toward one end.

For example, "How much did you enjoy your experience on a scale of 1 (enjoyed it a little) to 5 (enjoyed it a lot)?" doesn't have a negative option. Another example is a question with a scale where the midpoint isn't neutral: "How would you rate your most recent experience at our restaurant?" with options “Great,” “Very good,” “Good,” “Okay,” and “Poor.”

One way to combat this problem is to give respondents the extremes of a point scale (e.g., 1 to 5). Labeling 5 as the high anchor and 1 as the low anchor lets respondents understand that 3 is the midpoint.[3]
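
For example, a balanced 5-point satisfaction scale might be defined like this (the labels are illustrative): the positive and negative options mirror each other around a genuinely neutral midpoint.

    # Illustrative balanced 5-point scale: as many positive options as negative ones,
    # with a truly neutral midpoint at 3.
    scale = {
        1: "Very dissatisfied",
        2: "Dissatisfied",
        3: "Neither satisfied nor dissatisfied",
        4: "Satisfied",
        5: "Very satisfied",
    }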

Exercise #8

Figuring out the right questions to ask

To get the user insights you need, focus your UX survey questions on the problem you're trying to solve. If you don't know what the problem is yet, ask questions to identify users' pain points. This will help you discover the blockers they're experiencing.

A good strategy is to ask closed-ended questions early on, then follow up with open-ended questions that explore the subject deeper.[4]

For example, let's say you want to find out how users feel about a new tool or feature.

Some closed-ended questions you could ask are:

  • Have you used the new feature?
  • How easy or difficult was it to use on a scale of 1 (very difficult) to 5 (very easy)?

And you could follow up with open-ended questions like:

  • What are your first impressions of this feature?
  • What is one thing you would change about this feature?

Exercise #9

Ask one question at a time

Make sure each question focuses on only one concept. Users can have different impressions about different aspects of your product; mixing them up won't give accurate results.

For example, a bad question would be, "How would you rate the usability and design of our app?"

To get accurate findings, split it into two separate questions: one about usability and the other about design.

Exercise #10

Closed questions for quantitative data

Closed-ended questions are narrow in focus and usually answered with a single word or from a small selection of options. For example, "Are you satisfied with this product?" — Yes/No/Mostly/Not quite.

Closed-ended questions give limited insight but can easily be analyzed for quantitative data. For example, one of the most popular closed questions in marketing is the Net Promoter Score (NPS) question, which asks people, "How likely are you to recommend this product/service on a scale from 0 to 10?" and uses numerical answers to calculate overall score trends.
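The calculation itself is simple: respondents scoring 9-10 count as promoters, 0-6 count as detractors, and the score is the percentage of promoters minus the percentage of detractors. Here is a minimal Python sketch with made-up scores:

    def nps(scores):
        """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        return 100 * (promoters - detractors) / len(scores)

    print(nps([10, 9, 8, 7, 6, 10, 3, 9]))  # 25.0 for this made-up sample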

Closed questions work best when used early in the survey, with more open-ended questions following.

Exercise #11

Open questions for qualitative data

Open-ended questions are broad and can be answered in detail. For example, "What do you think about this product?" Open-ended questions help you see things from a customer's perspective as you get feedback in their own words instead of stock answers.

Data from these questions is usually qualitative, although recurring themes can yield quantitative data too, for example when a semantic analysis tool surfaces them. You can also analyze open-ended answers in spreadsheets, look for qualitative trends, and spot standout terms with word cloud visualizations.
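
As a crude stand-in for a semantic analysis tool, here is a minimal Python sketch that counts recurring words across hypothetical free-text answers:

    from collections import Counter

    # Hypothetical free-text answers to an open-ended question.
    responses = [
        "The search is slow and the filters are confusing",
        "Search results feel slow to load",
        "I would simplify the filters",
    ]

    # Drop common filler words, then count what remains.
    stopwords = {"the", "is", "and", "are", "to", "i", "would", "feel"}
    words = [w for r in responses for w in r.lower().split() if w not in stopwords]
    print(Counter(words).most_common(3))  # e.g. [('search', 2), ('slow', 2), ('filters', 2)]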

Exercise #12

Survey length

Conventionally, the longer a survey takes to complete, the less likely it is to be completed. It's not only about the number of questions but also about the overall completion time and how long the survey feels.

To keep respondents engaged:

  • Ask only relevant questions
  • Limit the number of questions
  • Be mindful of how much time it takes to complete the survey
  • Allow respondents to skip questions they don't want to answer
  • Keep participants informed about how long the survey will take and how many questions are left

Exercise #13

Providing incentives

Incentivizing users to take a survey is generally a good way to increase the number of responses. However, there are a couple of caveats.

Offering the wrong incentives could deter some users. For example, if a user isn't happy with your product, a free month may not motivate them to take the survey. Another concern is introducing bias: respondents might feel obliged to answer positively because of the promised incentive.

Here are some best practices for providing incentives:

  • Separate cohorts of users into tiers. You might have a tier for general users, where you're just looking for demographic or behavioral information, and then more specific tiers for users with specialized knowledge in a particular area.
  • Provide financial incentives for each tier. It may be more valuable for you to dig deeper into a specific cohort of users and offer them a greater financial incentive; many companies offer much higher incentives to secure specific user participants.
  • Offer compensation in a variety of ways. Direct payments via PayPal or gift cards at places like Amazon are generally popular.