While the idea of outsourcing your entire user research process to AI tools may sound enticing, it's crucial to discern reality from hype. Currently, AI's capabilities have limitations, especially in areas like nuanced data analysis and unbiased insights. Understanding the potential and pitfalls of these tools can help you navigate the evolving landscape of AI-driven user research effectively without unrealistic expectations.

Exercise #1

Question the validity of AI outputs

One of the biggest challenges of using AI in user research analysis is establishing trust in the generated data.[1] How can you be certain that the insights AI provides are reliable and grounded in actual user research data rather than fabricated?

To validate AI-driven insights in user research, follow a three-step approach:

  • Scrutinize the source data for quality. For instance, if you're gathering and analyzing feedback on a mobile app's performance, confirm that the collected user comments are on topic and recorded accurately.
  • Rerun the analysis using the same parameters to check for consistency. While slight variations are normal, major discrepancies may indicate an issue with the AI tool.
  • Manually analyze a sample set of comments. This hands-on approach allows you to independently assess if the comments align with the insights identified by the AI. If your own analysis corroborates the AI-generated insights, it strengthens confidence in their validity.
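The rerun-and-compare step above can be sketched in code. In this illustrative Python snippet, `analyze` and `check_consistency` are hypothetical names, and `analyze` is a placeholder for whatever AI tool your team actually calls; the helper reruns it several times and flags comments whose labels drift between runs:

```python
from collections import Counter

def check_consistency(analyze, comments, runs=3, threshold=0.1):
    """Rerun an AI analysis and flag comments whose labels drift.

    `analyze` is a stand-in for your AI tool: it takes a list of
    comments and returns one label per comment.
    """
    results = [analyze(comments) for _ in range(runs)]
    flagged = []
    for i, comment in enumerate(comments):
        labels = [run[i] for run in results]
        # Share of runs that agree with the most common label
        agreement = Counter(labels).most_common(1)[0][1] / runs
        if agreement < 1.0 - threshold:
            flagged.append((comment, labels))
    return flagged
```

Slight variation between runs is tolerated via `threshold`; anything the helper flags is a candidate for the manual review described in the third step.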
Exercise #2

AI can’t detect and understand social nuances of human behavior

AI's inability to grasp the subtleties of human behavior, particularly in social contexts, poses a challenge in user research. For instance, it may struggle to interpret sarcasm, humor, or cultural references in user comments and feedback. This limitation can lead to misinterpretations or incomplete insights, potentially skewing research findings.

To circumvent this, a hybrid approach is effective. Combine AI's efficiency with human expertise by manually cleaning up the source data and rephrasing any ambiguous comments for clarity before you run an AI analysis. For example, consider a comment like this: "Oh, fantastic! Another update that ensures that I spend more time on the app figuring out how to do even the most basic of tasks. Thanks, developers!" An AI tool might categorize this comment as positive due to the presence of words like "fantastic" or "thanks." However, a human analyst is more likely to understand that the user is expressing frustration and that the overall sentiment is negative. This nuanced understanding improves the accuracy of the research analysis.
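To see how a purely word-level reading goes wrong, here is a deliberately naive keyword-based scorer. It's a toy for illustration only; real AI tools are far more sophisticated, but they can stumble on sarcasm in a similar way:

```python
# Toy keyword scorer, purely illustrative: it counts positive and
# negative words with no sense of tone or context.
POSITIVE = {"fantastic", "great", "thanks", "love"}
NEGATIVE = {"broken", "slow", "confusing", "hate"}

def naive_sentiment(comment):
    # Normalize each word: strip punctuation, lowercase
    words = {w.strip(".,!?").lower() for w in comment.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

sarcastic = ("Oh, fantastic! Another update that ensures that I spend more "
             "time on the app figuring out how to do even the most basic of "
             "tasks. Thanks, developers!")
print(naive_sentiment(sarcastic))  # prints "positive" despite the frustration
```

Because "fantastic" and "thanks" both appear, the scorer labels the comment positive, which is exactly the misclassification a human reviewer would catch.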

Exercise #3

AI can’t think like your users

Human behavior is influenced by complex emotions, cultural contexts, and individual experiences that AI struggles to comprehend. Asking AI to emulate user thinking may result in misleading or incomplete insights, potentially leading to misguided design decisions.

Instead, a more effective approach is to directly engage with users through methods like surveys, interviews, and usability testing. This human-centric approach yields authentic insights into their preferences, pain points, and behaviors. For instance, conducting user interviews allows for open-ended discussions, uncovering perspectives that AI might miss. Usability testing provides real-time feedback on product usability, helping to identify specific areas of improvement.

Furthermore, observing user behavior in natural surroundings or using eye-tracking technology provides invaluable data. These methods offer a depth of understanding that AI-driven simulations simply cannot match.

Pro Tip: When collecting information about your users, use AI to suggest effective UX research methods based on your goals.

Exercise #4

AI doesn't understand the context of your research

AI lacks the innate ability to grasp the broader context of a research study. It operates based on patterns and data it has been trained on, without a deeper understanding of the underlying purpose or objectives of the research.

Consider a scenario in user research where a participant provides feedback on a mobile banking app, stating, "The transfer feature is confusing." Human researchers, understanding the context of the study, might follow up with probing questions to uncover specifics, such as which step in the process was unclear. However, AI, lacking contextual understanding, may interpret this comment at face value, potentially leading to a general recommendation to improve the transfer feature without delving into the specific pain points. So, always keep a human in the loop to review and validate AI-generated insights.

Exercise #5

Vague summaries and recommendations

Because they lack the contextual understanding that humans possess, AI user research tools can produce vague summaries and recommendations. For instance, when analyzing user feedback on a mobile app, an AI tool might generate a general recommendation like, "Improve user experience." This suggestion lacks specificity, making it challenging for the design team to take actionable steps.

In contrast, a human researcher could provide a more detailed recommendation after understanding the context of the feedback. They might suggest, "Simplify the app's login process to reduce user friction during onboarding."

Exercise #6

AI doesn’t provide reliable, unbiased solutions

AI's limitations in providing reliable, unbiased solutions stem from its lack of inherent human understanding and potential biases in training data. In a design scenario, consider feedback on a new website layout. AI might suggest prioritizing elements based on quantitative data, but it can't grasp the nuanced user preferences or emotions driving those interactions. For instance, AI may favor larger buttons for click-through rates, but human insights might reveal that users value aesthetic balance and minimalistic design.

Additionally, if the training data is skewed toward a specific demographic, the AI might inadvertently favor that group's preferences and exclude others. This shows that AI can't replace human discernment, which grounds design decisions in a holistic understanding of user needs and emotions; instead, AI must be used alongside it.

Exercise #7

AI doesn’t always provide accurate references

Many AI user research tools available today fail to provide citations. For example, they might not say which session or timestamp a user quote or comment came from. This creates problems for researchers who want to use the information, because they can't verify whether it's accurate.[2] Imagine you're working on a new app. The tool suggests adding a specific feature, but it doesn't say where that idea came from. You might end up spending a lot of time and money on something that wasn't based on real user feedback.

Instead, you can use ChatPDF to turn any article, post, or research paper into a mini knowledge base and ask the chatbot questions about it. Alternatively, ChatGPT now allows you to upload screenshots or PDFs and extract information directly from them.

Exercise #8

Unstable performance and usability issues

Outages, errors, and unstable performance can significantly hinder AI user research tools. When these tools face outages, they become temporarily unavailable, disrupting the research process and potentially causing data loss. Errors in algorithms or processing can produce inaccurate insights, misleading researchers and compromising the quality of findings.

Additionally, unstable performance may lead to inconsistent results, making it challenging to rely on the tool for reliable analyses. This unreliability can delay projects and potentially lead to flawed conclusions.

To mitigate issues related to outages and errors in AI user research tools, have backup methods or tools in place. Regularly monitor performance and maintain a contingency plan to ensure uninterrupted research efforts.
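A contingency plan can be as simple as wrapping the AI call with retries and a backup path. In this hedged Python sketch, `with_fallback`, `primary`, and `fallback` are all hypothetical names standing in for your actual tool call and backup method:

```python
import time

def with_fallback(primary, fallback, retries=3, delay=1.0):
    """Try `primary` a few times; on repeated failure, use `fallback`.

    `primary` might call an AI tool's API; `fallback` could be a
    simpler local analysis or a queue for later manual review.
    Both are placeholders for whatever your team actually uses.
    """
    for attempt in range(retries):
        try:
            return primary()
        except Exception:
            if attempt < retries - 1:
                time.sleep(delay)  # brief pause before retrying
    return fallback()
```

The point is not the specific code but the habit: every AI-dependent step in your research pipeline should have a defined answer to "what do we do if this tool is down?"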

Pro Tip: If possible, take advantage of a free trial or demo before subscribing to any AI user research tool.
