
Every AI system has distinct strengths and surprising blind spots that directly impact user experience. Today's AI excels at recognizing complex patterns in images, text, and behavior, making it powerful for tasks like content recommendations or speech recognition. Yet these same systems can produce baffling errors or "hallucinations" when encountering ambiguous inputs or scenarios outside their training data.

Today's AI can compose music, create artwork, translate between languages, and recommend content aligned with user preferences. However, these systems still struggle with genuine understanding, common sense reasoning, and adapting to novel situations. They can produce confident but entirely incorrect responses when operating outside their training parameters.

Understanding what current AI technology can and cannot do helps create more resilient and effective AI-powered user experiences.

Exercise #1

Recognition vs. reasoning abilities

Human supervision in AI remains crucial, especially when it comes to the gap between recognition and reasoning. Many AI systems are excellent at recognition tasks. They can detect patterns, generate fluent text, and mimic structure based on massive datasets. But recognition is not reasoning.

Reasoning involves understanding context, making inferences, and navigating ambiguity. Recent tools like Google Gemini 2.5 show early signs of simulating reasoning through multimodal inputs. For example, it can suggest what to wear based on weather data. Still, these are approximations, not true comprehension.

Human input is vital for quality control, ensuring outputs align with goals and standards. Humans bring cultural awareness, ethical judgment, and creative insight that AI still lacks. Supervision also allows content to reflect specific brand voices and adapt to nuanced use cases.

Ultimately, human oversight bridges the gap between artificial pattern-matching and meaningful, responsible outcomes.

Exercise #2

AI's pattern detection strengths

Pattern detection forms the foundation of AI's most impressive capabilities. Modern AI systems excel at discovering recurring structures in vast datasets that humans might miss entirely. This ability enables AI to recognize objects in images, identify sentiment in text, detect financial fraud, and predict equipment failures before they occur. These pattern recognition capabilities work across data types, from visual and audio information to user behavior sequences and numerical trends. For designers, this capability offers unprecedented opportunities to personalize interfaces, prioritize content, and anticipate user needs. For example:

  • An e-commerce site can notice subtle purchasing patterns to recommend relevant products.
  • A TV streaming platform can show different movie thumbnails to different users based on what images they've clicked on before.
  • A productivity tool can recognize task completion patterns to suggest workflow improvements.

Understanding these pattern recognition strengths allows designers to create more adaptive, responsive interfaces that feel remarkably attuned to user behavior.
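To make the idea concrete, here is a minimal sketch of pattern-based recommendation that simply counts which products appear together in purchase histories. The data, product names, and the recommend function are hypothetical illustrations, not any particular e-commerce system's logic.

```python
# A minimal sketch of co-occurrence-based recommendation.
# All data and names below are made up for illustration.
from collections import Counter
from itertools import combinations

purchase_histories = [
    ["yoga_mat", "water_bottle", "resistance_bands"],
    ["yoga_mat", "water_bottle", "foam_roller"],
    ["running_shoes", "water_bottle"],
]

# Count how often each pair of products appears in the same basket.
co_occurrence = Counter()
for basket in purchase_histories:
    for a, b in combinations(sorted(set(basket)), 2):
        co_occurrence[(a, b)] += 1

def recommend(product: str, top_n: int = 3) -> list[str]:
    """Suggest products that most often co-occur with the given one."""
    scores = Counter()
    for (a, b), count in co_occurrence.items():
        if a == product:
            scores[b] += count
        elif b == product:
            scores[a] += count
    return [item for item, _ in scores.most_common(top_n)]

print(recommend("yoga_mat"))  # e.g. ['water_bottle', 'resistance_bands', 'foam_roller']
```

Real systems draw on far richer signals (views, dwell time, learned embeddings), but the principle is the same: surface items whose usage patterns overlap with what the user already does.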

Exercise #3

Contextual understanding

Modern AI systems have evolved beyond simple prediction to incorporate increasingly sophisticated contextual understanding. This capability represents a significant advancement in how systems interpret and respond to user needs.

Google Maps doesn't just calculate the fastest route but considers real-time traffic conditions, your typical driving speed, and even frequent destinations when suggesting navigation options. Weather apps like AccuWeather go beyond basic forecasts by combining weather data with your location, past activities, and calendar events to recommend whether to bring an umbrella or reschedule outdoor plans.[1]

However, this contextual understanding has clear boundaries. AI systems construct approximations of context through correlation rather than causation. They recognize statistical patterns across multiple data sources without truly "understanding" underlying relationships. For designers, this means carefully selecting which contextual factors genuinely enhance user experience while creating transparent indicators when systems operate with recognized limitations.
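As a rough sketch of how a few contextual signals might be combined, and of how a system can expose which signals it relied on, consider the hypothetical function below. The weights, thresholds, and field names are illustrative assumptions, not the behavior of Google Maps or AccuWeather.

```python
# A hedged sketch of context-aware suggestion logic with transparent signals.
# Weights and thresholds are arbitrary assumptions for illustration.
def suggest_umbrella(context: dict) -> dict:
    """Combine contextual signals and report which ones were used."""
    signals_used = []
    score = 0.0

    if "rain_probability" in context:
        score += 0.7 * context["rain_probability"]
        signals_used.append("rain_probability")
    if context.get("outdoor_event_today"):
        score += 0.3
        signals_used.append("calendar")

    return {
        "suggest_umbrella": score >= 0.5,
        "confidence": round(min(score, 1.0), 2),
        "based_on": signals_used,  # surfaced in the UI so users see why
    }

print(suggest_umbrella({"rain_probability": 0.8, "outdoor_event_today": True}))
```

Returning the list of signals alongside the suggestion is one way to build the transparent indicators mentioned above: the interface can show not just the recommendation but the context it was based on.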

Exercise #4

Generative AI capabilities and limitations

Generative AI is a powerful part of modern artificial intelligence. Unlike tools that only analyze existing data, generative models can create completely new content, for example:

  • Large Language Models (LLMs) like ChatGPT and Claude can write text that sounds human
  • Tools like Midjourney and DALL·E can generate realistic images
  • Apps such as DeepBrain can produce video content
  • Speech generators can mimic human voices with surprising accuracy

These systems work by recognizing patterns in huge amounts of data and mixing them in new ways. This makes them useful for creative tasks like designing visuals, writing copy, or developing prototypes.

But generative AI also has clear limits. It doesn’t really understand the content it creates. Without tools like Retrieval-Augmented Generation (RAG), it can make things up, producing answers that sound right but aren’t true. It can also lose consistency in longer texts or repeat patterns instead of generating truly original ideas.
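To show roughly how retrieval grounding works, the sketch below looks up reference passages first and then asks a model to answer using only that text. Both search_documents and call_llm are hypothetical stand-ins; in a real system they would be a vector or keyword retriever and an actual LLM API call.

```python
# A minimal, hypothetical sketch of the Retrieval-Augmented Generation idea:
# retrieve reference text first, then generate an answer grounded in it.
def search_documents(query: str, top_k: int = 3) -> list[str]:
    # Stand-in retriever: a real system would use vector or keyword search.
    corpus = [
        "Refunds are available within 30 days of purchase.",
        "Standard shipping takes 3 to 5 business days.",
    ]
    words = query.lower().split()
    return [p for p in corpus if any(w in p.lower() for w in words)][:top_k]

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call so the sketch runs end to end.
    return "[model response grounded in the retrieved context]"

def answer_with_rag(question: str) -> str:
    passages = search_documents(question)               # retrieval step
    context = "\n\n".join(passages) or "No relevant documents found."
    prompt = (
        "Answer using only the context below. If the context does not "
        "contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)                             # generation step

print(answer_with_rag("How long do refunds take?"))
```

The key design choice is that the model is asked to stay inside the retrieved context and admit when the answer isn't there, which reduces (but does not eliminate) fabricated responses.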

Image generators, for example, often struggle with small but telling details. A common giveaway is how they draw hands. AI might add too many fingers or twist them in unnatural ways. Spotting these errors helps people recognize when an image was made by a machine.[2]

Exercise #5

Data bias: origins and impacts

Data bias is one of AI's most significant limitations; it arises from several sources and has serious effects on user experience. Bias enters AI systems mainly through training data that doesn't properly represent certain groups or that contains historical prejudices:

  • Selection bias happens when data collection methods leave out certain populations, like a speech recognition system trained mostly on native English speakers that struggles with accents.
  • Confirmation bias occurs when systems are tuned toward expected outcomes, such as a hiring algorithm that favors candidates resembling previously successful employees.
  • Measurement bias emerges when the metrics used don't truly reflect real-world goals, like a content recommendation system optimized for clicks rather than user satisfaction.
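One lightweight way to start catching selection bias is to check how groups are represented in the training data before any model is trained. The sketch below is a hypothetical illustration with made-up samples and an arbitrary 25% threshold; real audits use much richer demographic and task-specific breakdowns.

```python
# A hedged sketch of a simple representation check on training data.
# The samples, labels, and 25% threshold are illustrative assumptions.
from collections import Counter

training_samples = [
    {"speaker_accent": "US English"},
    {"speaker_accent": "US English"},
    {"speaker_accent": "US English"},
    {"speaker_accent": "Indian English"},
    {"speaker_accent": "Nigerian English"},
]

counts = Counter(s["speaker_accent"] for s in training_samples)
total = sum(counts.values())

for group, n in counts.items():
    share = n / total
    flag = "  <-- possible representation gap" if share < 0.25 else ""
    print(f"{group}: {share:.0%}{flag}")
```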

Addressing data bias requires deliberate effort from everyone involved in AI development and implementation.

Product managers need to prioritize fairness in requirements, data scientists must carefully evaluate training datasets for representation gaps, developers should implement diverse testing methods, and designers need to create feedback systems that capture different perspectives. Together, these professionals must build clear indicators when systems operate with known limitations or uncertainties.

Pro Tip: Create diverse testing scenarios explicitly designed to uncover potential bias in AI features before launching to users.

Exercise #6

Unpredictability and edge cases

AI unpredictability stems largely from how modern systems handle edge cases, situations falling outside their training distribution. Unlike traditional software with explicit rules, AI systems learn patterns from examples, creating implicit rather than explicit logic. This approach excels with common scenarios but produces unpredictable results when encountering unfamiliar inputs.

A voice assistant trained primarily on standard accents may fail unpredictably with regional dialects. A content moderation system might misclassify harmless but unusual expressions, for instance, flagging cultural idioms as inappropriate, removing artistic nudity as explicit content, or blocking health-related discussions as violations of community standards simply because these patterns weren't well-represented in training data.

This unpredictability poses unique challenges for professionals accustomed to deterministic systems. Edge cases that might affect only a tiny percentage of users in traditional software can trigger spectacular AI failures affecting entire user segments. Addressing this requires robust testing across diverse scenarios, monitoring systems for drift from expected behavior, implementing graceful fallbacks when confidence is low, and designing transparent communication about system limitations.
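One of those monitoring pieces can be surprisingly simple: track how often the model's confidence drops below a threshold in production and compare that rate against a baseline measured during evaluation. The function, thresholds, and numbers below are illustrative assumptions, not a standard metric.

```python
# A rough sketch of drift monitoring via the rate of low-confidence predictions.
# Baseline rate, threshold, and tolerance are hypothetical values.
def drift_alert(recent_confidences: list[float],
                baseline_low_conf_rate: float = 0.05,
                low_conf_threshold: float = 0.6,
                tolerance: float = 2.0) -> bool:
    """Alert if low-confidence predictions are far more common than expected."""
    low = sum(1 for c in recent_confidences if c < low_conf_threshold)
    rate = low / max(len(recent_confidences), 1)
    return rate > baseline_low_conf_rate * tolerance

recent = [0.92, 0.41, 0.55, 0.88, 0.37, 0.95, 0.52, 0.48]
print(drift_alert(recent))  # True: far more low-confidence cases than the baseline
```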

Exercise #7

Overfitting: when models learn too well

Overfitting lets a model perform very well on training data, but its performance on new or real-world data drops sharply. This happens because the model memorizes training inputs instead of learning patterns that generalize. Imagine an AI trained to distinguish healthy trees from sick ones: it might learn an overly complex rule that perfectly separates the training examples but fails on new trees in slightly different positions. A recommendation engine might memorize existing users' preferences without capturing the underlying principles that would help it serve newcomers.

Data scientists can address this through technical solutions like regularization and cross-validation, while product managers should set realistic expectations about system capabilities. Developers need to build mechanisms to detect when the AI operates outside its comfort zone, and designers should create interfaces that acknowledge uncertainty rather than projecting false confidence. The key is creating systems that generalize well, finding the sweet spot between underfitting (too simple to capture patterns) and overfitting (too complex, memorizing rather than learning).[3]
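A common way to see the gap described above is to compare training error and validation error directly. The synthetic example below fits a modest and an overly flexible polynomial to noisy data; with data like this, the flexible model typically shows near-zero training error but noticeably worse validation error.

```python
# A minimal sketch of spotting overfitting by comparing training and
# validation error on synthetic data. Degrees and noise are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 10)
x_val = np.linspace(0.05, 0.95, 10)
y_val = np.sin(2 * np.pi * x_val) + rng.normal(0, 0.2, 10)

for degree in (3, 9):  # degree 9 can interpolate all 10 training points
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    val_err = np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)
    print(f"degree {degree}: train MSE {train_err:.3f}, validation MSE {val_err:.3f}")
```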

Pro Tip: Include fallback options and clear confidence indicators in AI interfaces to handle potential overfitting gracefully.

Exercise #8

Designing for graceful AI failure

Unlike traditional interfaces, where failures are rare and predictable, AI systems regularly encounter scenarios that exceed their capabilities. Smart design anticipates these moments and creates thoughtful fallback experiences. Consider a language translation app that encounters medical terminology it doesn't recognize. Rather than providing an incorrect translation that could have serious consequences, it should clearly indicate its uncertainty and suggest alternatives, like consulting a specialist.

Product teams should implement confidence thresholds that trigger different paths when uncertainty is high:

  • Designers can incorporate visual indicators like confidence meters.
  • Developers build in detection mechanisms for edge cases.
  • Product managers should prioritize transparent communication about limitations rather than overpromising capabilities.

When an AI system isn't sure about something, it should say so clearly. Gradually introducing AI features with progressive disclosure helps manage user expectations. Most importantly, users should always have ways to correct the system, provide feedback, and reach human help when needed. By planning for failure, teams actually build more trust with users, because the system visibly knows its own limits.
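Here is a hedged sketch of what that threshold-based routing might look like for the translation example above. The function, thresholds, and messages are hypothetical, but the pattern is the core idea: high confidence shows the result, medium confidence adds a caveat, and low confidence hands off to a human.

```python
# A hypothetical sketch of confidence-threshold routing for an AI feature.
# Thresholds and copy are illustrative assumptions, not a real product's values.
def present_translation(text: str, translation: str, confidence: float) -> dict:
    if confidence >= 0.85:
        return {"show": translation, "note": None}
    if confidence >= 0.60:
        return {"show": translation,
                "note": "This translation may be inaccurate. Tap to see alternatives."}
    return {"show": None,
            "note": "We can't translate this reliably. "
                    "Consider asking a professional translator."}

# Low confidence triggers the fallback path instead of a confident-sounding guess.
print(present_translation("myocardial infarction", "???", confidence=0.32))
```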
