Identifying existing user expectations
Before introducing AI features, teams must understand what mental models users already have. People approach new products with assumptions based on similar experiences, and these existing models strongly influence how they interpret AI behavior.

Consider a running app that recommends routes. Users familiar with GPS navigation expect consistent directions between two points. When the AI suggests different routes based on weather, time of day, or the user's fitness level, it violates these expectations: the variability seems like a bug rather than a feature.

Research methods help uncover these assumptions:
- Observing users interact with current solutions reveals their step-by-step processes.
- Interviews expose the reasoning behind their actions.
- Card sorting exercises show how they categorize and relate concepts.
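Card-sort results like these are often summarized with a co-occurrence count: how many participants grouped two concepts together. A minimal sketch, with made-up cards and groupings, of how that analysis might look:

```python
# Illustrative card-sort analysis. The card names and the participants'
# groupings below are invented for the example, not real study data.
from itertools import combinations
from collections import Counter

# Each participant sorted four concept cards into groups.
participants = [
    [{"routes", "maps"}, {"pace", "heart rate"}],
    [{"routes", "maps", "pace"}, {"heart rate"}],
    [{"routes", "maps"}, {"pace", "heart rate"}],
]

# Count how often each pair of cards landed in the same group.
pairs = Counter()
for groups in participants:
    for group in groups:
        for a, b in combinations(sorted(group), 2):
            pairs[(a, b)] += 1

# Pairs grouped by most participants signal a shared mental model.
for (a, b), n in pairs.most_common():
    print(f"{a} + {b}: grouped by {n} of {len(participants)} participants")
```

Pairs with high counts (here, "maps" and "routes") indicate concepts users expect to live together in the interface.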
Common mismatches include expecting AI to read minds, work perfectly from day one, or never need correction. Users might also assume AI understands context the way humans do: they expect a voice assistant to know they're whispering because the baby is sleeping. Identifying these gaps early prevents frustration later.
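The route-recommendation mismatch can be sketched in a few lines. The functions and the context rules below are hypothetical, meant only to contrast a deterministic GPS-style router with a context-sensitive AI recommender:

```python
# Hypothetical sketch of the expectation mismatch: a GPS-style router
# is deterministic, while an AI recommender varies with context.
# All routing rules here are invented for illustration.

def gps_route(start: str, end: str) -> str:
    # Deterministic: the same inputs always yield the same route.
    return f"shortest path from {start} to {end}"

def ai_route(start: str, end: str, context: dict) -> str:
    # Context-sensitive: weather, time of day, and fitness level
    # all change the recommendation.
    if context["weather"] == "rain":
        return f"sheltered path from {start} to {end}"
    if context["fitness"] == "high" and context["hour"] < 9:
        return f"hilly training path from {start} to {end}"
    return f"shortest path from {start} to {end}"

# The GPS mental model: repeating a query gives one consistent answer.
same_query = {gps_route("home", "park") for _ in range(3)}
print(len(same_query))  # 1 distinct route

# The AI behavior: the same start and end yield different routes.
varied = {
    ai_route("home", "park", {"weather": "rain", "fitness": "low", "hour": 18}),
    ai_route("home", "park", {"weather": "clear", "fitness": "high", "hour": 7}),
}
print(len(varied))  # 2 distinct routes
```

To a user carrying the GPS mental model, the second behavior looks like inconsistency rather than adaptation, which is exactly the gap the research methods above are meant to surface.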