Research techniques for uncovering mental models
Specialized research techniques help uncover the mental models users bring to AI interactions. These approaches reveal not just what users do, but how they conceptualize the system's operation:
- Think-aloud protocols. Ask participants to verbalize their understanding of how the system works while completing tasks. Prompting users to explain why the system produced specific outputs, and what information it might be using, reveals misconceptions that observing behavior alone cannot surface.[1]
- Mental model drawing exercises. Invite users to sketch or visualize how they believe the AI system works. Start with unstructured doodling where users freely represent their understanding, then move to structured mapping using standardized symbols for data flows, decisions, and interactions.[2]
- Expectation testing. Present users with hypothetical scenarios to probe their predictions about AI behavior. Ask "What do you think would happen if..." questions about unusual inputs to reveal assumptions about system boundaries.
- Spontaneous analogies. Listen for metaphors that users generate when describing AI systems. These provide rich insights into their underlying mental frameworks without direct questioning.
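For expectation testing in particular, it helps to log each probe in a structured way so that mismatches between a participant's prediction and the system's actual behavior can be flagged for follow-up. The sketch below is illustrative only; the `ExpectationProbe` fields and the probe wordings are hypothetical, not a standardized instrument.

```python
from dataclasses import dataclass

@dataclass
class ExpectationProbe:
    """One 'What do you think would happen if...' scenario."""
    scenario: str         # the hypothetical input described to the participant
    prediction: str = ""  # participant's predicted system behavior
    actual: str = ""      # what the system actually does (established beforehand)

def mismatches(probes):
    """Return probes where prediction diverges from actual behavior --
    candidate mental-model gaps worth a follow-up question."""
    return [p for p in probes if p.prediction and p.prediction != p.actual]

# Example session log (all content hypothetical):
session = [
    ExpectationProbe(
        scenario="You paste text in a language the assistant rarely sees",
        prediction="It refuses",
        actual="It attempts a translation",
    ),
    ExpectationProbe(
        scenario="You ask about an event from yesterday",
        prediction="It answers from a live news feed",
        actual="It explains its knowledge cutoff",
    ),
]

for probe in mismatches(session):
    print(f"Gap: {probe.scenario!r} -- predicted {probe.prediction!r}, actual {probe.actual!r}")
```

A log like this also makes it easy to compare gap patterns across participants, which can reveal shared misconceptions rather than one-off misunderstandings.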
Pro Tip: During sketching exercises, encourage users to "think broadly" about connections between inputs and outputs, and reassure them that there is no wrong way to represent their understanding; anxiety about drawing ability suppresses exactly the rough, revealing sketches you want.