Building blocks of AI trust
Trust in AI systems rests on three fundamental pillars that determine whether users will rely on the technology: ability, reliability, and benevolence. Understanding these pillars helps teams create the transparency that builds an appropriate level of user confidence.
Consider IBM's Watson for Oncology, an AI system designed to help doctors choose cancer treatments. Despite analyzing data from 14,000 patients worldwide, major hospitals dropped the program. Its failure shows what happens when an AI system falls short on any of the three trust factors.[1]
- Ability means the AI can do its job well. Watson could analyze complex cancer cases and suggest treatments. But ability alone doesn't create trust.
- Reliability means the AI works consistently. Watson failed here: Danish doctors reported disagreeing with its recommendations in roughly two out of three cases. When an AI gives unpredictable results, users stop trusting it.
- Benevolence means users believe the AI is working in their interest. Watson couldn't explain why it recommended particular treatments; its reasoning was too opaque for doctors to follow. Without clear rationales, doctors had no basis to believe it served their patients.
These factors are interdependent. Watson's medical knowledge counted for little without consistent performance, and its analysis was useless when doctors couldn't see how it helped their patients.
Pro Tip: When introducing AI features, explicitly address all three trust factors in your messaging, for example by surfacing confidence, consistency, and rationale alongside each suggestion (see the sketch below).
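As an illustration only, here is a minimal TypeScript sketch of what a suggestion payload might look like if a product surfaced all three trust factors to users. The type and field names (`AiSuggestion`, `confidence`, `agreementRate`, `rationale`) are hypothetical, not drawn from Watson or any real API.

```typescript
// Hypothetical payload shape mapping each field to one trust factor.
interface AiSuggestion {
  recommendation: string;  // the model's output
  confidence: number;      // ability: model's self-assessed competence, 0-1
  agreementRate?: number;  // reliability: share of past cases where experts agreed, 0-1
  rationale: string[];     // benevolence: human-readable reasons for the suggestion
}

// Compose user-facing messaging that addresses each factor explicitly.
function trustMessage(s: AiSuggestion): string {
  const lines = [
    `Suggested: ${s.recommendation} (confidence ${(s.confidence * 100).toFixed(0)}%)`,
  ];
  if (s.agreementRate !== undefined) {
    lines.push(
      `Experts agreed with similar suggestions ${(s.agreementRate * 100).toFixed(0)}% of the time.`
    );
  }
  lines.push(`Why: ${s.rationale.join("; ")}`);
  return lines.join("\n");
}

// Example usage with made-up values.
console.log(
  trustMessage({
    recommendation: "Schedule a follow-up scan",
    confidence: 0.87,
    agreementRate: 0.92,
    rationale: ["elevated marker levels", "pattern matches prior cases"],
  })
);
```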
References
- [1] People don’t trust AI – here’s how we can change that, The Conversation