Abstract
The increasing adoption of collaborative human-artificial intelligence decision-making tools has created a need to explain their recommendations for safe and effective collaboration. We explore how users interact with explanations and why trust-calibration errors occur, using clinical decision-support systems as a case study.
| Original language | English |
|---|---|
| Pages | 28-37 |
| Number of pages | 10 |
| Volume | 54 |
| Issue number | 10 |
| Specialist publication | Computer |
| Publication status | Published - Oct 2021 |