TY - GEN
T1 - Explainable Recommendations in Intelligent Systems
T2 - 14th International Conference on Research Challenges in Information Sciences, RCIS 2020
AU - Naiseh, Mohammad
AU - Jiang, Nan
AU - Ma, Jianbing
AU - Ali, Raian
N1 - Publisher Copyright:
© 2020, Springer Nature Switzerland AG.
PY - 2020
Y1 - 2020
N2 - With the increase in data volume, velocity and types, intelligent human-agent systems have become popular and have been adopted in different application domains, including critical and sensitive areas such as health and security. Humans’ trust in, consent to and receptiveness towards recommendations are the main requirements for the success of such services. Recently, the demand for explaining recommendations to humans has increased, both from the humans interacting with these systems, so that they can make informed decisions, and from owners and system managers, who seek to increase transparency and consequently trust and user retention. Existing systematic reviews in the area of explainable recommendations have focused on the goal of providing explanations, their presentation and their informational content. In this paper, we review the literature with a focus on two user-experience facets of explanations: delivery methods and modalities. We then focus on the risks of explanation for both user experience and decision making. Our review revealed that the delivery of explanations to end-users is mostly designed to accompany the recommendation in push and pull styles, while archiving explanations for later accountability and traceability is still limited. We also found that the emphasis was mainly on the benefits of recommendations, while risks and potential concerns, such as over-reliance on machines, remain a new area to explore.
KW - Explainable artificial intelligence
KW - Explainable recommendations
KW - Human factors in information systems
KW - User-centred design
UR - https://www.scopus.com/pages/publications/85087765799
U2 - 10.1007/978-3-030-50316-1_13
DO - 10.1007/978-3-030-50316-1_13
M3 - Conference contribution
AN - SCOPUS:85087765799
SN - 9783030503154
T3 - Lecture Notes in Business Information Processing
SP - 212
EP - 228
BT - Research Challenges in Information Science - 14th International Conference, RCIS 2020, Proceedings
A2 - Dalpiaz, Fabiano
A2 - Zdravkovic, Jelena
A2 - Loucopoulos, Pericles
PB - Springer
Y2 - 23 September 2020 through 25 September 2020
ER -