Explainable recommendation: when design meets trust calibration

Human-AI collaborative decision-making tools are increasingly being applied in critical domains such as healthcare. However, these tools are often seen as closed and non-transparent by human decision-makers. An essential requirement for their success is the ability to provide explanations about themselves that are understandable and meaningful to users. While explanations generally have positive connotations, studies have shown that the assumptions behind how users interact and engage with these explanations can introduce trust calibration errors, such as facilitating irrational or less thoughtful agreement or disagreement with the AI recommendation. In this paper, we explore how to support trust calibration through explanation interaction design. Our research method included two main phases. We first conducted a think-aloud study with 16 participants to reveal the main trust calibration errors concerning explainability in human-AI collaborative decision-making tools. We then conducted two co-design sessions with eight participants to identify design principles and techniques for explanations that support trust calibration. As a conclusion of our research, we provide five design principles: design for engagement, challenging habitual actions, attention guidance, friction, and support for training and learning. Our findings are meant to pave the way towards a more integrated framework for designing explanations with trust calibration as a primary goal.

Other Information
Published in: World Wide Web
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0)
See the article on the publisher's website: https://dx.doi.org/10.1007/s11280-021-00916-0

Bibliographic Details
Main Author: Mohammad Naiseh (18513738)
Other Authors: Dena Al-Thani (16864245), Nan Jiang (21252), Raian Ali (12066006)
Published: 2021 (online 2021-08-02)
Subjects: Information and computing sciences; Human-centred computing; Explainable AI; Trust; Trust Calibration; User Centric AI
DOI: 10.1007/s11280-021-00916-0
Repository record: https://figshare.com/articles/journal_contribution/Explainable_recommendation_when_design_meets_trust_calibration/25771947
OAI identifier: oai:figshare.com:article/25771947
Rights: CC BY 4.0 (open access)
Type: Journal contribution (published version)