Explainable recommendation: when design meets trust calibration
Human-AI collaborative decision-making tools are increasingly applied in critical domains such as healthcare. However, human decision-makers often see these tools as closed and opaque. An essential requirement for their success is the ability to provide expl...
Saved in:
| Main Author: | Mohammad Naiseh (18513738) (author) |
|---|---|
| Other Authors: | Dena Al-Thani (16864245) (author), Nan Jiang (21252) (author), Raian Ali (12066006) (author) |
| Published: | 2021 |
Similar Items
- How the different explanation classes impact trust calibration: The case of clinical decision support systems
  by: Mohammad Naiseh (18513738)
  Published: (2022)
- Towards secure and trusted AI in healthcare: A systematic review of emerging innovations and ethical challenges
  by: Muhammad Mohsin Khan (22303366)
  Published: (2025)
- Novel interpretable and robust web-based AI platform for phishing email detection
  by: Abdulla Al-Subaiey (19757007)
  Published: (2024)
- Trust matters: A global perspective on the influence of trust on bank market risk
  by: Omneya Abdelsalam (19325635)
  Published: (2024)
- Exploring the Impact of Explainable Artificial Intelligence on Decision-making in Healthcare
  by: MOHAMMAD, AHMAD HASAN
  Published: (2023)