How the different explanation classes impact trust calibration: The case of clinical decision support systems
Machine learning has made rapid advances in safety-critical applications, such as traffic control, finance, and healthcare. With the criticality of decisions they support and the potential consequences of following their recommendations, it also became critical to provide users with explana...
Saved in:
| Main Author: | Mohammad Naiseh (18513738) (author) |
|---|---|
| Other Authors: | Dena Al-Thani (16864245) (author), Nan Jiang (21252) (author), Raian Ali (12066006) (author) |
| Published: | 2022 |
Similar Items
- Explainable recommendation: when design meets trust calibration
  by: Mohammad Naiseh (18513738)
  Published: (2021)
- A Data-Driven Decision-Making Framework for Fleet Management in the Government Sector of Dubai
  by: ALGHANEM, HANI SUBHI MOHD
  Published: (2024)
- Exploring the Impact of Explainable Artificial Intelligence on Decision-making in Healthcare
  by: MOHAMMAD, AHMAD HASAN
  Published: (2023)
- Towards secure and trusted AI in healthcare: A systematic review of emerging innovations and ethical challenges
  by: Muhammad Mohsin Khan (22303366)
  Published: (2025)
- E-Doc medical decision support system. (c2005)
  by: Bogharian, Norair-Sevag K.
  Published: (2005)