Tamp-X: Attacking explainable natural language classifiers through tampered activations
While Deep Neural Networks (DNNs) have been instrumental in achieving state-of-the-art results on various Natural Language Processing (NLP) tasks, recent works have shown that the decisions made by DNNs cannot always be trusted. Recently, Explainable Artificial Intelligence...
| Main Author: | Hassan Ali (3348749) (author) |
|---|---|
| Other Authors: | Muhammad Suleman Khan (17562612) (author), Ala Al-Fuqaha (4434340) (author), Junaid Qadir (16494902) (author) |
| Published: | 2022 |
Similar Items
- A review of explainable AI techniques and their evaluation in mammography for breast cancer screening
  by: Noora Shifa (21392996)
  Published: (2025)
- Exploring the Impact of Explainable Artificial Intelligence on Decision-making in Healthcare
  by: Mohammad, Ahmad Hasan
  Published: (2023)
- Transforming Dermatopathology With AI: Addressing Bias, Enhancing Interpretability, and Shaping Future Diagnostics
  by: Diala Ra'Ed Kamal Kakish (22330627)
  Published: (2025)
- Con-Detect: Detecting Adversarially Perturbed Natural Language Inputs to Deep Classifiers Through Holistic Analysis
  by: Hassan Ali (3348749)
  Published: (2023)
- Defense against adversarial attacks: robust and efficient compressed optimized neural networks
  by: Insaf Kraidia (19198012)
  Published: (2024)