Shortcomings of Interpretability Taxonomies for Deep Neural Networks
Publication: Contribution to journal › Conference article › Research › peer-reviewed
Taxonomies are vehicles for thinking about what’s possible, for identifying unconsidered options, as well as for establishing formal relations between entities. We identify several shortcomings in 10 existing taxonomies for interpretability methods for explainable artificial intelligence (XAI), focusing on methods for deep neural networks. The shortcomings include redundancies, incompleteness, and inconsistencies. We design a new taxonomy based on two orthogonal dimensions and show how it can be used to derive results about entire classes of interpretability methods for deep neural networks.
Original language | English |
---|---|
Journal | CEUR Workshop Proceedings |
Volume | 3318 |
ISSN | 1613-0073 |
Status | Published - 2022 |
Event | 2022 International Conference on Information and Knowledge Management Workshops, CIKM-WS 2022 - Atlanta, United States. Duration: 17 Oct 2022 → 21 Oct 2022 |
Conference
Conference | 2022 International Conference on Information and Knowledge Management Workshops, CIKM-WS 2022 |
---|---|
Country | United States |
City | Atlanta |
Period | 17/10/2022 → 21/10/2022 |
Bibliographical note
Publisher Copyright:
© 2022 Copyright for this paper by its authors.