Exploring Classifiers with Differentiable Decision Boundary Maps
dc.contributor.author | Machado, Alister | en_US |
dc.contributor.author | Behrisch, Michael | en_US |
dc.contributor.author | Telea, Alexandru | en_US |
dc.contributor.editor | Aigner, Wolfgang | en_US |
dc.contributor.editor | Archambault, Daniel | en_US |
dc.contributor.editor | Bujack, Roxana | en_US |
dc.date.accessioned | 2024-05-21T08:19:54Z | |
dc.date.available | 2024-05-21T08:19:54Z | |
dc.date.issued | 2024 | |
dc.description.abstract | Explaining the decisions of Machine Learning (ML), and especially Deep Learning (DL), classifiers is a subject of interest across fields due to the increasing ubiquity of such models in computing systems. As models grow more complex, relying on sophisticated machinery to recognize data patterns, explaining their behavior becomes more difficult. Directly visualizing classifier behavior is in general infeasible, as classifiers partition the data space, which is typically high dimensional. In recent years, Decision Boundary Maps (DBMs) have been developed to address this, taking advantage of projection and inverse projection techniques. By mapping 2D points back to the data space and subsequently running the classifier on them, DBMs represent a slice of classifier outputs. However, we recognize that DBMs without additional explanatory views are limited in their applicability. In this work, we propose augmenting the naive DBM generation process with views that provide more in-depth information about classifier behavior, such as whether the training procedure is locally stable. We describe our proposed views, which we term Differentiable Decision Boundary Maps, over a running example, explaining how our work enables drawing new and useful conclusions from these dense maps. We further demonstrate the value of these conclusions by showing how useful they would be in carrying out or preventing a dataset poisoning attack. We thus provide evidence of the ability of our proposed views to make DBMs significantly more trustworthy and interpretable, increasing their utility as a model understanding tool. | en_US |
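The abstract describes the naive DBM-generation process: project the data to 2D, map each pixel of a 2D grid back to the data space with an inverse projection, and color it by the classifier's prediction. The following is a minimal sketch of that loop, not the paper's implementation; the use of PCA as the (inverse-)projection and logistic regression as the classifier are illustrative assumptions, since the record does not name the techniques used.

# Minimal sketch of naive DBM generation, assuming PCA as projection/inverse
# projection and logistic regression as the classifier (illustrative choices only).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)

clf = LogisticRegression(max_iter=2000).fit(X, y)   # classifier trained in data space
proj = PCA(n_components=2).fit(X)                   # projection with an inverse_transform
X2 = proj.transform(X)                              # 2D view of the training data

# Regular 2D grid of pixels covering the projected data.
res = 200
xs = np.linspace(X2[:, 0].min(), X2[:, 0].max(), res)
ys = np.linspace(X2[:, 1].min(), X2[:, 1].max(), res)
grid_2d = np.array([[x, yv] for yv in ys for x in xs])

# Map each 2D pixel back to data space and ask the classifier for a label;
# the resulting label image is the (naive) decision boundary map.
grid_nd = proj.inverse_transform(grid_2d)
dbm = clf.predict(grid_nd).reshape(res, res)

The label image dbm can then be displayed with one color per class; the paper's contribution is to augment this basic map with additional explanatory views rather than to change this generation step itself.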
dc.description.number | 3 | |
dc.description.sectionheaders | Honorable Mention | |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.volume | 43 | |
dc.identifier.doi | 10.1111/cgf.15109 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.pages | 12 pages | |
dc.identifier.uri | https://doi.org/10.1111/cgf.15109 | |
dc.identifier.uri | https://diglib.eg.org/handle/10.1111/cgf15109 | |
dc.publisher | The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.rights | Attribution 4.0 International License | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.subject | CCS Concepts: Human-centered computing->Visualization techniques; Computing methodologies->Machine learning; Mathematics of computing->Dimensionality reduction | |
dc.subject | Human centered computing | |
dc.subject | Visualization techniques | |
dc.subject | Computing methodologies | |
dc.subject | Machine learning | |
dc.subject | Mathematics of computing | |
dc.subject | Dimensionality reduction | |
dc.title | Exploring Classifiers with Differentiable Decision Boundary Maps | en_US |