Machine Learning Methods in Visualisation for Big Data 2021
Now showing 3 of 3 items
Item
Revealing Multimodality in Ensemble Weather Prediction (The Eurographics Association, 2021)
Galmiche, Natacha; Hauser, Helwig; Spengler, Thomas; Spensberger, Clemens; Brun, Morten; Blaser, Nello; Archambault, Daniel and Nabney, Ian and Peltonen, Jaakko
Ensemble methods are widely used to simulate complex non-linear systems and to estimate forecast uncertainty. However, visualizing and analyzing ensemble data is challenging, particularly when multimodality arises, i.e., when there are several distinct likely outcomes. We propose a graph-based approach that explores multimodality in univariate ensemble data from weather prediction. Our solution utilizes clustering and a novel concept of life span associated with each cluster. We applied our method to historical predictions of extreme weather events and show that it aids the understanding of the respective ensemble forecasts.

Item
MLVis 2021: Frontmatter (The Eurographics Association, 2021)
Archambault, Daniel; Nabney, Ian; Peltonen, Jaakko; Archambault, Daniel and Nabney, Ian and Peltonen, Jaakko

Item
Controllably Sparse Perturbations of Robust Classifiers for Explaining Predictions and Probing Learned Concepts (The Eurographics Association, 2021)
Roberts, Jay; Tsiligkaridis, Theodoros; Archambault, Daniel and Nabney, Ian and Peltonen, Jaakko
Explaining the predictions of a deep neural network (DNN) in image classification is an active area of research. Many methods focus on localizing pixels, or groups of pixels, that maximize a relevance metric for the prediction. Others build local "proxy" explainers that account for an individual prediction of a model. We explore "why" a model made a prediction by perturbing inputs to robust classifiers and interpreting the semantically meaningful results. For such an explanation to be useful to humans, it should be sparse; however, generating sparse perturbations can be computationally expensive and infeasible on high-resolution data.
Here we introduce controllably sparse explanations that can be generated efficiently on higher-resolution data to provide improved counterfactual explanations. Further, we use these controllably sparse explanations to probe what the robust classifier has learned. Such explanations could provide insight for model developers and assist in detecting dataset bias.
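The core idea of a sparsity-controlled perturbation can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the authors' procedure: a linear softmax classifier stands in for a robust DNN, and the function name `sparse_perturbation` and its parameters `k` (sparsity budget) and `step` are hypothetical. The loop performs gradient ascent on the log-probability of a chosen target class, then projects the perturbation so that at most k components are nonzero:

```python
import numpy as np

def sparse_perturbation(x, w, b, target, k=5, step=0.1, iters=50):
    """Hypothetical sketch: gradient ascent toward class `target` for a
    linear softmax model (logits = w @ x + b), with a top-k projection
    after each step so at most k input components are perturbed."""
    delta = np.zeros_like(x)
    for _ in range(iters):
        logits = w @ (x + delta) + b
        p = np.exp(logits - logits.max())
        p /= p.sum()
        # gradient of log p[target] w.r.t. the input for a linear model
        grad = w[target] - p @ w
        delta += step * grad
        # sparsity control: zero out all but the k largest-magnitude entries
        if np.count_nonzero(delta) > k:
            idx = np.argsort(np.abs(delta))[:-k]
            delta[idx] = 0.0
    return delta
```

Adjusting k trades off how localized the resulting counterfactual is against how strongly it shifts the prediction; for a robust image classifier the same projection step would apply per pixel rather than per feature.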