CDF-Based Importance Sampling and Visualization for Neural Network Training
dc.contributor.author | Knutsson, Alex | en_US |
dc.contributor.author | Unnebäck, Jakob | en_US |
dc.contributor.author | Jönsson, Daniel | en_US |
dc.contributor.author | Eilertsen, Gabriel | en_US |
dc.contributor.editor | Hansen, Christian | en_US |
dc.contributor.editor | Procter, James | en_US |
dc.contributor.editor | Raidou, Renata G. | en_US |
dc.contributor.editor | Jönsson, Daniel | en_US |
dc.contributor.editor | Höllt, Thomas | en_US |
dc.date.accessioned | 2023-09-19T11:31:48Z | |
dc.date.available | 2023-09-19T11:31:48Z | |
dc.date.issued | 2023 | |
dc.description.abstract | Training a deep neural network is computationally expensive, but achieving the same network performance with less computation is possible if the training data is carefully chosen. However, selecting input samples during training is challenging as their true importance for the optimization is unknown. Furthermore, evaluation of the importance of individual samples must be computationally efficient and unbiased. In this paper, we present a new input data importance sampling strategy for reducing the training time of deep neural networks. We investigate different importance metrics that can be efficiently retrieved as they are available during training, i.e., the training loss and gradient norm. We find that choosing only samples with a large loss or gradient norm, i.e., samples that are hard for the network to learn, is not optimal for network performance. Instead, we introduce an importance sampling strategy that selects samples based on the cumulative distribution function of the loss and gradient norm, thereby making it more likely to choose hard samples while still including easy ones. The behavior of the proposed strategy is first analyzed on a synthetic dataset, and then evaluated on the classification of malignant cancer in digital pathology image patches. As pathology images contain many repetitive patterns, there could be significant gains from focusing on features that contribute more strongly to the optimization. Finally, we show how the importance sampling process can be used to gain insights into the input data through visualization of samples that are found most or least useful for training. | en_US |
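The strategy described in the abstract can be illustrated with a minimal sketch, assuming the importance of each sample is derived from its most recent training loss (the gradient norm could be used analogously). The function and variable names below (cdf_sampling_probabilities, sample_batch) are hypothetical and not taken from the paper; the sketch only shows the general idea of mapping losses through their empirical CDF so that hard (high-loss) samples are drawn more often while easy samples keep a nonzero probability.

    import numpy as np

    def cdf_sampling_probabilities(losses):
        # Rank each sample by loss (0 = smallest loss, N-1 = largest loss).
        ranks = losses.argsort().argsort()
        # Empirical CDF value in (0, 1]; larger loss -> value closer to 1.
        cdf = (ranks + 1) / len(losses)
        # Normalize so the CDF values form a proper sampling distribution.
        return cdf / cdf.sum()

    def sample_batch(losses, batch_size, rng=None):
        # Draw a batch that favors high-loss samples without excluding easy ones.
        rng = rng if rng is not None else np.random.default_rng()
        probs = cdf_sampling_probabilities(losses)
        return rng.choice(len(losses), size=batch_size, replace=False, p=probs)

    # Example (illustrative only): pick 64 of 10 000 samples using per-sample losses.
    losses = np.random.rand(10_000)
    batch_indices = sample_batch(losses, batch_size=64)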
dc.description.sectionheaders | Radiology and Histopathology | |
dc.description.seriesinformation | Eurographics Workshop on Visual Computing for Biology and Medicine | |
dc.identifier.doi | 10.2312/vcbm.20231212 | |
dc.identifier.isbn | 978-3-03868-216-5 | |
dc.identifier.issn | 2070-5786 | |
dc.identifier.pages | 51-55 | |
dc.identifier.pages | 5 pages | |
dc.identifier.uri | https://doi.org/10.2312/vcbm.20231212 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/vcbm20231212 | |
dc.publisher | The Eurographics Association | en_US |
dc.rights | Attribution 4.0 International License | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.subject | CCS Concepts: Computing methodologies → Neural networks; Human-centered computing → Visualization techniques | |
dc.subject | Computing methodologies | |
dc.subject | Neural networks | |
dc.subject | Human-centered computing | |
dc.subject | Visualization techniques | |
dc.title | CDF-Based Importance Sampling and Visualization for Neural Network Training | en_US |