Learning Neural Antiderivatives

Abstract
Neural fields offer continuous, learnable representations that extend beyond traditional discrete formats in visual computing. We study the problem of learning neural representations of repeated antiderivatives directly from a function, a continuous analogue of summed-area tables. Although widely used in discrete domains, such cumulative schemes rely on grids, which prevents their use in continuous neural contexts. We introduce and analyze a range of neural methods for repeated integration, including both adaptations of prior work and novel designs. Our evaluation spans multiple input dimensionalities and integration orders, assessing both reconstruction quality and performance in downstream tasks such as filtering and rendering. Our results enable the integration of classical cumulative operators into modern neural systems and offer insights into learning tasks involving differential and integral operators.
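To make the core idea concrete, here is a minimal, hypothetical sketch of one way to learn a first-order neural antiderivative; it is an illustration of the general principle, not the authors' method. A small MLP field F is trained so that its autodiff derivative dF/dx matches a target signal f, making F an antiderivative of f up to a constant. The architecture, the example signal f, the sampling domain [0, 1], and all hyperparameters below are assumptions chosen for brevity.

import torch

def f(x):
    # Example 1D integrand standing in for the input signal (an assumption).
    return torch.sin(4.0 * x)

# Small MLP F_theta: R -> R acting as the neural field.
field = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(field.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(256, 1, requires_grad=True)   # random samples in [0, 1]
    F = field(x)
    # dF/dx via autodiff; create_graph=True keeps the loss differentiable.
    dFdx = torch.autograd.grad(F.sum(), x, create_graph=True)[0]
    loss = ((dFdx - f(x)) ** 2).mean()           # supervise the derivative
    opt.zero_grad()
    loss.backward()
    opt.step()

After training, field(b) - field(a) approximates the definite integral of f over [a, b], the continuous analogue of a summed-area-table lookup; repeated (higher-order) antiderivatives, as studied in the paper, would chain this construction.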

CCS Concepts: Computing methodologies → Machine learning algorithms; Image manipulation; Rendering

        
@inproceedings{10.2312:vmv.20251230,
  booktitle = {Vision, Modeling, and Visualization},
  editor    = {Egger, Bernhard and Günther, Tobias},
  title     = {{Learning Neural Antiderivatives}},
  author    = {Rubab, Fizza and Nsampi, Ntumba Elie and Balint, Martin and Mujkanovic, Felix and Seidel, Hans-Peter and Ritschel, Tobias and Leimkühler, Thomas},
  year      = {2025},
  publisher = {The Eurographics Association},
  isbn      = {978-3-03868-294-3},
  doi       = {10.2312/vmv.20251230}
}