A Bag of Tricks for Efficient Implicit Neural Point Clouds

dc.contributor.authorHahlbohm, Florianen_US
dc.contributor.authorFranke, Linusen_US
dc.contributor.authorOverkämping, Leonen_US
dc.contributor.authorWespe, Paulaen_US
dc.contributor.authorCastillo, Susanaen_US
dc.contributor.authorEisemann, Martinen_US
dc.contributor.authorMagnor, Marcusen_US
dc.contributor.editorEgger, Bernharden_US
dc.contributor.editorGünther, Tobiasen_US
dc.date.accessioned2025-09-24T10:37:00Z
dc.date.available2025-09-24T10:37:00Z
dc.date.issued2025
dc.description.abstractImplicit Neural Point Cloud (INPC) is a recent hybrid representation that combines the expressiveness of neural fields with the efficiency of point-based rendering, achieving state-of-the-art image quality in novel view synthesis. However, as with other high-quality approaches that query neural networks during rendering, the practical usability of INPC is limited by comparatively slow rendering. In this work, we present a collection of optimizations that significantly improve both the training and inference performance of INPC without sacrificing visual fidelity. The most significant modifications are an improved rasterizer implementation, more effective sampling techniques, and the incorporation of pre-training for the convolutional neural network used for hole-filling. Furthermore, we demonstrate that points can be modeled as small Gaussians during inference to further improve quality in extrapolated views, e.g., close-ups of the scene. We design our implementations to be broadly applicable beyond INPC and systematically evaluate each modification in a series of experiments. Our optimized INPC pipeline achieves up to 25% faster training, 2× faster rendering, and 20% reduced VRAM usage paired with slight image quality improvements.en_US
dc.description.sectionheadersNeural and Differentiable Rendering
dc.description.seriesinformationVision, Modeling, and Visualization
dc.identifier.doi10.2312/vmv.20251229
dc.identifier.isbn978-3-03868-294-3
dc.identifier.pages8 pages
dc.identifier.urihttps://doi.org/10.2312/vmv.20251229
dc.identifier.urihttps://diglib.eg.org/handle/10.2312/vmv20251229
dc.publisherThe Eurographics Associationen_US
dc.rightsAttribution 4.0 International License
dc.rights.urihttps://creativecommons.org/licenses/by/4.0/
dc.subjectCCS Concepts: Computing methodologies → Image-based rendering; Rasterization; Point-based models
dc.subjectComputing methodologies → Image-based rendering
dc.subjectRasterization
dc.subjectPoint-based models
dc.titleA Bag of Tricks for Efficient Implicit Neural Point Cloudsen_US
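Note: The abstract above mentions that points can be modeled as small Gaussians during inference to improve quality in extrapolated, e.g., close-up views. The following is a minimal, hypothetical PyTorch sketch of that general idea: each projected point is splatted into the output image with a small isotropic 2D Gaussian footprint instead of a single pixel. All names (splat_gaussian_points, sigma_px, radius) are illustrative assumptions and this is not the paper's optimized rasterizer.

# Sketch only: splat each point as a small isotropic Gaussian in screen space.
import torch

def splat_gaussian_points(xy_px, feat, alpha, H, W, sigma_px=0.7, radius=2):
    """xy_px: (N,2) projected positions in pixels, feat: (N,C) point features,
    alpha: (N,) per-point opacity. Returns an (H,W,C) normalized splat image."""
    C = feat.shape[1]
    img = torch.zeros(H, W, C)
    wsum = torch.zeros(H, W, 1)
    # Visit a small fixed pixel footprint around every point.
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            px = (xy_px[:, 0] + dx).round().long().clamp(0, W - 1)
            py = (xy_px[:, 1] + dy).round().long().clamp(0, H - 1)
            # Gaussian weight of this footprint pixel w.r.t. the point center.
            d2 = (px.float() - xy_px[:, 0]) ** 2 + (py.float() - xy_px[:, 1]) ** 2
            w = alpha * torch.exp(-0.5 * d2 / sigma_px ** 2)  # (N,)
            img.index_put_((py, px), w[:, None] * feat, accumulate=True)
            wsum.index_put_((py, px), w[:, None], accumulate=True)
    return img / wsum.clamp_min(1e-8)

# Example usage: 10k random points splatted into a 128x128, 3-channel image.
pts = torch.rand(10_000, 2) * 128
out = splat_gaussian_points(pts, torch.rand(10_000, 3), torch.ones(10_000), 128, 128)

In this simplified form the splat is a weighted average without depth ordering or alpha blending; it only illustrates how a Gaussian footprint spreads each point's contribution over neighboring pixels, which helps cover gaps in close-up views.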
Files
Original bundle
Name: vmv20251229.pdf
Size: 2.59 MB
Format: Adobe Portable Document Format