High-Performance Graphics 2023 - Symposium Papers
Browsing High-Performance Graphics 2023 - Symposium Papers by Subject "Antialiasing"
Item: Minimal Convolutional Neural Networks for Temporal Anti Aliasing (The Eurographics Association, 2023)
Herveau, Killian; Piochowiak, Max; Dachsbacher, Carsten; Bikker, Jacco; Gribble, Christiaan
Existing deep learning methods for temporal anti-aliasing (TAA) in rendering are either closed source or rely on upsampling networks with a large operation count that are expensive to evaluate. We propose a simple deep learning architecture for TAA that combines only a few common primitives and is easy to assemble and adapt to application needs. We use a fully-convolutional neural network architecture with recurrent temporal feedback, motion vectors, and depth values as input, and show that a simple network can produce satisfactory results. Our architecture template, for which we provide code, introduces a method that adapts to different temporal subpixel offsets for accumulation without increasing the operation count. To this end, convolutional layers cycle through a set of weights, one per temporal subpixel offset, while their operations remain fixed. We analyze the effect of this method on image quality and present different trade-offs for adapting the architecture. We show that our simple network performs markedly better than variance-clipping TAA, eliminating both flickering and ghosting without performing upsampling.

Item: Voxel-based Representations for Improved Filtered Appearance (The Eurographics Association, 2023)
Brito, Caio José Dos Santos; Poulin, Pierre; Teichrieb, Veronica; Bikker, Jacco; Gribble, Christiaan
Volumetric representations allow filtering of mesh-based complex 3D scenes to control both the efficiency and the quality of rendering. Unfortunately, directional variations in the visual appearance of a volume still hinder its adoption by the real-time rendering community. To alleviate this problem, we propose two simple structures: (1) a virtual mesh that encodes the directional distribution of colors and normals, and (2) a low-resolution subgrid of opacities that encodes directional visibility. We precompute these structures from a mesh-based scene into a regular voxelization. During display, we use simple rendering methods on the two structures to compute the image contribution of the appearance of a visible voxel, optimizing for efficiency and/or quality. The improved visual results compared to previous work are a step toward the integration of volumetric representations in real-time rendering.
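The weight-cycling idea in the first item above can be made concrete with a minimal PyTorch sketch: a tiny fully-convolutional network with recurrent feedback whose convolution layers select one weight set per temporal subpixel (jitter) offset, so the per-frame operation count never grows. This is not the authors' released code; the layer count, channel widths, and the jitter-sequence length of four are illustrative assumptions.

import torch
import torch.nn as nn

class CycledConv(nn.Module):
    """3x3 convolution that picks one of `num_offsets` weight sets based on
    the current subpixel-offset index; the op count matches a single conv."""
    def __init__(self, in_ch, out_ch, num_offsets=4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=1) for _ in range(num_offsets)
        )

    def forward(self, x, offset_idx):
        return self.convs[offset_idx % len(self.convs)](x)

class TinyTAA(nn.Module):
    """Fully-convolutional network with recurrent temporal feedback.
    Inputs: current color (3), motion vectors (2), depth (1), and the
    previous output fed back as history (3)."""
    def __init__(self, hidden=16, num_offsets=4):
        super().__init__()
        self.conv1 = CycledConv(3 + 2 + 1 + 3, hidden, num_offsets)
        self.conv2 = CycledConv(hidden, 3, num_offsets)
        self.act = nn.ReLU()

    def forward(self, color, motion, depth, history, offset_idx):
        x = torch.cat([color, motion, depth, history], dim=1)
        x = self.act(self.conv1(x, offset_idx))
        return self.conv2(x, offset_idx)  # anti-aliased frame, reused as history

# Usage: iterate over frames, cycling offset_idx through the jitter sequence.
net = TinyTAA()
color  = torch.rand(1, 3, 64, 64)
motion = torch.zeros(1, 2, 64, 64)
depth  = torch.rand(1, 1, 64, 64)
hist   = torch.zeros(1, 3, 64, 64)
for frame in range(8):
    hist = net(color, motion, depth, hist, offset_idx=frame % 4)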
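For the second item, a rough NumPy sketch can illustrate the two per-voxel structures it describes: a handful of directional appearance samples standing in for the virtual mesh of colors and normals, and a small opacity subgrid stepped along the view direction for directional visibility. The six-direction basis, the 2x2x2 subgrid resolution, and the lookup scheme are assumptions for illustration, not the paper's actual encoding.

import numpy as np

# Six axis-aligned directions as a stand-in directional basis.
DIRS = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                 [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)

class FilteredVoxel:
    def __init__(self, colors, normals, opacity_subgrid):
        self.colors = np.asarray(colors, float)            # (6, 3) RGB per direction
        self.normals = np.asarray(normals, float)          # (6, 3) normal per direction
        self.subgrid = np.asarray(opacity_subgrid, float)  # (2, 2, 2) opacities

    def shade(self, view_dir):
        """Return the color/normal sample whose direction best faces the viewer."""
        i = int(np.argmax(DIRS @ (-np.asarray(view_dir, float))))
        return self.colors[i], self.normals[i]

    def visibility(self, view_dir, steps=4):
        """Accumulate transmittance by stepping the opacity subgrid
        through the voxel along the view direction."""
        p = np.full(3, 0.5)                        # enter at the voxel center
        d = np.asarray(view_dir, float) / steps
        transmittance = 1.0
        for _ in range(steps):
            idx = np.clip((p * 2).astype(int), 0, 1)
            transmittance *= 1.0 - self.subgrid[tuple(idx)] / steps
            p = np.clip(p + d, 0.0, 1.0 - 1e-6)
        return transmittance

# Usage: query a precomputed voxel from a given view direction.
v = FilteredVoxel(colors=np.random.rand(6, 3), normals=DIRS,
                  opacity_subgrid=np.random.rand(2, 2, 2))
color, normal = v.shade(view_dir=[0.0, 0.0, -1.0])
t = v.visibility(view_dir=[0.0, 0.0, -1.0])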