Browsing by Author "Dolonius, Dan"
Now showing 1 - 2 of 2
Item: Spherical Gaussian Light-field Textures for Fast Precomputed Global Illumination
(The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Currius, Roc Ramon; Dolonius, Dan; Assarsson, Ulf; Sintorn, Erik
Editors: Panozzo, Daniele and Assarsson, Ulf

We describe a method to use Spherical Gaussians with free directions and arbitrary sharpness and amplitude to approximate the precomputed local light field for any point on a surface in a scene. This allows for a high-quality reconstruction of these light fields in a manner that can be used to render the surfaces with precomputed global illumination in real time, at very low cost in both memory and performance. We also extend this concept to represent the illumination-weighted environment visibility, allowing for high-quality reflections of the distant environment with both surface-material properties and visibility taken into account. We treat obtaining the Spherical Gaussians as an optimization problem, for which we train a Convolutional Neural Network to produce appropriate values for each of the Spherical Gaussians' parameters. We define this CNN in such a way that the produced parameters can be interpolated between adjacent local light fields while keeping the illumination at the intermediate points coherent.

Item: UV-free Texturing using Sparse Voxel DAGs
(The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Dolonius, Dan; Sintorn, Erik; Assarsson, Ulf
Editors: Panozzo, Daniele and Assarsson, Ulf

An application may have to load an unknown 3D model and, for enhanced realistic rendering, precompute values over the surface domain, such as light maps, ambient occlusion, or other global-illumination parameters. High-quality uv-unwrapping has several problems, such as seams, distortions, and wasted texture space. Additionally, procedurally generated scene content, perhaps created on the fly, can make manual uv-unwrapping impossible. Even when artist manipulation is feasible, good uv layouts can require expertise and be highly labor intensive. This paper investigates how to use Sparse Voxel DAGs (or DAGs for short) as one alternative to avoid uv mapping. The result is an algorithm enabling high compression ratios of both the voxel structure and the colors, which can be important for a baked scene to fit in GPU memory. Specifically, we enable practical usage for an automatic system by targeting efficient real-time mipmap filtering using compressed textures and by adding support for individual mesh voxelizations and resolutions in the same DAG. Furthermore, the latter increases the texture-compression ratios by up to 32% compared to using one global voxelization, increases DAG compression by 10-15% compared to using a DAG per mesh, and reduces color-bleeding problems for large mipmap filter sizes. The voxel filtering is more costly than standard hardware 2D-texture filtering. However, for full HD with deferred shading, it is optimized down to 2.5 +/- 0.5 ms for a custom multisampling filter (e.g., targeted at minification of low-frequency textures) and 5 +/- 2 ms for quad-linear mipmap filtering (e.g., for high-frequency textures). Multiple textures sharing the same voxelization can amortize the majority of this cost. Hence, these numbers cover 1-3 textures per pixel (Fig. 1c).
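
The first item above reconstructs the local light field at a surface point as a sum of Spherical Gaussian lobes. As a rough illustration of that representation only (not the paper's code; the lobe parameters are placeholders standing in for the per-texel values the paper's CNN would produce), the standard Spherical Gaussian basis a * exp(lambda * (dot(mu, v) - 1)) can be evaluated and summed like this:

    // Illustrative C++ sketch: evaluating a sum of Spherical Gaussian
    // lobes to reconstruct outgoing radiance in direction v at a point.
    // Parameter values here are placeholders, not data from the paper.
    #include <array>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    static float dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    struct SphericalGaussian {
        Vec3  mu;      // lobe axis (unit vector), free direction
        float lambda;  // sharpness (>= 0)
        Vec3  a;       // RGB amplitude
    };

    // Standard SG basis: a * exp(lambda * (dot(mu, v) - 1)).
    static Vec3 evalSG(const SphericalGaussian& g, const Vec3& v) {
        const float w = std::exp(g.lambda * (dot(g.mu, v) - 1.0f));
        return { g.a.x * w, g.a.y * w, g.a.z * w };
    }

    // Reconstruct the local light field as a sum of N lobes.
    template <std::size_t N>
    Vec3 evalLightField(const std::array<SphericalGaussian, N>& lobes, const Vec3& v) {
        Vec3 sum{0.0f, 0.0f, 0.0f};
        for (const auto& g : lobes) {
            const Vec3 c = evalSG(g, v);
            sum.x += c.x; sum.y += c.y; sum.z += c.z;
        }
        return sum;
    }

In the paper, the per-texel lobe parameters are produced by the CNN and are interpolated between adjacent local light fields; the sketch only shows the evaluation step that such parameters would feed.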
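
The second item stores baked surface data in a Sparse Voxel DAG, i.e., a sparse voxel octree in which identical subtrees are merged and shared between parents. A minimal sketch of a DAG lookup, assuming a simple node layout with an 8-bit child mask and per-child indices (an assumption for illustration; practical implementations usually compact the child pointers via a popcount over the mask), might look like this:

    // Illustrative C++ sketch (assumed layout, not the paper's code):
    // test whether a voxel is present in a Sparse Voxel DAG.
    #include <cstdint>
    #include <vector>

    struct DagNode {
        uint8_t  childMask;    // bit i set => child octant i exists
        uint32_t children[8];  // indices of children (unused slots ignored)
    };

    // True if voxel (x, y, z) exists in a DAG with 2^levels voxels per side.
    bool voxelExists(const std::vector<DagNode>& nodes, uint32_t root,
                     uint32_t x, uint32_t y, uint32_t z, int levels) {
        uint32_t nodeIdx = root;
        for (int level = levels - 1; level >= 0; --level) {
            // Pick the child octant from the current coordinate bits
            // (one possible bit convention).
            const uint32_t octant = (((x >> level) & 1u) << 2) |
                                    (((y >> level) & 1u) << 1) |
                                     ((z >> level) & 1u);
            const DagNode& node = nodes[nodeIdx];
            if (!(node.childMask & (1u << octant)))
                return false;            // empty subtree: voxel not present
            nodeIdx = node.children[octant];
        }
        return true;                     // reached a leaf voxel
    }

The mipmap and multisampling texture filtering described in the abstract would blend colors gathered from several such traversals across resolution levels; that part, and the color compression, are omitted here.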