Browsing by Author "Belcour, Laurent"
Now showing 1 - 4 of 4
Item: A Data-Driven Paradigm for Precomputed Radiance Transfer (ACM Association for Computing Machinery, 2022)
Belcour, Laurent; Deliot, Thomas; Barbier, Wilhem; Soler, Cyril; Josef Spjut; Marc Stamminger; Victor Zordan
In this work, we explore a change of paradigm to build Precomputed Radiance Transfer (PRT) methods in a data-driven way. This paradigm shift allows us to alleviate the difficulties of building traditional PRT methods, such as defining a reconstruction basis or coding a dedicated path tracer to compute a transfer function. Our objective is to pave the way for machine-learned methods by providing a simple baseline algorithm. More specifically, we demonstrate real-time rendering of indirect illumination in hair and surfaces from a few measurements of direct lighting. We build our baseline from pairs of direct and indirect illumination renderings, using only standard tools such as Singular Value Decomposition (SVD) to extract both the reconstruction basis and the transfer function. (See sketch 1 below.)

Item: Distributing Monte Carlo Errors as a Blue Noise in Screen Space by Permuting Pixel Seeds Between Frames (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Heitz, Eric; Belcour, Laurent; Boubekeur, Tamy and Sen, Pradeep
Recent work has shown that distributing Monte Carlo errors as a blue noise in screen space improves the perceptual quality of rendered images. However, obtaining such distributions remains an open problem with high sample counts and high-dimensional rendering integrals. In this paper, we introduce a temporal algorithm that aims at overcoming these limitations. Our algorithm is applicable whenever multiple frames are rendered, typically for animated sequences or interactive applications. It locally permutes the pixel sequences (represented by their seeds) to improve the error distribution across frames. Our approach works regardless of the sample count or the dimensionality and significantly improves the images in low-varying screen-space regions under coherent motion. Furthermore, it adds negligible overhead compared to the rendering time. (See sketch 2 below.)

Item: One-to-Many Spectral Upsampling of Reflectances and Transmittances (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Belcour, Laurent; Barla, Pascal; Guennebaud, Gaël; Ritschel, Tobias; Weidlich, Andrea
Spectral rendering is essential for the production of physically plausible synthetic images, but it requires introducing several changes in the content generation pipeline. In particular, the authoring of spectral material properties (e.g., albedo maps, indices of refraction, transmittance coefficients) raises new problems. While a wide range of computer graphics methods exists to upsample an RGB color to a spectrum, they all provide a one-to-one mapping. This limits the ability to control interesting color changes such as the Usambara effect or metameric spectra. In this work, we introduce a one-to-many mapping and show how to explore the set of all spectra reproducing a given input color. We apply this method to different color-changing effects such as vathochromism (the change of color with depth) and metamerism. (See sketch 3 below.)

Item: Real-Time Rendering of Glinty Appearances using Distributed Binomial Laws on Anisotropic Grids (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Deliot, Thomas; Belcour, Laurent; Bikker, Jacco; Gribble, Christiaan
In this work, we render glittery materials caused by discrete flakes on the surface in real time. To achieve this, one has to count the number of flakes reflecting the light towards the camera within every texel covered by a given pixel footprint. To do so, we derive a counting method for arbitrary footprints that, unlike previous work, outputs the correct statistics. We combine this counting method with an anisotropic parameterization of the texture space that reduces the number of texels falling under a pixel footprint. This allows our method to run with stable performance, 1.5x to 5x faster than the state of the art. (See sketch 4 below.)
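Sketch 1: a minimal sketch of an SVD-based, data-driven PRT baseline in Python/NumPy, assuming renderings are flattened into the columns of two matrices. The function names (fit_transfer, apply_transfer), the rank, and the data layout are illustrative assumptions, not the paper's implementation.

import numpy as np

def fit_transfer(direct, indirect, rank=16):
    # direct, indirect: (n_pixels, n_samples) matrices whose columns are
    # flattened renderings under matching lighting conditions (assumed layout).
    U, _, _ = np.linalg.svd(indirect, full_matrices=False)
    basis = U[:, :rank]                      # reconstruction basis (n_pixels, rank)
    coeffs = basis.T @ indirect              # training coefficients (rank, n_samples)
    # Least-squares linear map from a direct rendering to basis coefficients.
    transfer, *_ = np.linalg.lstsq(direct.T, coeffs.T, rcond=None)
    return basis, transfer.T                 # transfer: (rank, n_pixels)

def apply_transfer(basis, transfer, direct_new):
    # Predict indirect illumination for a new flattened direct-lighting rendering.
    return basis @ (transfer @ direct_new)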
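Sketch 2: a hedged sketch of the seed-permutation idea in Python/NumPy: inside each small screen-space tile, pixel seeds are reassigned so that the ranking of estimated per-pixel errors matches the ranking of a precomputed blue-noise mask. The tile size, the error estimate, and the matching rule are simplifications, not the authors' exact heuristic.

import numpy as np

def permute_seeds(seeds, prev_error, blue_noise, tile=4):
    # seeds, prev_error, blue_noise: (H, W) arrays. Returns a copy of 'seeds'
    # permuted inside each tile so that larger errors land where the mask is
    # larger, pushing the screen-space error distribution toward blue noise.
    out = seeds.copy()
    H, W = seeds.shape
    for y in range(0, H, tile):
        for x in range(0, W, tile):
            block = out[y:y+tile, x:x+tile]
            e = prev_error[y:y+tile, x:x+tile].ravel()
            m = blue_noise[y:y+tile, x:x+tile].ravel()
            s = block.ravel()
            permuted = np.empty_like(s)
            # The pixel with the k-th smallest error receives the seed of the
            # pixel with the k-th smallest mask value.
            permuted[np.argsort(e)] = s[np.argsort(m)]
            out[y:y+tile, x:x+tile] = permuted.reshape(block.shape)
    return out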
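Sketch 3: an illustration of obtaining many spectra that map to the same color, using the classic "metameric black" construction (perturbations taken from the null space of the spectrum-to-tristimulus projection). This is not the paper's parameterization; cmfs, illuminant, the sampling scale, and the rejection test are assumptions.

import numpy as np

def metamers(base_spectrum, cmfs, illuminant, n=8, scale=0.05, seed=None):
    # base_spectrum: (n_bins,) reflectance in [0, 1]; cmfs: (3, n_bins) color
    # matching functions; illuminant: (n_bins,) spectral power distribution.
    rng = np.random.default_rng(seed)
    A = cmfs * illuminant                    # (3, n_bins): spectrum -> tristimulus
    _, _, Vt = np.linalg.svd(A)              # rows 3.. of Vt span the null space of A
    null_basis = Vt[3:]
    out = []
    for _ in range(100 * n):                 # bounded number of attempts
        if len(out) == n:
            break
        # A "metameric black": a perturbation that leaves the projected color unchanged.
        black = scale * rng.standard_normal(null_basis.shape[0]) @ null_basis
        candidate = base_spectrum + black
        # Keep only physically valid reflectances; clipping would break the exact match.
        if candidate.min() >= 0.0 and candidate.max() <= 1.0:
            out.append(candidate)
    return np.stack(out) if out else np.empty((0, base_spectrum.size))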
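Sketch 4: a toy sketch of the counting idea in Python/NumPy: the number of flakes in a texel that reflect toward the camera is drawn from a binomial law with a deterministic per-texel seed, so glints stay stable across frames. The spatial-hash seed, flakes_per_texel, and p_reflect are placeholders, and partial texel coverage (which the paper handles) is ignored here.

import numpy as np

def glint_count(footprint_texels, flakes_per_texel, p_reflect):
    # footprint_texels: iterable of (ix, iy) nonnegative texel coordinates
    # covered by the pixel footprint. Each texel draws its count of
    # camera-facing flakes from a binomial law seeded by its coordinates,
    # so the same texel produces the same flakes every frame.
    total = 0
    for ix, iy in footprint_texels:
        rng = np.random.default_rng((ix * 73856093) ^ (iy * 19349663))
        total += int(rng.binomial(flakes_per_texel, p_reflect))
    return total

# Example usage with placeholder values:
# glint_count([(10, 4), (10, 5), (11, 4)], flakes_per_texel=1000, p_reflect=0.002)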