Rendering 2021 - DL-only Track
Browsing Rendering 2021 - DL-only Track by Subject "Computing methodologies"
Now showing 1 - 9 of 9
Item: A Compact Representation for Fluorescent Spectral Data (The Eurographics Association, 2021)
Hua, Qingqin; Fichet, Alban; Wilkie, Alexander; Bousseau, Adrien and McGuire, Morgan
We propose a technique to efficiently importance sample and store fluorescent spectral data. Fluorescence behaviour is properly represented as a re-radiation matrix: for a given input wavelength, this matrix indicates how much energy is re-emitted at all other wavelengths. However, such a 2D representation has a significant memory footprint, especially when a scene contains a high number of fluorescent objects, or fluorescent textures. We propose to use a Gaussian Mixture domain to model re-radiation, which allows us to significantly reduce the memory footprint. Instead of storing the full matrix, we work with a set of Gaussian parameters that also allow direct importance sampling. When accuracy is a concern, one can still use the re-radiation matrix data and simply benefit from the importance sampling provided by the Gaussian Mixture. Our method is useful when numerous fluorescent materials are present in a scene, and in particular for textures with fluorescent components.

Item: Fast Analytic Soft Shadows from Area Lights (The Eurographics Association, 2021)
Kt, Aakash; Sakurikar, Parikshit; Narayanan, P. J.; Bousseau, Adrien and McGuire, Morgan
In this paper, we present the first method to analytically compute shading and soft shadows for physically based BRDFs from arbitrary area lights. We observe that for a given shading point, shadowed radiance can be computed by analytically integrating over the visible portion of the light source using Linearly Transformed Cosines (LTCs). We present a structured approach to project, re-order and horizon-clip spherical polygons of arbitrary lights and occluders. The visible portion is then computed by multiple repetitive set difference operations. Our method produces noise-free shading and soft shadows and outperforms ray tracing within the same compute budget. We further optimize our algorithm for convex light and occluder meshes by projecting the silhouette edges as viewed from the shading point to a spherical polygon, and performing one set difference operation, thereby achieving a speedup of more than 2x. We analyze the run-time performance of our method and show rendering results on several scenes with multiple light sources and complex occluders. We demonstrate superior results compared to prior work that uses analytic shading with stochastic shadow computation for area lights.
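As a rough illustration of the kind of analytic integration described above, the sketch below implements the standard Linearly Transformed Cosines polygon integral that such methods build on; the spherical set-difference visibility clipping that constitutes the paper's contribution is not shown, and the function names, the omission of horizon clipping, and the clamped-cosine edge formula are generic illustration rather than the authors' code.

```python
import numpy as np

def edge_integral(v0, v1):
    # Clamped-cosine integral over one great-arc edge of a spherical polygon
    # (the classic Lambert form-factor edge term); the shading normal is
    # assumed to be +z in this local frame.
    cos_theta = np.clip(np.dot(v0, v1), -1.0, 1.0)
    theta = np.arccos(cos_theta)
    cross = np.cross(v0, v1)
    norm = np.linalg.norm(cross)
    return 0.0 if norm < 1e-9 else theta * cross[2] / norm

def ltc_polygon_irradiance(verts, M_inv):
    # verts: (n, 3) light-polygon vertices in the shading frame (counter-clockwise);
    # M_inv: inverse of the 3x3 LTC matrix fitted for the BRDF, view and roughness.
    # Horizon clipping of the polygon is omitted for brevity.
    p = verts @ M_inv.T                                  # into the cosine-distribution space
    p = p / np.linalg.norm(p, axis=1, keepdims=True)     # project onto the unit sphere
    total = sum(edge_integral(p[i], p[(i + 1) % len(p)]) for i in range(len(p)))
    return max(total / (2.0 * np.pi), 0.0)
```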
Item: Firefly Removal in Monte Carlo Rendering with Adaptive Median of meaNs (The Eurographics Association, 2021)
Buisine, Jérôme; Delepoulle, Samuel; Renaud, Christophe; Bousseau, Adrien and McGuire, Morgan
Estimating the rendering equation using Monte Carlo methods produces photorealistic images by evaluating a large number of samples of the rendering equation per pixel. The final value for each pixel is then calculated as the average of the contribution of each sample. The mean is a good estimator, but not necessarily a robust one, which explains the appearance of visual artifacts such as fireflies, due to an overestimation of the value of the mean. The MoN (Median of meaNs) is a more robust estimator than the mean, which reduces the impact of the outliers that cause these fireflies. However, this method converges more slowly than the mean, which reduces its interest for pixels whose distribution does not contain outliers. To overcome this problem, we propose an extension of the MoN based on the Gini coefficient, in order to exploit the best of the two estimators during the computation. This approach is simple to implement regardless of the integrator and does not require complex parameterization. Finally, it incurs only a small computational overhead and leads to the disappearance of fireflies.
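The following is a minimal sketch of how a Gini-driven switch between the mean and the median of means could look for a single pixel's samples; the bucket count, threshold and function names are illustrative assumptions, not the parameterization used in the paper.

```python
import numpy as np

def gini(values):
    # Gini coefficient of the bucket means: near 0 when they are evenly
    # distributed, larger when a few buckets dominate (suggesting outliers).
    v = np.sort(np.abs(values))
    s = v.sum()
    if s == 0.0:
        return 0.0
    n = len(v)
    cum = np.cumsum(v)
    return (n + 1.0 - 2.0 * cum.sum() / s) / n

def adaptive_pixel_estimate(samples, n_buckets=11, gini_threshold=0.25):
    # Split one pixel's Monte Carlo samples into buckets (assumes at least
    # n_buckets samples), then pick an estimator based on the Gini coefficient.
    samples = np.asarray(samples, dtype=np.float64)
    means = np.array([b.mean() for b in np.array_split(samples, n_buckets)])
    if gini(means) < gini_threshold:
        return samples.mean()        # well-behaved pixel: the mean converges faster
    return np.median(means)          # outliers present: fall back to the median of means
```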
Item: A Low-Dimensional Perceptual Space for Intuitive BRDF Editing (The Eurographics Association, 2021)
Shi, Weiqi; Wang, Zeyu; Soler, Cyril; Rushmeier, Holly; Bousseau, Adrien and McGuire, Morgan
Understanding and characterizing material appearance based on human perception is challenging because of the high dimensionality and nonlinearity of reflectance data. We refer to the process of identifying specific characteristics of material appearance within the same category as material estimation, in contrast to material categorization, which focuses on identifying inter-category differences [FNG15]. In this paper, we present a method to simulate the material estimation process based on human perception. We create a continuous perceptual space for measured tabulated data based on its underlying low-dimensional manifold. Unlike many previous works that only address individual perceptual attributes (such as gloss), we focus on extracting all possible dimensions that can explain the perceived differences between appearances. Additionally, we propose a new material editing interface that combines image navigation and sliders to visualize each perceptual dimension and facilitate the editing of tabulated BRDFs. We conduct a user study to evaluate the efficacy of the perceptual space and the interface in terms of appearance matching.

Item: MatMorpher: A Morphing Operator for SVBRDFs (The Eurographics Association, 2021)
Gauthier, Alban; Thiery, Jean-Marc; Boubekeur, Tamy; Bousseau, Adrien and McGuire, Morgan
We present a novel morphing operator for spatially-varying bidirectional reflectance distribution functions. Our operator takes as input digital materials modeled using a set of 2D texture maps which control the typical parameters of a standard BRDF model. It also takes an interpolation map, defined over the same texture domain, which modulates the interpolation at each texel of the material. Our algorithm is based on a transport mechanism which continuously transforms the individual source maps into their destination counterparts in a feature-sensitive manner. The underlying non-rigid deformation is computed using an energy minimization over a transport grid and accounts for the user-selected dominant features present in the input materials. During this process, we carefully preserve details by mixing the material channels using a histogram-aware color blending combined with a normal reorientation. As a result, our method allows exploring large regions of the space of possible materials, using exemplars as anchors and our interpolation scheme as a means of navigation. We also give details about our real-time implementation, designed to map faithfully to the standard physically-based rendering workflow and to let users interactively steer the morphing process.

Item: Modeling Surround-aware Contrast Sensitivity (The Eurographics Association, 2021)
Yi, Shinyoung; Jeon, Daniel S.; Serrano, Ana; Jeong, Se-Yoon; Kim, Hui-Yong; Gutierrez, Diego; Kim, Min H.; Bousseau, Adrien and McGuire, Morgan
Despite advances in display technology, many existing applications rely on psychophysical datasets of human perception gathered using older, sometimes outdated displays. As a result, there exists the underlying assumption that such measurements can be carried over to the new viewing conditions of more modern technology. We have conducted a series of psychophysical experiments to explore contrast sensitivity using a state-of-the-art HDR display, taking into account not only the spatial frequency and luminance of the stimuli but also their surrounding luminance levels. From our data, we have derived a novel surround-aware contrast sensitivity function (CSF), which predicts human contrast sensitivity more accurately. We additionally provide a practical version that retains the benefits of our full model, while enabling easy backward compatibility and consistently producing good results across many existing applications that make use of CSF models. We show examples of effective HDR video compression using a transfer function derived from our CSF, tone-mapping, and improved accuracy in visual difference prediction.

Item: Moment-based Constrained Spectral Uplifting (The Eurographics Association, 2021)
Tódová, Lucia; Wilkie, Alexander; Fascione, Luca; Bousseau, Adrien and McGuire, Morgan
Spectral rendering is increasingly used in appearance-critical rendering workflows due to its ability to predict colour values under varying illuminants. However, directly modelling assets via input of spectral data is a tedious process, and if asset appearance is defined via artist-created textures, these are drawn in colour space, i.e. RGB. Converting these RGB values to equivalent spectral representations is an ambiguous problem, for which robust techniques have been proposed only comparatively recently. However, other than the resulting RGB values matching under the illuminant the RGB space is defined for (usually D65), these uplifting techniques do not provide the user with further control over the resulting spectral shape. We propose a method for constraining the spectral uplifting process so that, for a finite number of input spectra that need to be preserved, it always yields the correct uplifted spectrum for the corresponding RGB value. Due to constraints placed on the uplifting process, target RGB values that are in close proximity to one another uplift to spectra within the same metameric family, so that textures with colour variations can be meaningfully uplifted. Renderings uplifted via our method show minimal discrepancies when compared to the original objects.
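The matching constraint mentioned above (an uplifted spectrum must reproduce its source RGB under the illuminant the RGB space is defined for, usually D65) can be verified with a simple round trip from spectrum to linear sRGB. The sketch below assumes the CIE 1931 colour-matching functions and the illuminant spectrum are already available as arrays sampled on the same wavelength grid; loading that tabulated data is not shown, and the function is a generic check rather than part of the authors' uplifting method.

```python
import numpy as np

def spectrum_to_srgb(reflectance, illuminant, cmfs, wavelengths):
    # reflectance, illuminant: (n,) samples over `wavelengths` (nm);
    # cmfs: (n, 3) CIE 1931 colour-matching functions on the same grid
    # (loading these tabulated values is assumed and not shown here).
    dl = np.gradient(wavelengths)
    weight = illuminant * dl
    xyz = (cmfs * (reflectance * weight)[:, None]).sum(axis=0)
    xyz /= (cmfs[:, 1] * weight).sum()               # normalise so the white point has Y = 1
    xyz_to_srgb = np.array([[ 3.2406, -1.5372, -0.4986],
                            [-0.9689,  1.8758,  0.0415],
                            [ 0.0557, -0.2040,  1.0570]])
    return xyz_to_srgb @ xyz                         # linear sRGB; gamma not applied

# An uplifted texel would be considered correct when spectrum_to_srgb(uplifted, d65, ...)
# matches its source RGB value to within a small tolerance.
```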
Item: NeLF: Neural Light-transport Field for Portrait View Synthesis and Relighting (The Eurographics Association, 2021)
Sun, Tiancheng; Lin, Kai-En; Bi, Sai; Xu, Zexiang; Ramamoorthi, Ravi; Bousseau, Adrien and McGuire, Morgan
Human portraits exhibit various appearances when observed from different views under different lighting conditions. We can easily imagine how the face will look in another setup, but computer algorithms still fail on this problem given limited observations. To this end, we present a system for portrait view synthesis and relighting: given multiple portraits, we use a neural network to predict the light-transport field in 3D space, and from the predicted Neural Light-transport Field (NeLF) produce a portrait from a new camera view under new environmental lighting. Our system is trained on a large number of synthetic models, and can generalize to different synthetic and real portraits under various lighting conditions. Our method achieves simultaneous view synthesis and relighting given multi-view portraits as input, and achieves state-of-the-art results.

Item: Single-image Full-body Human Relighting (The Eurographics Association, 2021)
Lagunas, Manuel; Sun, Xin; Yang, Jimei; Villegas, Ruben; Zhang, Jianming; Shu, Zhixin; Masia, Belen; Gutierrez, Diego; Bousseau, Adrien and McGuire, Morgan
We present a single-image data-driven method to automatically relight images with full-body humans in them. Our framework is based on a realistic scene decomposition leveraging precomputed radiance transfer (PRT) and spherical harmonics (SH) lighting. In contrast to previous work, we lift the assumption of Lambertian materials and explicitly model diffuse and specular reflectance in our data. Moreover, we introduce an additional light-dependent residual term that accounts for errors in the PRT-based image reconstruction. We propose a new deep learning architecture, tailored to the decomposition performed in PRT, that is trained using a combination of L1, logarithmic, and rendering losses. Our model outperforms the state of the art for full-body human relighting on both synthetic images and photographs.
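For context on the PRT and SH decomposition the last abstract relies on, here is a minimal reconstruction step, assuming per-pixel transport coefficients and an optional residual term have already been predicted; the array shapes and the function name are illustrative and do not correspond to the paper's actual interface.

```python
import numpy as np

def relight(transport, sh_lighting, residual=None):
    # transport:   (H, W, 3, K) per-pixel, per-channel transport coefficients
    #              (K = 9 for 2nd-order spherical harmonics).
    # sh_lighting: (K, 3) RGB SH coefficients of the target environment light.
    # residual:    optional (H, W, 3) light-dependent correction term.
    # Radiance is the per-channel dot product of transport and lighting
    # coefficients, as in standard precomputed radiance transfer.
    image = np.einsum('hwck,kc->hwc', transport, sh_lighting)
    if residual is not None:
        image = image + residual
    return np.clip(image, 0.0, None)
```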