Rendering 2021 - DL-only Track
Browsing Rendering 2021 - DL-only Track by Issue Date
Now showing 1 - 20 of 22
Material and Lighting Reconstruction for Complex Indoor Scenes with Texture-space Differentiable Rendering (The Eurographics Association, 2021)
Nimier-David, Merlin; Dong, Zhao; Jakob, Wenzel; Kaplanyan, Anton; Bousseau, Adrien and McGuire, Morgan
Modern geometric reconstruction techniques achieve impressive levels of accuracy in indoor environments. However, such captured data typically keeps lighting and materials entangled. It is then impossible to manipulate the resulting scenes in photorealistic settings, such as augmented/mixed reality and robotics simulation. Moreover, various imperfections in the captured data, such as missing detailed geometry, camera misalignment, uneven coverage of observations, etc., pose challenges for scene recovery. To address these challenges, we present a robust optimization pipeline based on differentiable rendering to recover physically based materials and illumination, leveraging RGB and geometry captures. We introduce a novel texture-space sampling technique and carefully chosen inductive priors to help guide reconstruction, avoiding low-quality or implausible local minima. Our approach enables robust and high-resolution reconstruction of complex materials and illumination in captured indoor scenes. This enables a variety of applications including novel view synthesis, scene editing, local & global relighting, synthetic data augmentation, and other photorealistic manipulations.

Zero-variance Transmittance Estimation (The Eurographics Association, 2021)
d'Eon, Eugene; Novák, Jan; Bousseau, Adrien and McGuire, Morgan
We apply zero-variance theory to the Volterra integral formulation of volumetric transmittance. We solve for the guided sampling decisions in this framework that produce zero-variance ratio tracking and next-flight ratio tracking estimators. In both cases, a zero-variance estimate arises by colliding only with the null particles along the interval. For ratio tracking, this is equivalent to residual ratio tracking with a perfect control. The next-flight zero-variance estimator is of the collision type and can only produce zero-variance estimates if the random walk never terminates. In drawing these new connections, we enrich the theory of Monte Carlo transmittance estimation and provide a new rigorous path-stretching interpretation of residual ratio tracking.
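For readers who want a concrete anchor for the tracking estimators discussed above, here is a minimal sketch of classical ratio tracking in a 1D heterogeneous medium. The linear extinction function and the constant majorant are illustrative placeholders, and the sketch shows only the plain estimator, not the paper's zero-variance guiding of the sampling decisions.

```python
import math
import random

def transmittance_ratio_tracking(sigma_t, sigma_maj, t_max, rng=random):
    """One ratio-tracking sample of T = exp(-int_0^t_max sigma_t(s) ds).

    sigma_t:   extinction along the ray (must satisfy sigma_t <= sigma_maj)
    sigma_maj: constant majorant of the extinction
    """
    t, weight = 0.0, 1.0
    while True:
        # Sample the next tentative collision in the majorant medium.
        t -= math.log(1.0 - rng.random()) / sigma_maj
        if t >= t_max:
            return weight  # the walk escaped the interval
        # Multiply by the probability that this collision was a null collision.
        weight *= 1.0 - sigma_t(t) / sigma_maj

# Linearly varying extinction: the integral over [0, 1] is 0.7.
sigma = lambda s: 0.5 + 0.4 * s
estimate = sum(transmittance_ratio_tracking(sigma, 1.0, 1.0)
               for _ in range(100_000)) / 100_000
print(estimate, math.exp(-0.7))  # the two values should nearly agree
```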
A Low-Dimensional Perceptual Space for Intuitive BRDF Editing (The Eurographics Association, 2021)
Shi, Weiqi; Wang, Zeyu; Soler, Cyril; Rushmeier, Holly; Bousseau, Adrien and McGuire, Morgan
Understanding and characterizing material appearance based on human perception is challenging because of the high dimensionality and nonlinearity of reflectance data. We refer to the process of identifying specific characteristics of material appearance within the same category as material estimation, in contrast to material categorization, which focuses on identifying inter-category differences [FNG15]. In this paper, we present a method to simulate the material estimation process based on human perception. We create a continuous perceptual space for measured tabulated data based on its underlying low-dimensional manifold. Unlike many previous works that only address individual perceptual attributes (such as gloss), we focus on extracting all possible dimensions that can explain the perceived differences between appearances. Additionally, we propose a new material editing interface that combines image navigation and sliders to visualize each perceptual dimension and facilitate the editing of tabulated BRDFs. We conduct a user study to evaluate the efficacy of the perceptual space and the interface in terms of appearance matching.

A Compact Representation for Fluorescent Spectral Data (The Eurographics Association, 2021)
Hua, Qingqin; Fichet, Alban; Wilkie, Alexander; Bousseau, Adrien and McGuire, Morgan
We propose a technique to efficiently importance sample and store fluorescent spectral data. Fluorescence behaviour is properly represented as a re-radiation matrix: for a given input wavelength, this matrix indicates how much energy is re-emitted at all other wavelengths. However, such a 2D representation has a significant memory footprint, especially when a scene contains a high number of fluorescent objects or fluorescent textures. We propose to model re-radiation with a Gaussian mixture, which allows us to significantly reduce the memory footprint. Instead of storing the full matrix, we work with a set of Gaussian parameters that also allow direct importance sampling. When accuracy is a concern, one can still use the re-radiation matrix data and simply benefit from the importance sampling provided by the Gaussian mixture. Our method is useful when numerous fluorescent materials are present in a scene, and in particular for textures with fluorescent components.
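To make the storage/sampling trade-off above concrete, the sketch below draws emission wavelengths from a small 1D Gaussian mixture for one fixed input wavelength. The weights, means and standard deviations are made-up placeholders rather than fitted re-radiation data, and the full method works with a mixture over the 2D re-radiation domain.

```python
import math
import random

# Placeholder mixture over emission wavelengths (nm) for one input wavelength.
MIXTURE = [(0.7, 520.0, 15.0),   # (weight, mean, stddev)
           (0.3, 560.0, 25.0)]

def sample_emission_wavelength(rng=random):
    """Draw an output wavelength with probability given by the mixture."""
    u = rng.random()
    for weight, mean, std in MIXTURE:
        if u < weight:
            return rng.gauss(mean, std)
        u -= weight
    return rng.gauss(MIXTURE[-1][1], MIXTURE[-1][2])  # numerical safety net

def mixture_pdf(wavelength):
    """Density of the mixture, needed to weight the Monte Carlo estimator."""
    return sum(w * math.exp(-0.5 * ((wavelength - m) / s) ** 2)
               / (s * math.sqrt(2.0 * math.pi))
               for w, m, s in MIXTURE)
```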
Human Hair Inverse Rendering using Multi-View Photometric Data (The Eurographics Association, 2021)
Sun, Tiancheng; Nam, Giljoo; Aliaga, Carlos; Hery, Christophe; Ramamoorthi, Ravi; Bousseau, Adrien and McGuire, Morgan
We introduce a hair inverse rendering framework to reconstruct high-fidelity 3D geometry of human hair, as well as its reflectance, which can be readily used for photorealistic rendering of hair. We take multi-view photometric data as input, i.e., a set of images taken from various viewpoints and different lighting conditions. Our method consists of two stages. First, we propose a novel solution for line-based multi-view stereo that yields accurate hair geometry from multi-view photometric data. Specifically, a per-pixel lightcode is proposed to efficiently solve the hair correspondence matching problem. Our new solution enables accurate and dense strand reconstruction from a smaller number of cameras than state-of-the-art work. In the second stage, we estimate hair reflectance properties using multi-view photometric data. A simplified BSDF model of hair strands is used for realistic appearance reproduction. Based on the 3D geometry of hair strands, we fit the longitudinal roughness and find the single strand color. We show that our method can faithfully reproduce the appearance of human hair and provide realism for digital humans. We demonstrate the accuracy and efficiency of our method using photorealistic synthetic hair rendering data.

Single-image Full-body Human Relighting (The Eurographics Association, 2021)
Lagunas, Manuel; Sun, Xin; Yang, Jimei; Villegas, Ruben; Zhang, Jianming; Shu, Zhixin; Masia, Belen; Gutierrez, Diego; Bousseau, Adrien and McGuire, Morgan
We present a single-image data-driven method to automatically relight images with full-body humans in them. Our framework is based on a realistic scene decomposition leveraging precomputed radiance transfer (PRT) and spherical harmonics (SH) lighting. In contrast to previous work, we lift the assumptions on Lambertian materials and explicitly model diffuse and specular reflectance in our data. Moreover, we introduce an additional light-dependent residual term that accounts for errors in the PRT-based image reconstruction. We propose a new deep learning architecture, tailored to the decomposition performed in PRT, that is trained using a combination of L1, logarithmic, and rendering losses. Our model outperforms the state of the art for full-body human relighting both with synthetic images and photographs.

Semantic-Aware Generative Approach for Image Inpainting (The Eurographics Association, 2021)
Chanda, Deepankar; Kalantari, Nima Khademi; Bousseau, Adrien and McGuire, Morgan
We propose a semantic-aware generative method for image inpainting. Specifically, we divide the inpainting process into two tasks: estimating the semantic information inside the masked areas, and inpainting these regions using that semantic information. To effectively utilize the semantic information, we inject it into the generator through conditional feature modulation. Furthermore, we introduce an adversarial framework with dual discriminators to train our generator. In our system, an input consistency discriminator evaluates how well the inpainted region matches the surrounding unmasked areas, and a semantic consistency discriminator assesses whether the generated image is consistent with the semantic labels. To obtain the complete input semantic map, we first use a pre-trained network to compute the semantic map in the unmasked areas and inpaint it using a network trained in an adversarial manner. We compare our approach against state-of-the-art methods and show significant improvement in the visual quality of the results. Furthermore, we demonstrate the ability of our system to generate user-desired results by allowing a user to manually edit the estimated semantic map.

Practical Product Sampling for Single Scattering in Media (The Eurographics Association, 2021)
Villeneuve, Keven; Gruson, Adrien; Georgiev, Iliyan; Nowrouzezahrai, Derek; Bousseau, Adrien and McGuire, Morgan
Efficient Monte Carlo estimation of volumetric single scattering remains challenging due to various sources of variance, including transmittance, phase-function anisotropy, geometric cosine foreshortening, and squared-distance fall-off. We propose several complementary techniques to importance sample each of these terms and their product. First, we introduce an extension to equi-angular sampling that analytically accounts for the foreshortening at point-normal emitters. We then include transmittance and the phase function via Taylor-series expansion and/or warp composition. Scaling to complex mesh emitters is achieved through an adaptive tree-splitting scheme. We show improved performance over state-of-the-art baselines in a diversity of scenarios.
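The extension to point-normal emitters builds on classic equi-angular sampling. As a reference point, here is a minimal sketch of the unextended technique for an isotropic point light, which importance samples only the squared-distance fall-off along the ray; the paper layers foreshortening, transmittance and phase-function sampling on top of this.

```python
import math

def sample_equiangular(ray_org, ray_dir, light_pos, t_max, u):
    """Sample a distance t in [0, t_max] along a normalized ray with pdf
    proportional to 1 / distance(light, ray(t))^2."""
    # Signed distance of the light's closest point along the ray ...
    delta = sum((l - o) * d for l, o, d in zip(light_pos, ray_org, ray_dir))
    # ... and its perpendicular distance h from the ray.
    closest = [o + delta * d for o, d in zip(ray_org, ray_dir)]
    h = math.sqrt(sum((l - c) ** 2 for l, c in zip(light_pos, closest)))
    h = max(h, 1e-6)  # guard against a light lying exactly on the ray
    # Angles subtended by the segment endpoints, as seen from the light.
    theta_a = math.atan2(-delta, h)
    theta_b = math.atan2(t_max - delta, h)
    # Sampling uniformly in angle yields the inverse-squared-distance pdf.
    t = delta + h * math.tan((1.0 - u) * theta_a + u * theta_b)
    pdf = h / ((theta_b - theta_a) * (h * h + (t - delta) ** 2))
    return t, pdf

t, pdf = sample_equiangular([0, 0, 0], [1, 0, 0], [0.5, 1.0, 0.0], 2.0, 0.5)
```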
Firefly Removal in Monte Carlo Rendering with Adaptive Median of meaNs (The Eurographics Association, 2021)
Buisine, Jérôme; Delepoulle, Samuel; Renaud, Christophe; Bousseau, Adrien and McGuire, Morgan
Estimating the rendering equation using Monte Carlo methods produces photorealistic images by evaluating a large number of samples of the rendering equation per pixel. The final value for each pixel is then calculated as the average of the contribution of each sample. The mean is a good estimator, but not a robust one, which explains the appearance of visual artifacts such as fireflies, caused by an overestimation of the mean. The MoN (Median of meaNs) is a more robust estimator than the mean: it reduces the impact of the outliers that cause these fireflies. However, it converges more slowly than the mean, which makes it less attractive for pixels whose distribution contains no outliers. To overcome this problem, we propose an extension of the MoN based on the Gini coefficient, in order to exploit the best of the two estimators during the computation. This approach is simple to implement regardless of the integrator and requires no complex parameterization. Finally, it incurs a reduced computational overhead and leads to the disappearance of fireflies.
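The decision rule is simple enough to sketch directly. Below, a pixel's samples are split into k buckets; when the bucket means are homogeneous (low Gini coefficient) the fast-converging mean is kept, otherwise the robust median of means. The bucket count and threshold here are illustrative placeholders, not the paper's calibrated settings.

```python
import statistics

def gini(values):
    """Gini coefficient of non-negative values: ~0 for homogeneous bucket
    means, larger when a few buckets dominate (a firefly signature)."""
    v = sorted(values)
    n, total = len(v), sum(v)
    if total == 0.0:
        return 0.0
    weighted = sum(i * x for i, x in enumerate(v, start=1))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

def adaptive_median_of_means(samples, k=11, threshold=0.25):
    """Blend of the two estimators: mean when the distribution looks clean,
    median of bucket means when outliers are suspected."""
    buckets = [samples[i::k] for i in range(k)]
    means = [sum(b) / len(b) for b in buckets if b]
    if gini(means) < threshold:
        return sum(samples) / len(samples)
    return statistics.median(means)
```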
MatMorpher: A Morphing Operator for SVBRDFs (The Eurographics Association, 2021)
Gauthier, Alban; Thiery, Jean-Marc; Boubekeur, Tamy; Bousseau, Adrien and McGuire, Morgan
We present a novel morphing operator for spatially-varying bidirectional reflectance distribution functions. Our operator takes as input digital materials modeled using a set of 2D texture maps which control the typical parameters of a standard BRDF model. It also takes an interpolation map, defined over the same texture domain, which modulates the interpolation at each texel of the material. Our algorithm is based on a transport mechanism which continuously transforms the individual source maps into their destination counterparts in a feature-sensitive manner. The underlying non-rigid deformation is computed using an energy minimization over a transport grid and accounts for the user-selected dominant features present in the input materials. During this process, we carefully preserve details by mixing the material channels using a histogram-aware color blending combined with a normal reorientation. As a result, our method allows exploring large regions of the space of possible materials, using exemplars as anchors and our interpolation scheme as a means of navigation. We also give details about our real-time implementation, designed to map faithfully to the standard physically-based rendering workflow and to let users interactively control the morphing process.

Practical Ply-Based Appearance Modeling for Knitted Fabrics (The Eurographics Association, 2021)
Montazeri, Zahra; Gammelmark, Søren; Jensen, Henrik Wann; Zhao, Shuang; Bousseau, Adrien and McGuire, Morgan
Modeling the geometry and the appearance of knitted fabrics has been challenging due to their complex geometries and interactions with light. Previous surface-based models have difficulties capturing fine-grained knit geometries; micro-appearance models, on the other hand, typically store individual cloth fibers explicitly and are expensive to generate and render. Further, neither type of model offers the flexibility to accurately capture both the reflection and the transmission of light simultaneously. In this paper, we introduce an efficient technique to generate knit models with user-specified knitting patterns. Our model stores individual knit plies, with fiber-level details depicted using normal and tangent mapping. We evaluate our generated models using a wide array of knitting patterns. Further, we qualitatively compare renderings of our models to photos of real samples.

NeRF-Tex: Neural Reflectance Field Textures (The Eurographics Association, 2021)
Baatz, Hendrik; Granskog, Jonathan; Papas, Marios; Rousselle, Fabrice; Novák, Jan; Bousseau, Adrien and McGuire, Morgan
We investigate the use of neural fields for modeling diverse mesoscale structures, such as fur, fabric, and grass. Instead of using classical graphics primitives to model the structure, we propose to employ a versatile volumetric primitive represented by a neural reflectance field (NeRF-Tex), which jointly models the geometry of the material and its response to lighting. The NeRF-Tex primitive can be instantiated over a base mesh to "texture" it with the desired meso- and microscale appearance. We condition the reflectance field on user-defined parameters that control the appearance. A single NeRF texture thus captures an entire space of reflectance fields rather than one specific structure. This increases the gamut of appearances that can be modeled and provides a solution for combating repetitive texturing artifacts. We also demonstrate that NeRF textures naturally facilitate continuous level-of-detail rendering. Our approach unites the versatility and modeling power of neural networks with the artistic control needed for precise modeling of virtual scenes. While all our training data is currently synthetic, our work provides a recipe that can be further extended to extract complex, hard-to-model appearances from real images.

Sampling Clear Sky Models using Truncated Gaussian Mixtures (The Eurographics Association, 2021)
Vitsas, Nick; Vardis, Konstantinos; Papaioannou, Georgios; Bousseau, Adrien and McGuire, Morgan
Parametric clear sky models are often represented by simple analytic expressions that can efficiently generate plausible, natural radiance maps of the sky, taking into account expensive and hard-to-simulate atmospheric phenomena. In this work, we show how such models can be complemented by an equally simple, elegant and generic analytic continuous probability density function (PDF) that provides a very good approximation to the radiance-based distribution of the sky. We describe a fitting process that is used to properly parameterise a truncated Gaussian mixture model, which allows for exact, constant-time and minimal-memory sampling and evaluation of this PDF, without rejection sampling, an important property for practical applications in offline and real-time rendering. We present experiments in a standard importance sampling framework that showcase variance reduction approaching that of a more expensive inversion sampling method using Summed Area Tables.
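The "exact, constant-time, rejection-free" sampling claimed above follows from the inverse CDF of a truncated Gaussian. A minimal 1D sketch, with made-up component parameters standing in for the fitted sky model:

```python
import random
from statistics import NormalDist

COMPONENTS = [(0.6, 0.3, 0.2),   # (weight, mean, stddev) -- placeholders
              (0.4, 1.0, 0.4)]
LO, HI = 0.0, 1.5707963          # truncation bounds, e.g. elevation in [0, pi/2]

def sample_truncated_mixture(rng=random):
    """Rejection-free sampling: pick a component, then invert the CDF of
    its Gaussian restricted to [LO, HI]."""
    u, acc = rng.random(), 0.0
    for weight, mean, std in COMPONENTS:
        acc += weight
        if u <= acc:
            break
    dist = NormalDist(mean, std)
    c_lo, c_hi = dist.cdf(LO), dist.cdf(HI)
    # Remap a uniform variate into the covered CDF range, then invert.
    return dist.inv_cdf(c_lo + rng.random() * (c_hi - c_lo))
```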
Fast Analytic Soft Shadows from Area Lights (The Eurographics Association, 2021)
Kt, Aakash; Sakurikar, Parikshit; Narayanan, P. J.; Bousseau, Adrien and McGuire, Morgan
In this paper, we present the first method to analytically compute shading and soft shadows for physically based BRDFs from arbitrary area lights. We observe that for a given shading point, shadowed radiance can be computed by analytically integrating over the visible portion of the light source using Linearly Transformed Cosines (LTCs). We present a structured approach to project, re-order and horizon-clip spherical polygons of arbitrary lights and occluders. The visible portion is then computed by multiple repetitive set difference operations. Our method produces noise-free shading and soft shadows and outperforms ray tracing within the same compute budget. We further optimize our algorithm for convex light and occluder meshes by projecting the silhouette edges as viewed from the shading point to a spherical polygon, and performing one set difference operation, thereby achieving a speedup of more than 2x. We analyze the run-time performance of our method and show rendering results on several scenes with multiple light sources and complex occluders. We demonstrate superior results compared to prior work that uses analytic shading with stochastic shadow computation for area lights.
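The analytic integration step relies on standard LTC machinery: once a BRDF lobe has been warped into a clamped cosine, its integral over a spherical polygon has a closed form, summed edge by edge. Below is a minimal sketch of that base integral; the polygon is assumed already horizon-clipped and given as unit vectors in the shading frame, and the paper's actual contribution, the set-difference visibility computation, is not shown.

```python
import math

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def clamped_cosine_integral(verts):
    """Closed-form integral of the clamped cosine over a spherical polygon
    (unit vectors, counter-clockwise, already clipped to the upper
    hemisphere of the shading frame whose normal is +z)."""
    total = 0.0
    for i in range(len(verts)):
        a, b = verts[i], verts[(i + 1) % len(verts)]
        cos_theta = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
        c = cross(a, b)
        norm = math.sqrt(sum(x * x for x in c))
        if norm > 0.0:
            # Edge contribution: subtended angle times z of the edge normal.
            total += math.acos(cos_theta) * (c[2] / norm)
    return max(0.0, total / (2.0 * math.pi))
```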
Fast Polygonal Splatting using Directional Kernel Difference (The Eurographics Association, 2021)
Moroto, Yuji; Hachisuka, Toshiya; Umetani, Nobuyuki; Bousseau, Adrien and McGuire, Morgan
Depth-of-field (DoF) filtering is an important image-processing task for producing blurred images similar to those obtained with a large-aperture camera lens. DoF filtering applies an image convolution with a spatially varying kernel and is thus computationally expensive, even on modern computational hardware. In this paper, we introduce an approach for fast and accurate DoF filtering for polygonal kernels, where the value is constant inside the kernel. Our approach is an extension of the existing approach based on discrete differenced kernels. The performance gain here hinges upon the fact that kernels typically become sparse (i.e., mostly zero) when taking the difference. We extend the existing approach from conventional axis-aligned differences to non-axis-aligned differences. The key insight is that taking such differences along the directions of the edges makes polygonal kernels significantly sparser than taking the difference along the axis-aligned directions, as in existing studies. Compared to a naive image convolution, we achieve an order of magnitude speedup, allowing a real-time application of polygonal kernels even on high-resolution images.

Rendering 2021 DL Track: Frontmatter (The Eurographics Association, 2021)
Bousseau, Adrien; McGuire, Morgan; Bousseau, Adrien and McGuire, Morgan

Moment-based Constrained Spectral Uplifting (The Eurographics Association, 2021)
Tódová, Lucia; Wilkie, Alexander; Fascione, Luca; Bousseau, Adrien and McGuire, Morgan
Spectral rendering is increasingly used in appearance-critical rendering workflows due to its ability to predict colour values under varying illuminants. However, directly modelling assets via input of spectral data is a tedious process, and if asset appearance is defined via artist-created textures, these are drawn in colour space, i.e. RGB. Converting these RGB values to equivalent spectral representations is an ambiguous problem, for which robust techniques have been proposed only comparatively recently. However, other than the resulting RGB values matching under the illuminant the RGB space is defined for (usually D65), these uplifting techniques do not provide the user with further control over the resulting spectral shape. We propose a method for constraining the spectral uplifting process so that, for a finite number of input spectra that need to be preserved, it always yields the correct uplifted spectrum for the corresponding RGB value. Due to constraints placed on the uplifting process, target RGB values that are in close proximity to one another uplift to spectra within the same metameric family, so that textures with colour variations can be meaningfully uplifted. Renderings uplifted via our method show minimal discrepancies when compared to the original objects.

NeLF: Neural Light-transport Field for Portrait View Synthesis and Relighting (The Eurographics Association, 2021)
Sun, Tiancheng; Lin, Kai-En; Bi, Sai; Xu, Zexiang; Ramamoorthi, Ravi; Bousseau, Adrien and McGuire, Morgan
Human portraits exhibit various appearances when observed from different views under different lighting conditions. We can easily imagine how the face will look in another setup, but computer algorithms still fail on this problem given limited observations. To this end, we present a system for portrait view synthesis and relighting: given multiple portraits, we use a neural network to predict the light-transport field in 3D space, and from the predicted Neural Light-transport Field (NeLF) produce a portrait from a new camera view under a new environmental lighting. Our system is trained on a large number of synthetic models, and can generalize to different synthetic and real portraits under various lighting conditions. Our method achieves simultaneous view synthesis and relighting given multi-view portraits as the input, and achieves state-of-the-art results.

Appearance-Driven Automatic 3D Model Simplification (The Eurographics Association, 2021)
Hasselgren, Jon; Munkberg, Jacob; Lehtinen, Jaakko; Aittala, Miika; Laine, Samuli; Bousseau, Adrien and McGuire, Morgan
We present a suite of techniques for jointly optimizing triangle meshes and shading models to match the appearance of reference scenes. This capability has a number of uses, including appearance-preserving simplification of extremely complex assets, conversion between rendering systems, and even conversion between geometric scene representations. We follow and extend the classic analysis-by-synthesis family of techniques: enabled by a highly efficient differentiable renderer and modern nonlinear optimization algorithms, our results are driven to minimize the image-space difference to the target scene when rendered in similar viewing and lighting conditions. As the only signals driving the optimization are differences in rendered images, the approach is highly general and versatile: it easily supports many different forward rendering models such as normal mapping, spatially-varying BRDFs, displacement mapping, etc. Supervision through images only is also key to the ability to easily convert between rendering systems and scene representations. We output triangle meshes with textured materials to ensure that the models render efficiently on modern graphics hardware and benefit from, e.g., hardware-accelerated rasterization, ray tracing, and filtered texture lookups. Our system is integrated in a small Python code base, and can be applied at high resolutions and on large models. We describe several use cases, including mesh decimation, level-of-detail generation, seamless mesh filtering and approximations of aggregate geometry.
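The analysis-by-synthesis loop has a very small core, sketched below with a deliberately trivial stand-in "renderer" and hand-derived gradients so that it runs anywhere. The real system differentiates an actual rasterizer with respect to mesh and material parameters and uses a modern optimizer such as Adam; everything in this sketch is a placeholder illustrating only the structure of the optimization.

```python
PATTERN = [0.1, 0.4, 0.8, 0.3]               # fixed stand-in scene content

def render(params):
    """Trivial differentiable 'renderer': brightness scale and offset."""
    scale, offset = params
    return [scale * p + offset for p in PATTERN]

reference = [0.25, 0.85, 1.65, 0.65]         # made with scale=2, offset=0.05
params, lr = [1.0, 0.0], 0.1
for _ in range(2000):
    diff = [i - r for i, r in zip(render(params), reference)]
    # Gradient of the L2 image loss with respect to (scale, offset).
    params[0] -= lr * 2.0 * sum(d * p for d, p in zip(diff, PATTERN))
    params[1] -= lr * 2.0 * sum(diff)
print(params)  # converges to approximately [2.0, 0.05]
```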
Modeling Surround-aware Contrast Sensitivity (The Eurographics Association, 2021)
Yi, Shinyoung; Jeon, Daniel S.; Serrano, Ana; Jeong, Se-Yoon; Kim, Hui-Yong; Gutierrez, Diego; Kim, Min H.; Bousseau, Adrien and McGuire, Morgan
Despite advances in display technology, many existing applications rely on psychophysical datasets of human perception gathered using older, sometimes outdated displays. As a result, there exists the underlying assumption that such measurements can be carried over to the new viewing conditions of more modern technology. We have conducted a series of psychophysical experiments to explore contrast sensitivity using a state-of-the-art HDR display, taking into account not only the spatial frequency and luminance of the stimuli but also their surrounding luminance levels. From our data, we have derived a novel surround-aware contrast sensitivity function (CSF), which predicts human contrast sensitivity more accurately. We additionally provide a practical version that retains the benefits of our full model, while enabling easy backward compatibility and consistently producing good results across many existing applications that make use of CSF models. We show examples of effective HDR video compression using a transfer function derived from our CSF, tone mapping, and improved accuracy in visual difference prediction.
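As background for readers unfamiliar with CSFs, the sketch below evaluates the classic band-pass luminance CSF of Mannos and Sakrison (1974), the kind of fixed-condition model that the surround-aware CSF above generalizes; it is not the paper's fitted function.

```python
import math

def csf_mannos_sakrison(f_cpd):
    """Contrast sensitivity at spatial frequency f (cycles per degree),
    peaking near 8 cpd; insensitive to luminance and surround, which is
    exactly the limitation the surround-aware model addresses."""
    a = 0.0192 + 0.114 * f_cpd
    return 2.6 * a * math.exp(-((0.114 * f_cpd) ** 1.1))

for f in (1, 2, 4, 8, 16, 32):
    print(f, round(csf_mannos_sakrison(f), 3))
```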