Browsing by Author "Aliaga, Carlos"
Now showing 1 - 3 of 3
Item: Accelerating Hair Rendering by Learning High-Order Scattered Radiance (The Eurographics Association and John Wiley & Sons Ltd., 2023)
KT, Aakash; Jarabo, Adrian; Aliaga, Carlos; Chiang, Matt Jen-Yuan; Maury, Olivier; Hery, Christophe; Narayanan, P. J.; Nam, Giljoo; Ritschel, Tobias; Weidlich, Andrea
Efficiently and accurately rendering hair while accounting for multiple scattering is a challenging open problem. Path tracing in hair takes a long time to converge, while other techniques are either too approximate despite remaining computationally expensive, or make restrictive assumptions about the scene. We present a technique to infer higher-order scattering in hair in constant time within the path-tracing framework, achieving better computational efficiency. Our method makes no assumptions about the scene and provides control over the renderer's bias and speedup. We achieve this by training a small multilayer perceptron (MLP) to learn the higher-order radiance online, while rendering progresses. We describe how to robustly train this network and thoroughly analyze the resulting renderer's characteristics. We evaluate our method on various hairstyles and lighting conditions, and compare it against a recent learning-based and a traditional real-time hair rendering method, demonstrating better quantitative and qualitative results. Our method significantly improves on the speed of path tracing, reducing run time by 40%-70% while introducing only a small amount of bias.
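For a concrete picture of the online-learning idea in the abstract above, a minimal sketch follows. The abstract only states that a small MLP learns higher-order radiance online while rendering progresses; the PyTorch framework, the 9-dimensional feature vector (hit position plus view and light directions), the network size, and the cutoff order k are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (assumptions, not the paper's implementation): a small MLP
# trained online, while rendering progresses, to predict higher-order
# scattered radiance so that paths can be terminated after k bounces.
import torch
import torch.nn as nn

class HigherOrderRadianceMLP(nn.Module):
    def __init__(self, in_dim=9, hidden=64, out_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),  # RGB radiance
        )

    def forward(self, x):
        return self.net(x)

model = HigherOrderRadianceMLP()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(features, radiance):
    """One online update from samples gathered during path tracing.
    features: (N, 9) tensor, e.g. hit position + view dir + light dir (assumed).
    radiance: (N, 3) Monte Carlo estimates of scattering beyond order k."""
    pred = model(features)
    loss = nn.functional.mse_loss(pred, radiance)
    optim.zero_grad()
    loss.backward()
    optim.step()
    return loss.item()

def shade_after_k_bounces(features):
    """At render time, replace path continuation beyond order k with a
    constant-time network query; this is where the bias/speedup trade-off lives."""
    with torch.no_grad():
        return model(features)
```

The choice of k controls how much of the transport is path traced exactly versus predicted by the network, which is one plausible reading of the "control over the renderer's bias and speedup" mentioned in the abstract.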
Item: Human Hair Inverse Rendering using Multi-View Photometric Data (The Eurographics Association, 2021)
Sun, Tiancheng; Nam, Giljoo; Aliaga, Carlos; Hery, Christophe; Ramamoorthi, Ravi; Bousseau, Adrien and McGuire, Morgan
We introduce a hair inverse rendering framework to reconstruct high-fidelity 3D geometry of human hair, as well as its reflectance, which can be readily used for photorealistic rendering of hair. We take multi-view photometric data as input, i.e., a set of images taken from various viewpoints under different lighting conditions. Our method consists of two stages. First, we propose a novel solution for line-based multi-view stereo that yields accurate hair geometry from multi-view photometric data; specifically, a per-pixel lightcode efficiently solves the hair correspondence matching problem. This solution enables accurate and dense strand reconstruction from fewer cameras than state-of-the-art work. In the second stage, we estimate hair reflectance properties from the multi-view photometric data, using a simplified BSDF model of hair strands for realistic appearance reproduction. Based on the 3D geometry of the strands, we fit the longitudinal roughness and find the single-strand color. We show that our method faithfully reproduces the appearance of human hair and provides realism for digital humans, and we demonstrate its accuracy and efficiency using photorealistic synthetic hair rendering data.

Item: A Hyperspectral Space of Skin Tones for Inverse Rendering of Biophysical Skin Properties (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Aliaga, Carlos; Xia, Mengqi; Xie, Hao; Jarabo, Adrian; Braun, Gustav; Hery, Christophe; Ritschel, Tobias; Weidlich, Andrea
We present a method for estimating the main properties of human skin, leveraging a hyperspectral dataset of skin tones synthetically generated through a biophysical layered skin model and Monte Carlo light transport simulations. Our approach learns the mapping between skin parameters and diffuse skin reflectance in this space through an encoder-decoder network. We assess performance for both RGB and spectral reflectance up to 1 µm, allowing the model to recover visible as well as near-infrared information. Instead of restricting the parameters to the value ranges reported in the medical literature, we allow the model to exceed those ranges to gain the expressiveness needed to recover outliers such as beards, eyebrows, rashes, and other imperfections. The continuity of our albedo space makes it possible to recover smooth textures of skin properties, enabling reflectance manipulation through meaningful edits of the skin properties. The space is robust under different illumination conditions, and shows high spectral similarity to the largest existing datasets of spectral measurements of real human skin while expanding their gamut.
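To make the encoder-decoder idea in the last abstract more tangible, here is a minimal sketch, not the paper's architecture. The biophysical parameter names (melanin, hemoglobin, oxygenation, epidermis thickness), the layer sizes, and the 64-sample spectral resolution are assumptions for illustration; the abstract only states that a network links skin parameters and diffuse spectral reflectance, trained on a simulated hyperspectral dataset.

```python
# Minimal sketch (assumptions, not the paper's architecture): an encoder-decoder
# pair linking biophysical skin parameters to spectral diffuse reflectance,
# trained supervised on a simulated hyperspectral dataset.
import torch
import torch.nn as nn

N_PARAMS = 4        # e.g. melanin, hemoglobin, oxygenation, epidermis thickness (assumed)
N_WAVELENGTHS = 64  # spectral reflectance samples up to ~1 um (assumed)

def mlp(in_dim, out_dim, hidden=128):
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )

encoder = mlp(N_WAVELENGTHS, N_PARAMS)   # reflectance -> skin parameters (inverse rendering)
decoder = mlp(N_PARAMS, N_WAVELENGTHS)   # skin parameters -> reflectance (forward model)

optim = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

def train_step(reflectance, params):
    """One supervised step on a batch from the simulated dataset:
    reflectance (B, N_WAVELENGTHS), params (B, N_PARAMS)."""
    pred_params = encoder(reflectance)
    recon = decoder(pred_params)
    loss = (nn.functional.mse_loss(pred_params, params)
            + nn.functional.mse_loss(recon, reflectance))
    optim.zero_grad()
    loss.backward()
    optim.step()
    return loss.item()

def edit_skin(reflectance, param_index, delta):
    """Encode, tweak one biophysical parameter, decode back to reflectance."""
    with torch.no_grad():
        p = encoder(reflectance)
        p[:, param_index] += delta
        return decoder(p)
```

The edit_skin helper mirrors the "meaningful edits of the skin properties" mentioned in the abstract: reflectance is mapped into the parameter space, one parameter is perturbed, and the result is decoded back to a plausible reflectance.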