42-Issue 4
Browsing 42-Issue 4 by Subject "Computing methodologies"
Item: Accelerating Hair Rendering by Learning High-Order Scattered Radiance (The Eurographics Association and John Wiley & Sons Ltd., 2023)
KT, Aakash; Jarabo, Adrian; Aliaga, Carlos; Chiang, Matt Jen-Yuan; Maury, Olivier; Hery, Christophe; Narayanan, P. J.; Nam, Giljoo; Ritschel, Tobias; Weidlich, Andrea

Efficiently and accurately rendering hair while accounting for multiple scattering is a challenging open problem. Path tracing in hair takes a long time to converge, while other techniques are either too approximate despite being computationally expensive, or make assumptions about the scene. We present a technique to infer the higher-order scattering in hair in constant time within the path-tracing framework, achieving better computational efficiency. Our method makes no assumptions about the scene and provides control over the renderer's bias and speedup. We achieve this by training a small multilayer perceptron (MLP) to learn the higher-order radiance online, while rendering progresses. We describe how to robustly train this network and thoroughly analyze the resulting renderer's characteristics. We evaluate our method on various hairstyles and lighting conditions, and compare it against a recent learning-based and a traditional real-time hair rendering method, demonstrating better quantitative and qualitative results. Our method achieves a significant improvement in speed with respect to path tracing, reducing run time by 40%-70% while introducing only a small amount of bias.
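To make the online-learning idea concrete, here is a minimal sketch: a small MLP regressing higher-order scattered radiance from per-sample features, updated with one gradient step per batch while rendering progresses. The feature layout (hit position plus direction), network width, and training schedule are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

# Minimal sketch: a small MLP that learns higher-order scattered radiance
# online from path-traced samples. Inputs, width, and schedule are
# illustrative assumptions, not the paper's exact design.
class RadianceMLP(nn.Module):
    def __init__(self, in_dim=6, hidden=64, out_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

model = RadianceMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def online_update(features, targets):
    # One gradient step per batch of samples gathered during rendering
    # (hypothetical interface): features (N, 6) = hit position + direction,
    # targets (N, 3) = Monte Carlo estimates of higher-order RGB radiance.
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(features), targets)
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random stand-in data:
print(online_update(torch.rand(256, 6), torch.rand(256, 3)))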
Item: A Hyperspectral Space of Skin Tones for Inverse Rendering of Biophysical Skin Properties (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Aliaga, Carlos; Xia, Mengqi; Xie, Hao; Jarabo, Adrian; Braun, Gustav; Hery, Christophe; Ritschel, Tobias; Weidlich, Andrea

We present a method for estimating the main properties of human skin, leveraging a hyperspectral dataset of skin tones synthetically generated through a biophysical layered skin model and Monte Carlo light transport simulations. Our approach learns the mapping between skin parameters and diffuse skin reflectance in this space through an encoder-decoder network. We assess performance on RGB and on spectral reflectance up to 1 µm, allowing the model to retrieve both visible and near-infrared information. Instead of restricting the parameters to values in the ranges reported in the medical literature, we allow the model to exceed those ranges to gain the expressiveness needed to recover outliers such as beards, eyebrows, rashes, and other imperfections. The continuity of our albedo space makes it possible to recover smooth textures of skin properties, enabling reflectance manipulation through meaningful edits of the skin properties. The space is robust under different illumination conditions, and shows high spectral similarity to the largest current datasets of spectral measurements of real human skin while expanding their gamut.

Item: Interactive Control over Temporal Consistency while Stylizing Video Streams (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Shekhar, Sumit; Reimann, Max; Hilscher, Moritz; Semmo, Amir; Döllner, Jürgen; Trapp, Matthias; Ritschel, Tobias; Weidlich, Andrea

Image stylization has seen significant advancement and widespread interest over the years, leading to the development of a multitude of techniques. Extending these stylization techniques, such as Neural Style Transfer (NST), to videos is often achieved by applying them on a per-frame basis. However, per-frame stylization usually lacks temporal consistency, which manifests as undesirable flickering artifacts. Most existing approaches for enforcing temporal consistency suffer from one or more of the following drawbacks: they (1) are only suitable for a limited range of techniques, (2) do not support online processing as they require the complete video as input, (3) cannot provide consistency for the task of stylization, or (4) do not provide interactive consistency control. Domain-agnostic techniques for temporal consistency aim to eradicate flickering completely but typically disregard aesthetic aspects. For stylization tasks, however, consistency control is an essential requirement, as a certain amount of flickering adds to the artistic look and feel. Moreover, making this control interactive is paramount from a usability perspective. To meet these requirements, we propose an approach that stylizes video streams in real time at full HD resolution while providing interactive consistency control. We develop a lightweight optical-flow network that operates at 80 frames per second (FPS) on desktop systems with sufficient accuracy. Further, we employ an adaptive combination of local and global consistency features and enable interactive selection between them. Objective and subjective evaluations demonstrate that our method is superior to state-of-the-art video consistency approaches.
maxreimann.github.io/stream-consistency
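The consistency control described above can be pictured as a flow-based blend between the current per-frame stylization and the warped previous output. The sketch below shows only this generic blend with a user-tunable consistency weight and an occlusion mask; the paper's adaptive combination of local and global consistency features is more elaborate, and the function and argument names are assumptions for illustration.

```python
import numpy as np

def blend_consistent(stylized, warped_prev, valid_mask, consistency):
    # Blend the current per-frame stylization with the flow-warped previous
    # output. consistency in [0, 1]: 0 keeps the per-frame result (maximum
    # flicker, maximum style fidelity), 1 maximizes temporal stability.
    # valid_mask is 1 where the backward warp is reliable, 0 in disoccluded
    # regions, where we fall back to the current frame.
    w = consistency * valid_mask[..., None]
    return w * warped_prev + (1.0 - w) * stylized

# Toy usage: 4x4 RGB frames, fully valid flow.
cur = np.random.rand(4, 4, 3)
prev_warped = np.random.rand(4, 4, 3)
out = blend_consistent(cur, prev_warped, np.ones((4, 4)), consistency=0.7)
```

Exposing `consistency` as an interactive slider matches the usability goal stated in the abstract: the user trades flicker against stylization fidelity on the fly.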
Item: Iridescent Water Droplets Beyond Mie Scattering (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Xia, Mengqi (Mandy); Walter, Bruce; Marschner, Steve; Ritschel, Tobias; Weidlich, Andrea

Looking at a cup of hot tea, an observer can see color patterns and granular textures both on the water surface and in the steam. Motivated by this example, we model the appearance of iridescent water droplets. Mie scattering describes the scattering of light waves by individual spherical particles and is the building block for both effects, but we show that other mechanisms must also be considered to faithfully reproduce the appearance. Iridescence on the water surface is caused by droplets levitating above the surface, and interference between light scattered by drops and light reflected by the water surface, known as Quetelet scattering, is essential to producing the color. We propose a model, new to computer graphics, for rendering this phenomenon, which we validate against photographs. For iridescent steam, we show that variation in droplet size is essential to the characteristic color patterns. We build a droplet growth model and apply it as a post-processing step to an existing computer graphics fluid simulation to compute collections of particles for rendering. We significantly accelerate the rendering of sparse particles with motion blur by intersecting rays with particle trajectories, blending contributions along viewing rays. Our model reproduces the distinctive color patterns correlated with the steam flow. For both effects, we instantiate individual droplets and render them explicitly, since the granularity of droplets is readily observed in reality, and demonstrate that Mie scattering alone cannot reproduce the visual appearance.

Item: LoCoPalettes: Local Control for Palette-based Image Editing (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Chao, Cheng-Kang Ted; Klein, Jason; Tan, Jianchao; Echevarria, Jose; Gingold, Yotam; Ritschel, Tobias; Weidlich, Andrea

Palette-based image editing takes advantage of the fact that color palettes are intuitive abstractions of images. They allow users to make global edits to an image by adjusting a small set of colors. Many algorithms have been proposed to compute color palettes and corresponding mixing weights. However, in many cases, especially in complex scenes, a single global palette may not adequately represent all potential objects of interest, and edits made with a single palette cannot be localized to specific semantic regions. We introduce an adaptive solution to this usability problem, based on optimizing RGB palette colors to achieve arbitrary image-space constraints and automatically splitting the image into semantic sub-regions with more representative local palettes when the constraints cannot be satisfied. Our algorithm automatically decomposes a given image into a semantic hierarchy of soft segments. Difficult-to-achieve edits become straightforward with our method. Our results show the flexibility, control, and generality of our method.

Item: Markov Chain Mixture Models for Real-Time Direct Illumination (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Dittebrandt, Addis; Schüßler, Vincent; Hanika, Johannes; Herholz, Sebastian; Dachsbacher, Carsten; Ritschel, Tobias; Weidlich, Andrea

We present a novel technique to efficiently render complex direct illumination in real time. It is based on a spatio-temporal randomized mixture model of von Mises-Fisher (vMF) distributions in screen space. For every pixel, we determine the vMF distribution to sample from using a Markov chain process that is targeted to capture important features of the integrand. This avoids the storage overhead of finite-component deterministic mixture models, for which, in addition, determining the optimal component count is challenging. We use stochastic multiple importance sampling (SMIS) to be independent of the equilibrium distribution of our Markov chain process, since it cancels out in the estimator. Further, we use the same sample to advance the Markov chain and to construct the SMIS estimator; local Markov chain state permutations avoid the resulting bias due to dependent sampling. As a consequence, we require only one ray per sample and pixel. We evaluate our technique using implementations in a research renderer as well as in a classic game engine with highly dynamic content. Our results show that it is efficient and quickly readapts to dynamic conditions. We compare to spatio-temporal resampling (ReSTIR), which can suffer from correlation artifacts due to its non-adapting candidate distributions, which can deviate strongly from the integrand. While we focus on direct illumination, our approach is more widely applicable, which we demonstrate by rendering caustics.
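As background for the mixture model above, the following sketch draws one direction from a single von Mises-Fisher lobe on the sphere, the per-pixel building block the method samples from. This is the standard vMF sampling routine (inverse-CDF in the lobe's local frame); the paper's Markov-chain updates, state permutations, and SMIS estimator are not shown.

```python
import numpy as np

def sample_vmf(mu, kappa, rng):
    # Draw one direction from a von Mises-Fisher distribution on the sphere.
    # mu: unit mean direction, kappa: concentration (> 0).
    u, v = rng.random(), rng.random()
    # Inverse-CDF sampling of cos(theta) around the mean direction.
    w = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa
    phi = 2.0 * np.pi * v
    r = np.sqrt(max(0.0, 1.0 - w * w))
    local = np.array([r * np.cos(phi), r * np.sin(phi), w])
    # Build an orthonormal frame around mu and transform to world space.
    a = np.array([1.0, 0.0, 0.0]) if abs(mu[2]) > 0.9 else np.array([0.0, 0.0, 1.0])
    t = np.cross(a, mu)
    t /= np.linalg.norm(t)
    b = np.cross(mu, t)
    return local[0] * t + local[1] * b + local[2] * mu

# Toy usage: a tight lobe around +z yields directions near (0, 0, 1).
rng = np.random.default_rng(7)
d = sample_vmf(np.array([0.0, 0.0, 1.0]), kappa=50.0, rng=rng)
```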
Item: NEnv: Neural Environment Maps for Global Illumination (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Rodriguez-Pardo, Carlos; Fabre, Javier; Garces, Elena; Lopez-Moreno, Jorge; Ritschel, Tobias; Weidlich, Andrea

Environment maps are commonly used to represent and compute far-field illumination in virtual scenes. However, they are expensive to evaluate and sample from, limiting their applicability to real-time rendering. Previous methods have focused on compression through spherical-domain approximations, or on learning priors for natural, daylight illumination. These hinder both accuracy and generality, and do not provide the probability information required for importance sampling in Monte Carlo integration. We propose NEnv, a fully-differentiable deep-learning method capable of compressing and learning to sample from a single environment map. NEnv is composed of two different neural networks: a normalizing flow, which maps samples from uniform distributions to the probability density of the illumination while also providing their corresponding probabilities, and an implicit neural representation, which compresses the environment map into an efficient differentiable function. The computation time of environment samples with NEnv is two orders of magnitude less than with traditional methods. NEnv makes no assumptions regarding the content (e.g. natural illumination), thus achieving higher generality than previous learning-based approaches. We share our implementation and a diverse dataset of trained neural environment maps, which can be easily integrated into existing rendering engines.

Item: Neural Free-Viewpoint Relighting for Glossy Indirect Illumination (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Raghavan, Nithin; Xiao, Yan; Lin, Kai-En; Sun, Tiancheng; Bi, Sai; Xu, Zexiang; Li, Tzu-Mao; Ramamoorthi, Ravi; Ritschel, Tobias; Weidlich, Andrea

Precomputed Radiance Transfer (PRT) remains an attractive solution for real-time rendering of complex light transport effects such as glossy global illumination. After precomputation, we can relight the scene with new environment maps while changing the viewpoint in real time. However, practical PRT methods are usually limited to low-frequency spherical harmonic lighting. All-frequency techniques using wavelets are promising but have so far had little practical impact. The curse of dimensionality and much higher data requirements have typically limited them to relighting with a fixed view or to direct lighting only with triple-product integrals. In this paper, we demonstrate a hybrid neural-wavelet PRT solution for high-frequency indirect illumination, including glossy reflection, for relighting with changing view. Specifically, we represent the light transport function in the Haar wavelet basis. For global illumination, we learn the wavelet transport using a small multi-layer perceptron (MLP) applied to a feature field as a function of spatial location and wavelet index, with reflected direction and material parameters as additional MLP inputs. We optimize the feature field (compactly represented by a tensor decomposition) and the MLP parameters from multiple images of the scene under different lighting and viewing conditions. We demonstrate real-time (512×512 at 24 FPS, 800×600 at 13 FPS) precomputed rendering of challenging scenes involving view-dependent reflections and even caustics.
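The relighting step in wavelet-based PRT reduces to a dot product between a pixel's (sparse) transport coefficients and the lighting's wavelet coefficients. Below is a 1D orthonormal Haar transform plus that dot product, with random stand-in data. It is a toy analogue only: the paper works with 2D Haar wavelets on environment maps and predicts the transport coefficients with an MLP over a feature field, neither of which is shown here.

```python
import numpy as np

def haar_1d(x):
    # Orthonormal 1D Haar wavelet transform (length must be a power of two).
    x = x.astype(np.float64)
    n = len(x)
    while n > 1:
        half = n // 2
        avg = (x[:n:2] + x[1:n:2]) / np.sqrt(2.0)
        diff = (x[:n:2] - x[1:n:2]) / np.sqrt(2.0)
        x[:half], x[half:n] = avg, diff
        n = half
    return x

# PRT relighting: outgoing radiance at a pixel is the dot product between
# its sparse transport coefficients and the light's wavelet coefficients.
light = np.random.rand(64)                                    # stand-in lighting
light_w = haar_1d(light)
transport_w = np.random.rand(64) * (np.random.rand(64) < 0.1) # sparse stand-in row
pixel_radiance = transport_w @ light_w
```

Sparsity in `transport_w` is what makes the all-frequency approach tractable: only a small fraction of wavelet coefficients contributes per pixel.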
Item: Practical Acquisition of Shape and Plausible Appearance of Reflective and Translucent Objects (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Lin, Arvin; Lin, Yiming; Ghosh, Abhijeet; Ritschel, Tobias; Weidlich, Andrea

We present a practical method for acquiring the shape and plausible appearance of reflective and translucent objects for realistic rendering and relighting applications. Such objects are extremely challenging to scan with existing capture setups and have previously required complex lightstage hardware emitting continuous illumination. We instead employ a practical capture setup consisting of a set of desktop LCD screens that illuminate such objects with piece-wise continuous illumination. We employ phase-shifted sinusoidal illumination to estimate high-quality photometric normals and transmission vectors, along with diffuse-specular separated reflectance and transmission maps, for realistic relighting. We further employ neural in-painting to fill gaps in our measurements caused by gaps in the screen illumination, and a novel NeuS-based neural rendering that combines the shape and reflectance maps acquired from multiple viewpoints for high-quality 3D surface geometry reconstruction and plausible, realistic rendering of the complex light transport in such objects.
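Phase-shifted sinusoidal illumination of the kind mentioned above is commonly decoded with the classic three-step formulas, recovering per-pixel offset, amplitude, and phase from three captures. The sketch below shows that textbook decoding only; the paper's screen geometry, normal and transmission estimation, and diffuse-specular separation go well beyond it.

```python
import numpy as np

def decode_three_step(i1, i2, i3):
    # Classic three-step phase-shifting decode for sinusoidal illumination
    # with phase shifts of -120, 0, and +120 degrees: recovers per-pixel
    # offset, modulation amplitude, and phase.
    offset = (i1 + i2 + i3) / 3.0
    phase = np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
    amplitude = np.sqrt(3.0 * (i1 - i3) ** 2 + (2.0 * i2 - i1 - i3) ** 2) / 3.0
    return offset, amplitude, phase

# Toy check with one synthetic pixel: I_k = O + A * cos(phi + delta_k).
A, O, phi = 0.5, 0.3, 1.1
deltas = np.array([-2 * np.pi / 3, 0.0, 2 * np.pi / 3])
i1, i2, i3 = O + A * np.cos(phi + deltas)
print(decode_three_step(i1, i2, i3))  # ~ (0.3, 0.5, 1.1)
```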
Item: A Practical and Hierarchical Yarn-based Shading Model for Cloth (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Zhu, Junqiu; Montazeri, Zahra; Aubry, Jean-Marie; Yan, Ling-Qi; Weidlich, Andrea; Ritschel, Tobias; Weidlich, Andrea

Realistic cloth rendering is a longstanding challenge in computer graphics due to the intricate geometry and hierarchical structure of cloth: fibers form plies, which in turn are combined into yarns, which are then woven or knitted into fabrics. Previous fiber-based models have achieved high-quality close-up rendering, but they suffer from high computational cost, which limits their practicality. In this paper, we propose a novel hierarchical model that analytically aggregates light simulation at the fiber level by building on dual-scattering theory. Based on this, we can perform an efficient simulation of ply and yarn shading. Compared to previous methods, our approach is faster and uses less memory while preserving similar accuracy; we demonstrate both through comparisons with existing fiber-based shading models. Our yarn shading model can be applied to curves or surfaces, making it highly versatile for cloth shading. This duality, paired with its simplicity and flexibility, makes the model particularly useful for film and game production.

Item: PVP: Personalized Video Prior for Editable Dynamic Portraits using StyleGAN (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Lin, Kai-En; Trevithick, Alex; Cheng, Keli; Sarkis, Michel; Ghafoorian, Mohsen; Bi, Ning; Reitmayr, Gerhard; Ramamoorthi, Ravi; Ritschel, Tobias; Weidlich, Andrea

Portrait synthesis creates realistic digital avatars that enable users to interact with others in a compelling way. Recent advances in StyleGAN and its extensions have shown promising results in synthesizing photorealistic and accurate reconstructions of human faces. However, previous methods often focus on frontal face synthesis, and most cannot handle large head rotations due to the training data distribution of StyleGAN. In this work, our goal is to take as input a monocular video of a face and create an editable dynamic portrait that can handle extreme head poses. The user can create novel viewpoints, edit the appearance, and animate the face. Our method utilizes pivotal tuning inversion (PTI) to learn a personalized video prior from a monocular video sequence. We can then feed pose and expression coefficients to MLPs and manipulate the latent vectors to synthesize different viewpoints and expressions of the subject. We also propose novel loss functions to further disentangle pose and expression in the latent space. Our algorithm shows much better performance than previous approaches on monocular video datasets, and it runs in real time at 54 FPS on an RTX 3080.

Item: Ray-aligned Occupancy Map Array for Fast Approximate Ray Tracing (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Zeng, Zheng; Xu, Zilin; Wang, Lu; Wu, Lifan; Yan, Ling-Qi; Ritschel, Tobias; Weidlich, Andrea

We present a new software ray tracing solution that efficiently computes visibilities in dynamic scenes. We first introduce a novel scene representation: the ray-aligned occupancy map array (ROMA), generated by rasterizing the dynamic scene once per frame. Our key contribution is a fast, low-divergence tracing method that computes visibilities in constant time, without constructing and traversing traditional intersection acceleration data structures such as BVHs. To further improve accuracy and alleviate aliasing, we use a spatio-temporal scheme to stochastically distribute the candidate ray samples. We demonstrate the practicality of our method by integrating it into a modern real-time renderer and showing better performance than existing techniques based on distance fields (DFs). Our method is free of the typical artifacts caused by incomplete scene information, and is about 2.5×-10× faster than generating and tracing DFs at the same resolution and equal storage.
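A constant-time visibility query over an occupancy bitmask, as used for ROMA above, can be pictured as a single bit-range test: if an integer per texel marks occupied depth slices along the map's ray direction, visibility over a depth interval is one mask-and-test instead of a BVH traversal. The slice count, layout, and query interface below are illustrative assumptions, not the paper's actual data structure.

```python
def visible(occupancy_bits, t0, t1, num_slices=64):
    # Constant-time visibility over the normalized depth range [t0, t1):
    # occupancy_bits has bit k set iff depth slice k along the map's ray
    # direction contains geometry. A single mask-and-test replaces
    # acceleration-structure traversal. Hypothetical interface for
    # illustration only.
    lo = max(int(t0 * num_slices), 0)
    hi = min(int(t1 * num_slices), num_slices)
    if hi <= lo:
        return True
    range_mask = ((1 << (hi - lo)) - 1) << lo
    return (occupancy_bits & range_mask) == 0

# Toy usage: geometry occupies depth slices 10-12.
bits = (1 << 10) | (1 << 11) | (1 << 12)
print(visible(bits, 0.00, 0.20))  # False: slices 10 and 11 lie in range
print(visible(bits, 0.25, 0.50))  # True: slices 16-31 are empty
```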