Rendering 2020 - DL-only Track
This collection contains six items, listed by title.
Item: Joint SVBRDF Recovery and Synthesis From a Single Image using an Unsupervised Generative Adversarial Network (The Eurographics Association, 2020)
Authors: Zhao, Yezi; Wang, Beibei; Xu, Yanning; Zeng, Zheng; Wang, Lu; Holzschuch, Nicolas
Editors: Dachsbacher, Carsten; Pharr, Matt

We want to recreate spatially-varying bi-directional reflectance distribution functions (SVBRDFs) from a single image. Producing SVBRDFs from single images allows designers to incorporate many new materials into their virtual scenes, increasing realism. A single image contains incomplete information about the SVBRDF, which makes reconstruction difficult. Existing algorithms can produce high-quality SVBRDFs from one or a few input photographs using supervised deep learning, but the learning step relies on a huge dataset containing both input photographs and ground-truth SVBRDF maps; this is a weakness, as ground-truth maps are not easy to acquire. For practical use, it is also important to produce large SVBRDF maps. Existing algorithms rely on a separate texture-synthesis step to generate these large maps, which leads to a loss of consistency between the generated SVBRDF maps. In this paper, we address both issues simultaneously. We present an unsupervised generative adversarial neural network that performs both SVBRDF capture from a single image and synthesis at the same time: from a low-resolution input image, we generate an SVBRDF at a resolution much larger than that of the input. We train a generative adversarial network (GAN) to produce SVBRDF maps that have both a large spatial extent and detailed texels, and we employ a two-stream generator that divides the four maps into two groups (normal and roughness in one, diffuse and specular in the other) to better optimize them. In the end, our method generates high-quality, large-scale SVBRDF maps from a single input photograph with repetitive structures, and provides higher-quality rendering results with more detail than previous work. Each input requires individual training, which takes about 3 hours.

Item: Multi-Scale Appearance Modeling of Granular Materials with Continuously Varying Grain Properties (The Eurographics Association, 2020)
Authors: Zhang, Cheng; Zhao, Shuang
Editors: Dachsbacher, Carsten; Pharr, Matt

Many real-world materials such as sand, snow, salt, and rice are composed of large collections of grains. Previous multi-scale rendering of granular materials requires precomputing light transport per grain and has difficulty handling materials whose grain properties vary continuously. Further, existing methods usually describe granular materials either by explicitly storing individual grains, which becomes hugely data-intensive for large objects, or by replicating small blocks of grains, which lacks the flexibility to describe materials whose grains are distributed non-uniformly. We introduce a new method to efficiently render granular materials with continuously varying grain optical properties. This is achieved using a novel symbolic and differentiable simulation of light transport during precomputation. Additionally, we introduce a new representation to depict large-scale granular materials with complex grain distributions: after constructing a template tile in a preprocess, we adapt it at render time to generate large quantities of grains with user-specified distributions. We demonstrate the effectiveness of our techniques on examples with a variety of grain properties and distributions.
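To make the two-stream generator idea from the first item above concrete, here is a minimal PyTorch sketch. All layer sizes, module names, and the overall depth are hypothetical; the paper's actual architecture, adversarial losses, per-input training, and upsampling to a larger output resolution are not reproduced here.

```python
# Illustrative sketch only: a two-stream generator in the spirit of the
# SVBRDF paper above. One stream predicts normal + roughness, the other
# diffuse + specular, so the two groups of maps are optimized separately.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Simple conv -> norm -> activation block shared by both streams.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.LeakyReLU(0.2),
    )

class TwoStreamGenerator(nn.Module):
    """Maps an input photograph to four SVBRDF maps via two streams."""
    def __init__(self, feat=64):
        super().__init__()
        self.encoder = nn.Sequential(conv_block(3, feat), conv_block(feat, feat))
        # Stream A: normal (3 channels) + roughness (1 channel).
        self.normal_rough = nn.Sequential(
            conv_block(feat, feat), nn.Conv2d(feat, 4, 3, padding=1))
        # Stream B: diffuse (3 channels) + specular (3 channels).
        self.diff_spec = nn.Sequential(
            conv_block(feat, feat), nn.Conv2d(feat, 6, 3, padding=1))

    def forward(self, photo):
        feats = self.encoder(photo)
        nr = self.normal_rough(feats)
        ds = self.diff_spec(feats)
        normal, roughness = nr[:, :3], nr[:, 3:4]
        diffuse, specular = ds[:, :3], ds[:, 3:]
        return normal, roughness, diffuse, specular

# Usage with a random stand-in photograph; a real output would be larger
# than the input, which this simplified sketch omits.
maps = TwoStreamGenerator()(torch.rand(1, 3, 256, 256))
```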
Item: Real-time Monte Carlo Denoising with the Neural Bilateral Grid (The Eurographics Association, 2020)
Authors: Meng, Xiaoxu; Zheng, Quan; Varshney, Amitabh; Singh, Gurprit; Zwicker, Matthias
Editors: Dachsbacher, Carsten; Pharr, Matt

Real-time denoising for Monte Carlo rendering remains a critical challenge, given the demanding requirements of both high fidelity and low computation time. In this paper, we propose a novel and practical deep-learning approach to robustly denoise Monte Carlo images rendered at sampling rates as low as a single sample per pixel (1 spp). Such low rates cause severe noise, and previous techniques strongly compromise final quality to maintain real-time denoising speed. We develop an efficient convolutional neural network architecture that learns to denoise noisy inputs in a data-dependent bilateral space. Our network learns to generate a guide image, used first to splat the noisy data into the grid and then to slice the grid to read out the denoised data. To seamlessly integrate bilateral grids into our trainable denoising pipeline, we leverage a differentiable bilateral grid, called the neural bilateral grid, which enables end-to-end training. We also show how denoising quality can be further improved using a hierarchy of multi-scale bilateral grids. Our experimental results demonstrate that this approach robustly denoises 1-spp noisy input images at real-time frame rates (a few milliseconds per frame). At such low sampling rates, our approach outperforms state-of-the-art techniques based on kernel-prediction networks in both quality and speed, and it yields significantly better quality than the state-of-the-art feature-regression technique.

Item: Rendering 2020 DL Track: Frontmatter (The Eurographics Association, 2020)
Authors: Dachsbacher, Carsten; Pharr, Matt
Editors: Dachsbacher, Carsten; Pharr, Matt

Item: Temporal Normal Distribution Functions (The Eurographics Association, 2020)
Authors: Tessari, Lorenzo; Hanika, Johannes; Dachsbacher, Carsten; Droske, Marc
Editors: Dachsbacher, Carsten; Pharr, Matt

Specular aliasing can make seemingly simple scenes notoriously hard to render efficiently: small geometric features with high curvature and near-specular reflectance result in tiny lighting features that are difficult to resolve at low sample counts per pixel. LEAN and LEADR mapping can be used to convert geometric surface detail to anisotropic surface roughness in a preprocess. The problem is particularly apparent in scenes with fluid simulation, where fast-moving elements such as spray particles are typically represented as participating media in movie rendering. Both approaches, however, are only valid in the far-field regime, where the geometric detail is much smaller than a pixel, while the challenge of resolving highlights remains at the meso-scale. Fast motion and the relatively long shutter intervals common in movie production lead to strong variation of the surface normals seen under a pixel over time, aggravating the problem. Recent specular anti-aliasing approaches pre-integrate geometric curvature under the pixel footprint for one specific ray to achieve noise-free images at low sample counts. We extend these approaches to anisotropic surface roughness and account for the temporal surface-normal variation due to motion blur. We use temporal derivatives to approximate the distribution of surface normals seen under a pixel over the course of the shutter interval, and we discuss how this distribution can then be combined with the surface BSDF in a practical way.
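The splat-then-slice mechanism of the bilateral grid from the denoising item above can be illustrated with a small NumPy sketch. This is a simplified, non-differentiable stand-in: it uses nearest-cell splatting and slicing, a random guide, and averages within cells, whereas the paper's neural bilateral grid is differentiable, learns the guide image, and uses smooth interpolation. All sizes and names below are placeholders.

```python
# Minimal sketch (not the paper's implementation): splat noisy radiance into
# a bilateral grid using a guide image, then slice it back out per pixel.
import numpy as np

def splat(noisy, guide, sx=16, depth=8):
    # noisy: (H, W, 3) radiance; guide: (H, W) values in [0, 1].
    H, W, _ = noisy.shape
    gh, gw = H // sx + 1, W // sx + 1
    grid = np.zeros((gh, gw, depth, 3))
    weight = np.zeros((gh, gw, depth, 1))
    ys, xs = np.mgrid[0:H, 0:W]
    gy, gx = ys // sx, xs // sx
    gz = np.clip((guide * (depth - 1)).round().astype(int), 0, depth - 1)
    np.add.at(grid, (gy, gx, gz), noisy)     # accumulate radiance per cell
    np.add.at(weight, (gy, gx, gz), 1.0)     # count contributions per cell
    return grid / np.maximum(weight, 1e-8)   # average within each cell

def slice_grid(grid, guide, sx=16):
    # Read the denoised value back at each pixel (nearest-cell lookup;
    # the trainable version uses smooth, differentiable interpolation).
    H, W = guide.shape
    depth = grid.shape[2]
    ys, xs = np.mgrid[0:H, 0:W]
    gz = np.clip((guide * (depth - 1)).round().astype(int), 0, depth - 1)
    return grid[ys // sx, xs // sx, gz]

noisy = np.random.rand(64, 64, 3)   # stand-in for a 1-spp render
guide = np.random.rand(64, 64)      # stand-in for the learned guide image
denoised = slice_grid(splat(noisy, guide), guide)
```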
Item: Temporal Sample Reuse for Next Event Estimation and Path Guiding for Real-Time Path Tracing (The Eurographics Association, 2020)
Authors: Dittebrandt, Addis; Hanika, Johannes; Dachsbacher, Carsten
Editors: Dachsbacher, Carsten; Pharr, Matt

Good importance sampling is crucial for real-time path tracing, where only low sample budgets are possible. We present two efficient sampling techniques, tailored to massively parallel GPU path tracing, that improve next event estimation (NEE) for rendering with many light sources and the sampling of indirect illumination. As sampling densities need to vary spatially, we use an octree structure in world space and introduce algorithms to continuously adapt both the partitioning and the distribution of the sampling budget. Both sampling techniques exploit temporal coherence by reusing samples from the previous frame: for NEE, we collect sampled, unoccluded light sources and show how to deduplicate, but also diffuse, this information to efficiently sample light sources in the subsequent frame. For sampling indirect illumination, we present a compressed directional quadtree structure that is iteratively adapted toward high-energy directions using samples from the previous frame. Updating and rebuilding all data structures takes about 1 ms in our test scenes and adds about 6 ms at 1080p to the path-tracing time compared to using state-of-the-art light hierarchies and BRDF sampling. We show that this additional effort reduces noise, in terms of mean squared error, by at least one order of magnitude in many situations.
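As a rough sketch of the directional-quadtree adaptation described in the last item: deposit the previous frame's sample energy into a quadtree over the directional domain, then subdivide bright leaves and collapse subtrees that fell below a threshold. The paper's compressed, GPU-resident layout and exact update rules are not modeled here; all names, thresholds, and the pointer-based structure below are illustrative only.

```python
# Illustrative only: a pointer-based directional quadtree over [0, 1)^2,
# adapted from previous-frame samples toward high-energy directions.
import random
from dataclasses import dataclass, field

@dataclass
class Node:
    energy: float = 0.0
    children: list = field(default_factory=list)  # empty list => leaf

def deposit(root, u, v, energy):
    # Accumulate a sample's energy along the path to its containing leaf.
    node = root
    node.energy += energy
    while node.children:
        qu, qv = int(u >= 0.5), int(v >= 0.5)
        u, v = (u * 2.0) % 1.0, (v * 2.0) % 1.0
        node = node.children[qv * 2 + qu]
        node.energy += energy

def adapt(node, total, frac=0.01, depth=0, max_depth=8):
    # Collapse dim subtrees; split bright leaves (hypothetical thresholds).
    if node.children and node.energy < frac * total:
        node.children = []
    elif not node.children and node.energy >= frac * total and depth < max_depth:
        node.children = [Node(node.energy / 4.0) for _ in range(4)]
    for child in node.children:
        adapt(child, total, frac, depth + 1, max_depth)

# Stand-in for (direction u, v, energy) samples from the previous frame;
# a real renderer would also decay or reset energies between frames.
root = Node()
for u, v, e in [(random.random(), random.random(), random.random())
                for _ in range(10000)]:
    deposit(root, u, v, e)
adapt(root, root.energy)
```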