Browsing by Author "Wang, Lu"

Now showing 1 - 9 of 9
  • Efficient Caustics Rendering via Spatial and Temporal Path Reuse
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Xu, Xiaofeng; Wang, Lu; Wang, Beibei; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    Caustics are complex optical effects caused by light being concentrated into a small area through reflection or refraction on surfaces with low roughness, typically under a sharp light source. Rendering caustic effects is challenging for Monte Carlo-based approaches due to the difficulty of sampling the specular paths. One effective solution is using the specular manifold to locate these valid specular paths. Unfortunately, it needs many iterations to find these paths, leading to long rendering times. To address this issue, our key insight is that the specular paths tend to be similar for neighboring shading points. To this end, we propose to reuse the specular paths spatially. More specifically, we generate some specular path samples at a low sample rate and then reuse these samples as the initialization for specular manifold walks among neighboring shading points. In this way, far fewer path-searching iterations are performed, since the initialization is already close to the final solution. Furthermore, this reuse strategy can be extended to dynamic scenes in a temporal manner, such as moving lights or deforming specular geometry. Our method outperforms current state-of-the-art methods and can handle multiple bounces of light and various scenes.
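    The reuse idea in this abstract can be illustrated with a toy sketch. This is not the authors' implementation: here a "manifold walk" is stood in for by a 1D Newton iteration solving a hypothetical constraint f(x) = target, and a neighboring shading point reuses the converged solution as its starting guess, needing far fewer iterations.

    ```python
    # Toy illustration of spatial path reuse (not the paper's algorithm):
    # warm-starting an iterative root find from a neighbor's solution.

    def manifold_walk(f, df, target, x0, tol=1e-10, max_iters=100):
        """Newton iteration; returns (solution, iterations used)."""
        x = x0
        for i in range(1, max_iters + 1):
            err = f(x) - target
            if abs(err) < tol:
                return x, i
            x -= err / df(x)
        return x, max_iters

    f = lambda x: x ** 3 + x        # stand-in for a specular constraint
    df = lambda x: 3 * x ** 2 + 1

    # Cold start: first shading point solved from a generic guess.
    x_a, iters_cold = manifold_walk(f, df, target=5.0, x0=0.0)

    # Warm start: the neighbor's target is nearby, so reusing x_a as the
    # initial guess converges in far fewer iterations.
    x_b, iters_warm = manifold_walk(f, df, target=5.1, x0=x_a)

    print(iters_cold, iters_warm)
    ```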
  • Fast Global Illumination with Discrete Stochastic Microfacets Using a Filterable Model
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Wang, Beibei; Wang, Lu; Holzschuch, Nicolas; Fu, Hongbo and Ghosh, Abhijeet and Kopf, Johannes
    Many real-life materials have a sparkling appearance, whether by design or by nature. Examples include metallic paints, sparkling varnish, and even snow. These sparkles correspond to small, isolated, shiny particles reflecting light in a specific direction, on the surface or embedded inside the material. The particles responsible for these sparkles are usually small and discontinuous. These characteristics make it difficult to integrate them efficiently in a standard rendering pipeline, especially for indirect illumination. Existing approaches use a 4-dimensional hierarchy, searching for light-reflecting particles simultaneously in space and direction. The approach is accurate, but still expensive. In this paper, we show that this 4-dimensional search can be approximated using separate 2-dimensional steps. This approximation allows fast integration of glint contributions for large footprints, reducing the extra cost associated with glints by an order of magnitude.
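    The separability idea can be sketched in a few lines. This toy is not the paper's filterable model: it counts particles matching a spatial footprint and a direction cone with two separate 2D queries and combines the counts assuming independence; the particle field, footprint, and cone predicates below are all hypothetical.

    ```python
    # Toy sketch: replacing a joint 4D (position, direction) count with
    # two separable 2D counts. Exact here because positions and directions
    # are independent by construction.
    import itertools

    positions = [(i, j) for i in range(4) for j in range(4)]
    directions = [(u, v) for u in range(4) for v in range(4)]
    particles = list(itertools.product(positions, directions))

    in_footprint = lambda p: p[0] < 2 and p[1] < 2   # pixel-footprint query
    in_cone = lambda d: d[0] < 2 and d[1] < 2        # direction-cone query

    # Exact 4D count: filter jointly on position and direction.
    exact = sum(1 for p, d in particles if in_footprint(p) and in_cone(d))

    # Separable estimate: two 2D fractions combined.
    n = len(particles)
    frac_pos = sum(1 for p, _ in particles if in_footprint(p)) / n
    frac_dir = sum(1 for _, d in particles if in_cone(d)) / n
    approx = frac_pos * frac_dir * n

    print(exact, approx)
    ```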
  • Joint SVBRDF Recovery and Synthesis From a Single Image using an Unsupervised Generative Adversarial Network
    (The Eurographics Association, 2020) Zhao, Yezi; Wang, Beibei; Xu, Yanning; Zeng, Zheng; Wang, Lu; Holzschuch, Nicolas; Dachsbacher, Carsten and Pharr, Matt
    We want to recreate spatially-varying bi-directional reflectance distribution functions (SVBRDFs) from a single image. Producing these SVBRDFs from single images will allow designers to incorporate many new materials in their virtual scenes, increasing their realism. A single image contains incomplete information about the SVBRDF, making reconstruction difficult. Existing algorithms can produce high-quality SVBRDFs from one or a few input photographs using supervised deep learning. The learning step relies on a huge dataset with both input photographs and the ground-truth SVBRDF maps. This is a weakness, as ground-truth maps are not easy to acquire. For practical use, it is also important to produce large SVBRDF maps. Existing algorithms rely on a separate texture-synthesis step to generate these large maps, which leads to a loss of consistency between the generated SVBRDF maps. In this paper, we address both issues simultaneously. We present an unsupervised generative adversarial neural network that handles both SVBRDF capture from a single image and synthesis at the same time. From a low-resolution input image, we generate an SVBRDF at a resolution much larger than the input. We train a generative adversarial network (GAN) to produce SVBRDF maps that have both a large spatial extent and detailed texels. We employ a two-stream generator that divides the training of the maps into two groups (normal and roughness as one, diffuse and specular as the other) to better optimize those four maps. In the end, our method is able to generate high-quality, large-scale SVBRDF maps from a single input photograph with repetitive structures, and provides higher-quality rendering results with more details than previous works. Each input for our method requires individual training, which takes about 3 hours.
  • Path‐based Monte Carlo Denoising Using a Three‐Scale Neural Network
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Lin, Weiheng; Wang, Beibei; Yang, Jian; Wang, Lu; Yan, Ling‐Qi; Benes, Bedrich and Hauser, Helwig
    Monte Carlo rendering is widely used in the movie industry. Since it is costly to produce noise-free results directly, Monte Carlo denoising is often applied as a post-process. Recently, deep learning methods have been successfully leveraged for Monte Carlo denoising. They are able to produce high-quality denoised results, even at very low sample rates, e.g. 4 spp (samples per pixel). However, for difficult scene configurations, some details can be blurred in the denoised results. In this paper, we aim at preserving more details from inputs rendered with low spp. We propose a novel denoising pipeline that handles features at three scales (pixel, sample, and path) to preserve sharp details, uses an improved Res2Net feature extractor to reduce the network parameters, and applies a smooth feature attention mechanism to remove low-frequency splotches. As a result, our method achieves higher denoising quality and preserves details better than previous methods.
  • Ray-aligned Occupancy Map Array for Fast Approximate Ray Tracing
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Zeng, Zheng; Xu, Zilin; Wang, Lu; Wu, Lifan; Yan, Ling-Qi; Ritschel, Tobias; Weidlich, Andrea
    We present a new software ray tracing solution that efficiently computes visibilities in dynamic scenes. We first introduce a novel scene representation: ray-aligned occupancy map array (ROMA) that is generated by rasterizing the dynamic scene once per frame. Our key contribution is a fast and low-divergence tracing method computing visibilities in constant time, without constructing and traversing the traditional intersection acceleration data structures such as BVH. To further improve accuracy and alleviate aliasing, we use a spatiotemporal scheme to stochastically distribute the candidate ray samples. We demonstrate the practicality of our method by integrating it into a modern real-time renderer and showing better performance compared to existing techniques based on distance fields (DFs). Our method is free of the typical artifacts caused by incomplete scene information, and is about 2.5×-10× faster than generating and tracing DFs at the same resolution and equal storage.
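    The constant-time visibility query described above can be illustrated with a toy bitmask model. This sketch is loosely inspired by the occupancy-map idea and is not the paper's ROMA construction: each of 32 hypothetical depth bins along a ray stores one occupancy bit, so a segment-visibility test reduces to a single mask-and-compare, independent of scene complexity.

    ```python
    # Toy sketch: visibility along a ray from a per-ray occupancy bitmask
    # (not the paper's data structure). A segment query is O(1) bit math.

    BINS = 32  # depth bins along the ray

    def make_occupancy(occupied_bins):
        """Pack occupied depth-bin indices into one integer bitmask."""
        mask = 0
        for b in occupied_bins:
            assert 0 <= b < BINS
            mask |= 1 << b
        return mask

    def visible(occupancy, lo, hi):
        """True if no occupied bin lies in depth bins [lo, hi)."""
        segment = ((1 << (hi - lo)) - 1) << lo  # bits lo..hi-1 set
        return (occupancy & segment) == 0

    ray = make_occupancy({10, 20})   # blockers at depth bins 10 and 20
    print(visible(ray, 0, 10))       # segment in front of the first blocker
    print(visible(ray, 5, 15))       # segment crossing the blocker at bin 10
    ```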
  • Real‐Time Microstructure Rendering with MIP‐Mapped Normal Map Samples
    (© 2022 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2022) Tan, Haowen; Zhu, Junqiu; Xu, Yanning; Meng, Xiangxu; Wang, Lu; Yan, Ling‐Qi; Hauser, Helwig and Alliez, Pierre
    Normal map-based microstructure rendering can generate both glint and scratch appearances accurately. However, the extra high-resolution normal map that defines every microfacet normal may incur high storage and computation costs. We present an example-based real-time rendering method for arbitrary microstructure materials that significantly reduces the required storage space. Our method takes a small normal map sample as input. We implicitly synthesize a high-resolution normal map from the sample and construct MIP-mapped 4D position-normal Gaussian lobes. Based on these MIP-mapped 4D lobes and a LUT (lookup table) data structure for the synthesized high-resolution normal map, an efficient Gaussian query method is presented to evaluate P-NDFs (position-normal distribution functions) for shading. We can render complex scenes with glint and scratch surfaces in real time (30 fps) at full high-definition resolution, and the space required for each microstructure material is decreased to 30 MB.
  • A Stationary SVBRDF Material Modeling Method Based on Discrete Microsurface
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Zhu, Junqiu; Xu, Yanning; Wang, Lu; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Microfacet theory is commonly used to build reflectance models for surfaces. While traditional microfacet-based models assume that the distribution of a surface's microstructure is continuous, recent studies indicate that some surfaces with tiny, discrete and stochastic facets exhibit glittering visual effects, while some surfaces with structured features exhibit anisotropic specular reflection. Accordingly, this paper proposes an efficient and stationary method of surface material modeling that processes both glittery and non-glittery surfaces in a consistent way. Our method comprises two steps: in the preprocessing step, we take a fixed-size sample normal map as input, organize 4D microfacet trees in position and normal space for arbitrary-sized surfaces, and cluster microfacets into 4D K-lobes via an adaptive k-means method; in the rendering step, surface normals can then be efficiently evaluated using the pre-clustered microfacets. Our method is able to efficiently render any structured, discrete, or continuous micro-surface using a precisely reconstructed surface NDF. It is faster and uses less memory than state-of-the-art glittery surface modeling works.
  • Temporally Reliable Motion Vectors for Real-time Ray Tracing
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Zeng, Zheng; Liu, Shiqiu; Yang, Jinglei; Wang, Lu; Yan, Ling-Qi; Mitra, Niloy and Viola, Ivan
    Real-time ray tracing (RTRT) is being applied pervasively. The key to RTRT is a reliable denoising scheme that reconstructs clean images from significantly undersampled noisy inputs, usually at 1 sample per pixel as limited by current hardware's computing power. State-of-the-art reconstruction methods all rely on temporal filtering to find correspondences of current pixels in the previous frame, described using per-pixel screen-space motion vectors. While these approaches have proven powerful, they suffer from a common issue: the temporal information cannot be used when the motion vectors are not valid, i.e. when temporal correspondences are not obviously available or do not exist in theory. We introduce temporally reliable motion vectors that aim at a deeper exploration of temporal coherence, especially for the generally believed difficult applications of shadows, glossy reflections and occlusions, with the key idea of detecting and tracking the cause of each effect. We show that our temporally reliable motion vectors produce significantly better temporal results on a variety of dynamic scenes when compared to state-of-the-art methods, with negligible performance overhead.
  • Unsupervised Image Reconstruction for Gradient-Domain Volumetric Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Xu, Zilin; Sun, Qiang; Wang, Lu; Xu, Yanning; Wang, Beibei; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    Gradient-domain rendering can greatly improve the convergence of light transport simulation by exploiting smoothness in image space. These methods generate image gradients and solve an image reconstruction problem from the rendered image and the gradient images. Recently, gradient-domain volumetric photon density estimation was proposed for homogeneous participating media. However, its image reconstruction relies on traditional L1 reconstruction, which leads to obvious artifacts when only a few rendering passes are performed. Deep-learning-based reconstruction methods have been exploited for surface rendering, but they are not suitable for volume density estimation. In this paper, we propose an unsupervised neural network for image reconstruction of gradient-domain volumetric photon density estimation, more specifically for volumetric photon mapping, using a variant of GradNet with an encoded shift connection and a separate auxiliary-feature branch that includes volume-based auxiliary features such as transmittance and photon density. Our network smooths the images on a global scale and preserves high-frequency details on a small scale. We demonstrate that our network produces higher-quality results than previous work. Although we only considered volumetric photon mapping, it is straightforward to extend our method to other forms, such as beam radiance estimation.

Eurographics Association © 2013-2025  |  System hosted at Graz University of Technology      
DSpace software copyright © 2002-2025 LYRASIS
