Browsing by Author "Xu, Yanning"
Now showing 1 - 4 of 4
Item: Joint SVBRDF Recovery and Synthesis From a Single Image using an Unsupervised Generative Adversarial Network (The Eurographics Association, 2020)
Authors: Zhao, Yezi; Wang, Beibei; Xu, Yanning; Zeng, Zheng; Wang, Lu; Holzschuch, Nicolas
Editors: Dachsbacher, Carsten; Pharr, Matt

We want to recreate spatially-varying bi-directional reflectance distribution functions (SVBRDFs) from a single image. Producing SVBRDFs from single images would allow designers to incorporate many new materials into their virtual scenes, increasing realism. A single image contains incomplete information about the SVBRDF, making reconstruction difficult. Existing algorithms can produce high-quality SVBRDFs from one or a few input photographs using supervised deep learning, but the learning step relies on a huge dataset pairing input photographs with ground-truth SVBRDF maps, and such ground-truth maps are not easy to acquire. For practical use, it is also important to produce large SVBRDF maps; existing algorithms rely on a separate texture synthesis step to generate them, which breaks consistency between the generated SVBRDF maps. In this paper, we address both issues simultaneously. We present an unsupervised generative adversarial network that performs both SVBRDF capture from a single image and synthesis at the same time. From a low-resolution input image, we generate an SVBRDF at a resolution much larger than the input. We train a generative adversarial network (GAN) to produce SVBRDF maps that have both a large spatial extent and detailed texels. We employ a two-stream generator that splits the maps into two groups (normal and roughness in one, diffuse and specular in the other) to better optimize all four maps. Our method generates high-quality, large-scale SVBRDF maps from a single input photograph with repetitive structures, and it yields renderings with more detail than previous work. Each input requires individual training, which takes about 3 hours.

Item: Real-Time Microstructure Rendering with MIP-Mapped Normal Map Samples (© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd, 2022)
Authors: Tan, Haowen; Zhu, Junqiu; Xu, Yanning; Meng, Xiangxu; Wang, Lu; Yan, Ling-Qi
Editors: Hauser, Helwig; Alliez, Pierre

Normal map-based microstructure rendering can reproduce both glint and scratch appearance accurately. However, the extra high-resolution normal map that defines every microfacet normal incurs high storage and computation costs. We present an example-based real-time rendering method for arbitrary microstructure materials that significantly reduces the required storage. Our method takes a small normal map sample as input, implicitly synthesizes a high-resolution normal map from it, and constructs MIP-mapped 4D position-normal Gaussian lobes. Based on these MIP-mapped 4D lobes and a LUT (lookup table) data structure for the synthesized high-resolution normal map, we present an efficient Gaussian query method to evaluate P-NDFs (position-normal distribution functions) for shading. We can render complex scenes with glint and scratch surfaces in real time (30 fps) at full high-definition resolution, and the space required for each microstructure material drops to 30 MB.
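
As a rough illustration of the two-stream generator described in the first item above, here is a minimal PyTorch sketch: one branch predicts the normal and roughness maps, the other the diffuse and specular maps. The layer sizes, activations, and names are assumptions for illustration, not the authors' architecture.

    # Minimal sketch of a two-stream SVBRDF generator (illustrative, not the paper's network).
    import torch
    import torch.nn as nn

    def conv_block(cin, cout):
        # Simple conv -> normalization -> activation unit shared by both streams.
        return nn.Sequential(
            nn.Conv2d(cin, cout, kernel_size=3, padding=1),
            nn.InstanceNorm2d(cout),
            nn.LeakyReLU(0.2, inplace=True),
        )

    class TwoStreamGenerator(nn.Module):
        def __init__(self, features=64):
            super().__init__()
            # Shared encoder extracts features from the single input photograph.
            self.encoder = nn.Sequential(conv_block(3, features), conv_block(features, features))
            # Stream A: normal (3 channels) + roughness (1 channel).
            self.stream_nr = nn.Sequential(conv_block(features, features), nn.Conv2d(features, 4, 3, padding=1))
            # Stream B: diffuse (3 channels) + specular (3 channels).
            self.stream_ds = nn.Sequential(conv_block(features, features), nn.Conv2d(features, 6, 3, padding=1))

        def forward(self, image):
            feats = self.encoder(image)
            nr = self.stream_nr(feats)
            ds = self.stream_ds(feats)
            normal = torch.tanh(nr[:, :3])         # normals in [-1, 1]
            roughness = torch.sigmoid(nr[:, 3:4])  # roughness in [0, 1]
            diffuse = torch.sigmoid(ds[:, :3])
            specular = torch.sigmoid(ds[:, 3:])
            return normal, roughness, diffuse, specular

    # Usage: maps = TwoStreamGenerator()(torch.rand(1, 3, 256, 256))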
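
For the microstructure rendering item above, the following toy NumPy sketch conveys the flavor of a P-NDF query over position-normal Gaussian lobes: shading accumulates the lobes that overlap a Gaussian pixel footprint in position space, weighted by each lobe's Gaussian response at the queried normal. The MIP hierarchy and LUT are omitted, and all names are simplifying assumptions rather than the paper's implementation.

    # Toy P-NDF evaluation from 4D position-normal Gaussian lobes (illustrative only).
    import numpy as np

    class Lobe:
        def __init__(self, pos_mean, pos_sigma, nrm_mean, nrm_sigma, weight):
            self.pos_mean, self.pos_sigma = pos_mean, pos_sigma  # 2D texel position (u, v)
            self.nrm_mean, self.nrm_sigma = nrm_mean, nrm_sigma  # 2D projected normal (s, t)
            self.weight = weight

    def gaussian2d(x, mean, sigma):
        # Isotropic 2D Gaussian density.
        d = (np.asarray(x) - np.asarray(mean)) / sigma
        return np.exp(-0.5 * np.dot(d, d)) / (2.0 * np.pi * sigma * sigma)

    def eval_pndf(lobes, footprint_center, footprint_sigma, query_normal):
        """Density of `query_normal` over a Gaussian pixel footprint."""
        density = 0.0
        for lobe in lobes:
            # Positional overlap of footprint and lobe: a convolution of two
            # isotropic Gaussians has combined sigma sqrt(s1^2 + s2^2).
            w = gaussian2d(footprint_center, lobe.pos_mean,
                           np.hypot(footprint_sigma, lobe.pos_sigma))
            # Weight by the lobe's Gaussian response at the queried normal.
            density += lobe.weight * w * gaussian2d(query_normal, lobe.nrm_mean, lobe.nrm_sigma)
        return density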
Item: A Stationary SVBRDF Material Modeling Method Based on Discrete Microsurface (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Zhu, Junqiu; Xu, Yanning; Wang, Lu
Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon

Microfacet theory is commonly used to build reflectance models for surfaces. While traditional microfacet-based models assume that the distribution of a surface's microstructure is continuous, recent studies indicate that some surfaces with tiny, discrete and stochastic facets exhibit glittering visual effects, while surfaces with structured features exhibit anisotropic specular reflection. Accordingly, this paper proposes an efficient and stationary surface material modeling method that handles both glittery and non-glittery surfaces in a consistent way. Our method comprises two steps. In the preprocessing step, we take a fixed-size sample normal map as input, organize 4D microfacet trees in position and normal space for arbitrarily sized surfaces, and cluster microfacets into 4D K-lobes via an adaptive k-means method. In the rendering step, surface normals can then be evaluated efficiently from the pre-clustered microfacets. Our method efficiently renders structured, discrete and continuous micro-surfaces using a precisely reconstructed surface NDF, and it is both faster and more memory-efficient than state-of-the-art glittery surface modeling methods.

Item: Unsupervised Image Reconstruction for Gradient-Domain Volumetric Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Xu, Zilin; Sun, Qiang; Wang, Lu; Xu, Yanning; Wang, Beibei
Editors: Eisemann, Elmar; Jacobson, Alec; Zhang, Fang-Lue

Gradient-domain rendering can greatly improve the convergence of light transport simulation by exploiting smoothness in image space. These methods render image gradients alongside the primal image and then solve an image reconstruction problem over both. Recently, a gradient-domain volumetric photon density estimation method was proposed for homogeneous participating media, but its image reconstruction relies on traditional L1 reconstruction, which leads to obvious artifacts when only a few rendering passes are performed. Deep-learning-based reconstruction methods have been explored for surface rendering, but they are not suitable for volume density estimation. In this paper, we propose an unsupervised neural network for image reconstruction in gradient-domain volumetric photon density estimation, specifically volumetric photon mapping. The network is a variant of GradNet with an encoded shift connection and a separate auxiliary-feature branch that takes volume-based auxiliary features such as transmittance and photon density. Our network smooths the image at a global scale while preserving high-frequency details at a small scale. We demonstrate that it produces higher-quality results than previous work. Although we only consider volumetric photon mapping, it is straightforward to extend our method to other estimators, such as beam radiance estimation.
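
The preprocessing step of the discrete-microsurface item above can be pictured as clustering 4D position-normal samples; below is a small NumPy reduction using plain k-means. The adaptive choice of K and the 4D microfacet tree are omitted, and all names are illustrative assumptions rather than the paper's code.

    # Cluster normal-map texels into 4D Gaussian lobes (illustrative reduction).
    import numpy as np

    def kmeans(points, k, iters=20, seed=0):
        rng = np.random.default_rng(seed)
        centers = points[rng.choice(len(points), k, replace=False)]
        for _ in range(iters):
            # Assign each 4D microfacet sample to its nearest lobe center.
            labels = np.argmin(((points[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = points[labels == j].mean(axis=0)
        return centers, labels

    def normal_map_to_lobes(normal_map, k=64):
        """normal_map: (H, W, 3) unit normals -> list of (mean, covariance) lobes."""
        h, w, _ = normal_map.shape
        u, v = np.meshgrid(np.arange(w) / w, np.arange(h) / h)
        # 4D samples: texel position (u, v) plus projected normal (nx, ny).
        pts = np.stack([u.ravel(), v.ravel(),
                        normal_map[..., 0].ravel(), normal_map[..., 1].ravel()], axis=1)
        centers, labels = kmeans(pts, k)
        # Fit a 4D Gaussian to each sufficiently populated cluster.
        return [(centers[j], np.cov(pts[labels == j].T))
                for j in range(k) if np.sum(labels == j) > 4]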
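
For the gradient-domain reconstruction item above, the key to unsupervised training is that no ground-truth image is needed: the network output is penalized against the noisy rendered primal image and the rendered gradient images. Here is a minimal PyTorch sketch of such a loss, assuming L1 terms and finite-difference image gradients; the GradNet variant's shift connection and auxiliary-feature branch (transmittance, photon density) are elided, and names are illustrative.

    # Unsupervised GradNet-style reconstruction loss (a sketch, not the paper's code).
    import torch
    import torch.nn.functional as F

    def finite_diff(img):
        # Forward differences along x and y, matching image-space gradients
        # in gradient-domain rendering.
        dx = img[..., :, 1:] - img[..., :, :-1]
        dy = img[..., 1:, :] - img[..., :-1, :]
        return dx, dy

    def unsupervised_loss(pred, noisy_primal, grad_x, grad_y, alpha=0.2):
        """pred, noisy_primal: (B, 3, H, W); grad_x: (B, 3, H, W-1); grad_y: (B, 3, H-1, W)."""
        dx, dy = finite_diff(pred)
        # Gradient term: the reconstruction's gradients should match the
        # (typically less noisy) rendered gradient images.
        grad_term = F.l1_loss(dx, grad_x) + F.l1_loss(dy, grad_y)
        # Data term: stay close to the rendered primal image to pin down the
        # overall brightness that gradients alone cannot fix.
        data_term = F.l1_loss(pred, noisy_primal)
        return grad_term + alpha * data_term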