Browsing by Author "Guo, Jie"
Item DeepBRDF: A Deep Representation for Manipulating Measured BRDF (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Hu, Bingyang; Guo, Jie; Chen, Yanjun; Li, Mengtian; Guo, Yanwen; Panozzo, Daniele and Assarsson, Ulf
Effective compression of densely sampled BRDF measurements is critical for many graphics and vision applications. In this paper, we present DeepBRDF, a deep-learning-based representation that can significantly reduce the dimensionality of measured BRDFs while retaining high recovery quality. We consider each measured BRDF as a sequence of image slices and design a deep autoencoder with a masked L2 loss to discover a nonlinear low-dimensional latent space of the high-dimensional input data. Thorough experiments verify that the proposed method clearly outperforms PCA-based strategies in BRDF data compression and is more robust. We demonstrate the effectiveness of DeepBRDF with two applications. For BRDF editing, we can easily create a new BRDF by navigating on the low-dimensional manifold of DeepBRDF, guaranteeing smooth transitions and high physical plausibility. For BRDF recovery, we design another deep neural network to automatically generate the full BRDF data from a single input image. Aided by our DeepBRDF learned from real-world materials, a wide range of reflectance behaviors can be recovered with high accuracy.

Item Real-Time Antialiased Area Lighting Using Multi-Scale Linearly Transformed Cosines (The Eurographics Association, 2021)
Tao, Chengzhi; Guo, Jie; Gong, Chen; Wang, Beibei; Guo, Yanwen; Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard
We present an anti-aliased real-time rendering method for local area lights based on Linearly Transformed Cosines (LTCs). It significantly reduces the aliasing artifacts in highlights reflected from area lights that arise when the meso-scale roughness induced by normal maps is ignored. The proposed method separates the surface roughness into different scales and represents each scale by an LTC. A spherical convolution between them then yields the overall normal distribution and the final Bidirectional Reflectance Distribution Function (BRDF). The overall surface roughness is further approximated by a polynomial function to guarantee high efficiency and avoid additional storage consumption. Experimental results show that our approach produces convincing multi-scale roughness results across a range of viewing distances for local area lighting.
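The spherical convolution and polynomial roughness approximation described in the entry above can be illustrated with a minimal sketch. The snippet below is only a simplified stand-in, not the authors' fit: it uses the common rule of thumb that convolving two microfacet lobes roughly adds their roughness variances, and the function and parameter names are assumptions made for the example.

    import math

    def combined_roughness(alpha_macro, alpha_meso):
        """Illustrative stand-in for a multi-scale roughness combination:
        approximate the spherical convolution of two lobes by summing
        their variances (alpha^2)."""
        # alpha_macro: base material roughness from the BRDF
        # alpha_meso:  apparent roughness induced by filtered normal-map detail
        return math.sqrt(alpha_macro ** 2 + alpha_meso ** 2)

    # Example: a smooth base material whose filtered normal map adds
    # noticeable roughness when viewed from a distance.
    print(combined_roughness(0.1, 0.3))  # ~0.316, clearly rougher than 0.1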
Item Real-time Deep Radiance Reconstruction from Imperfect Caches (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Huang, Tao; Song, Yadong; Guo, Jie; Tao, Chengzhi; Zong, Zijing; Fu, Xihao; Li, Hongshan; Guo, Yanwen; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
Real-time global illumination is a highly desirable yet challenging task in computer graphics. Existing methods that solve this problem well are mostly based on some form of precomputed data (caches), and the final results depend significantly on the quality of those caches. In this paper, we propose a learning-based pipeline that can reproduce a wide range of complex light transport phenomena, including high-frequency glossy interreflection, at any viewpoint in real time (over 90 frames per second), using information from imperfect caches stored at the barycentre of every triangle in a 3D scene. These caches are generated in a precomputation stage by a physically based offline renderer at a low sampling rate (e.g., 32 samples per pixel) and a low image resolution (e.g., 64×16). At runtime, a deep radiance reconstruction method based on a dedicated neural network is invoked to reconstruct a high-quality radiance map with full global illumination at any viewpoint from these imperfect caches, without introducing noise or aliasing artifacts. To further improve the reconstruction accuracy, a new feature fusion strategy is designed in the network to better exploit useful content from cheap G-buffers generated at runtime. The proposed framework ensures high-quality rendering of moderate-sized scenes with full global illumination effects, at the cost of reasonable precomputation time. We demonstrate the effectiveness and efficiency of the proposed pipeline by comparing it with alternative strategies, including real-time path tracing and precomputed radiance transfer.

Item SVBRDF Reconstruction by Transferring Lighting Knowledge (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Zhu, Pengfei; Lai, Shuichang; Chen, Mufan; Guo, Jie; Liu, Yifan; Guo, Yanwen; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
The problem of reconstructing spatially-varying BRDFs from RGB images has been studied for decades. Researchers have faced a dilemma: opt for higher quality at the inconvenience of camera and light calibration, or for greater convenience without complex setups at the expense of quality. We address this challenge by introducing a two-branch network that learns the lighting effects in images. The two branches, referred to as Light-known and Light-aware, differ in their need for light information. The Light-aware branch is guided by the Light-known branch to acquire the knowledge needed to discern light effects and surface reflectance properties, but without relying on light positions. Both branches are trained on a synthetic dataset; during testing on real-world cases without calibration, only the Light-aware branch is activated. To make more effective use of varied lighting conditions, we employ gated recurrent units (GRUs) to fuse the features extracted from different images. The two modules mutually benefit when multiple inputs are provided. We present reconstruction results on both synthetic and real-world examples, demonstrating high quality while remaining lightweight in comparison to previous methods.
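As a rough illustration of the GRU-based feature fusion mentioned in the entry above, the PyTorch sketch below runs per-image feature vectors through an nn.GRU and keeps the final hidden state as the fused descriptor. The module name, feature dimensions, and batch layout are assumptions made for the example; the actual two-branch architecture is considerably more involved.

    import torch
    import torch.nn as nn

    class FeatureFuser(nn.Module):
        """Fuse per-image feature vectors from a variable number of input
        photos into a single descriptor using a GRU (illustrative sketch)."""
        def __init__(self, feat_dim=256, hidden_dim=256):
            super().__init__()
            self.gru = nn.GRU(input_size=feat_dim, hidden_size=hidden_dim,
                              batch_first=True)

        def forward(self, per_image_feats):
            # per_image_feats: (batch, num_images, feat_dim)
            _, h_n = self.gru(per_image_feats)  # h_n: (1, batch, hidden_dim)
            return h_n.squeeze(0)               # fused feature per sample

    # Example: fuse features extracted from 3 photos of the same surface patch.
    fuser = FeatureFuser()
    feats = torch.randn(2, 3, 256)  # 2 samples, 3 input images each
    print(fuser(feats).shape)       # torch.Size([2, 256])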
Item SVBRDF Recovery from a Single Image with Highlights Using a Pre-trained Generative Adversarial Network (© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022)
Wen, Tao; Wang, Beibei; Zhang, Lei; Guo, Jie; Holzschuch, Nicolas; Hauser, Helwig and Alliez, Pierre
Spatially varying bi-directional reflectance distribution functions (SVBRDFs) are crucial for designers to incorporate new materials into virtual scenes, making them look more realistic. Reconstruction of SVBRDFs is a long-standing problem. Existing methods either rely on an extensive acquisition system or require huge datasets, which are non-trivial to acquire. We aim to recover SVBRDFs from a single image, without any dataset. A single image contains incomplete information about the SVBRDF, making the reconstruction task highly ill-posed. It is also difficult to separate the changes in colour caused by the material from those caused by the illumination without prior knowledge learned from a dataset. In this paper, we use an unsupervised generative adversarial network (GAN) to recover SVBRDF maps with a single image as input. To better separate the effects due to illumination from the effects due to the material, we add the hypothesis that the material is stationary and introduce a new loss function based on Fourier coefficients to enforce this stationarity. For efficiency, we train the network in two stages: we reuse a trained model to initialize the SVBRDFs, then fine-tune it based on the input image. Our method generates high-quality SVBRDF maps from a single input photograph and provides more vivid rendering results than previous work. The two-stage training boosts runtime performance, making it eight times faster than previous work.

Item Text2Mat: Generating Materials from Text (The Eurographics Association, 2023)
He, Zhen; Guo, Jie; Zhang, Yan; Tu, Qinghao; Chen, Mufan; Guo, Yanwen; Wang, Pengyu; Dai, Wei; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
Specific materials are often associated with a certain type of object in the real world. They model how the surface of such an object interacts with light and are named after that type of object. We observe that the text labels of materials carry high-level semantic information, which can be used as guidance to assist the generation of specific materials. Based on this observation, we propose Text2Mat, a text-guided material generation framework. To meet the demands of material generation from text descriptions, we construct a large set of PBR materials with specific text labels, where each material has detailed text descriptions that match its visual appearance. Furthermore, to control the texture and spatial layout of generated materials through text, we introduce texture attribute labels and extra attributes describing regular materials. Using this dataset, we train a neural network adapted from Stable Diffusion to achieve text-based material generation. Extensive experiments and renderings demonstrate that Text2Mat can generate materials whose spatial layout and texture styles correspond closely to the text descriptions.
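Text2Mat adapts Stable Diffusion to text-conditioned material generation. Purely to illustrate the kind of text-to-image interface such a model builds on, the sketch below drives a stock Stable Diffusion checkpoint through the Hugging Face diffusers library with a material-style prompt. This is not the authors' fine-tuned network, and the checkpoint id is a placeholder; a Text2Mat-style model would produce PBR maps (albedo, normal, roughness, ...) rather than a single preview image.

    import torch
    from diffusers import StableDiffusionPipeline

    # Placeholder checkpoint id; swap in whichever Stable Diffusion weights
    # (or a material-specific fine-tune) are available locally.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Prompt assembled from material text labels and texture attributes.
    prompt = "weathered red brick wall, regular layout, rough surface, seamless material"
    image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
    image.save("material_preview.png")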