Browsing by Author "Guo, Yanwen"
Now showing 1 - 7 of 7
Item: DeepBRDF: A Deep Representation for Manipulating Measured BRDF (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Hu, Bingyang; Guo, Jie; Chen, Yanjun; Li, Mengtian; Guo, Yanwen; Panozzo, Daniele and Assarsson, Ulf
Effective compression of densely sampled BRDF measurements is critical for many graphics and vision applications. In this paper, we present DeepBRDF, a deep-learning-based representation that can significantly reduce the dimensionality of measured BRDFs while retaining high-quality recovery. We treat each measured BRDF as a sequence of image slices and design a deep autoencoder with a masked L2 loss to discover a nonlinear low-dimensional latent space of the high-dimensional input data. Thorough experiments verify that the proposed method clearly outperforms PCA-based strategies in BRDF data compression and is more robust. We demonstrate the effectiveness of DeepBRDF with two applications. For BRDF editing, we can easily create a new BRDF by navigating on the low-dimensional manifold of DeepBRDF, guaranteeing smooth transitions and high physical plausibility. For BRDF recovery, we design another deep neural network to automatically generate the full BRDF data from a single input image. Aided by our DeepBRDF learned from real-world materials, a wide range of reflectance behaviors can be recovered with high accuracy.

Item: GlassNet: Label Decoupling-based Three-stream Neural Network for Robust Image Glass Detection (© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd, 2022)
Zheng, Chengyu; Shi, Ding; Yan, Xuefeng; Liang, Dong; Wei, Mingqiang; Yang, Xin; Guo, Yanwen; Xie, Haoran; Hauser, Helwig and Alliez, Pierre
Most existing object detection methods produce poor glass detection results because transparent glass shares the appearance of whatever objects lie behind it in an image. Unlike traditional deep-learning approaches that simply use the object boundary as auxiliary supervision, we exploit label decoupling to decompose the original labelled ground-truth (GT) map into an interior-diffusion map and a boundary-diffusion map. The GT map, in collaboration with the two newly generated maps, breaks the imbalanced distribution of the object boundary, leading to improved glass detection quality. We make three key contributions to solving the transparent glass detection problem: (1) we propose a three-stream neural network (GlassNet for short) to fully absorb beneficial features from the three maps; (2) we design a multi-scale interactive dilation module to explore a wider range of contextual information; and (3) we develop an attention-based boundary-aware feature Mosaic module to integrate multi-modal information. Extensive experiments on the benchmark dataset exhibit clear improvements of our method over state-of-the-art methods, in terms of both overall glass detection accuracy and boundary clearness.
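The label-decoupling step described in the GlassNet entry above can be pictured with a short sketch. The function below is a minimal, hypothetical interpretation (the function name, the use of Euclidean distance transforms, and the normalisation are assumptions rather than the paper's exact formulation): it splits a binary glass mask into an interior-diffusion map emphasising pixels far from the object boundary and a boundary-diffusion map emphasising pixels near it.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def decouple_label(gt_mask):
    """Sketch of label decoupling: split a binary glass mask into
    interior- and boundary-diffusion maps (assumed weighting scheme)."""
    gt = gt_mask.astype(np.float32)
    # Distance of foreground pixels to the nearest background pixel, and vice
    # versa, so that the object boundary is where both distances are small.
    d_in = distance_transform_edt(gt)          # inside the glass region
    d_out = distance_transform_edt(1.0 - gt)   # outside the glass region
    dist = (d_in + d_out)
    dist = dist / (dist.max() + 1e-8)          # normalise to [0, 1]

    interior_diffusion = gt * dist             # emphasises the region interior
    boundary_diffusion = gt * (1.0 - dist)     # emphasises pixels near the boundary
    return interior_diffusion, boundary_diffusion
```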
Item: Real-Time Antialiased Area Lighting Using Multi-Scale Linearly Transformed Cosines (The Eurographics Association, 2021)
Tao, Chengzhi; Guo, Jie; Gong, Chen; Wang, Beibei; Guo, Yanwen; Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard
We present an anti-aliased real-time rendering method for local area lights based on Linearly Transformed Cosines (LTCs). It significantly reduces the aliasing artifacts in highlights reflected from area lights that arise when meso-scale roughness (induced by normal maps) is ignored. The proposed method separates the surface roughness into different scales and represents them all by LTCs. Spherical convolution is then conducted between them to derive the overall normal distribution and the final Bidirectional Reflectance Distribution Function (BRDF). The overall surface roughness is further approximated by a polynomial function to guarantee high efficiency and avoid additional storage consumption. Experimental results show that our approach produces convincing multi-scale roughness results across a range of viewing distances for local area lighting.

Item: Real-time Deep Radiance Reconstruction from Imperfect Caches (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Huang, Tao; Song, Yadong; Guo, Jie; Tao, Chengzhi; Zong, Zijing; Fu, Xihao; Li, Hongshan; Guo, Yanwen; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
Real-time global illumination is a highly desirable yet challenging task in computer graphics. Existing methods that solve this problem well are mostly based on some form of precomputed data (caches), and the final results depend significantly on the quality of those caches. In this paper, we propose a learning-based pipeline that can reproduce a wide range of complex light transport phenomena, including high-frequency glossy interreflection, at any viewpoint in real time (> 90 frames per second), using information from imperfect caches stored at the barycentre of every triangle in a 3D scene. These caches are generated at a precomputation stage by a physically based offline renderer at a low sampling rate (e.g., 32 samples per pixel) and a low image resolution (e.g., 64×16). At runtime, a deep radiance reconstruction method based on a dedicated neural network reconstructs a high-quality radiance map of full global illumination at any viewpoint from these imperfect caches, without introducing noise or aliasing artifacts. To further improve reconstruction accuracy, a new feature fusion strategy is designed in the network to better exploit useful contents from cheap G-buffers generated at runtime. The proposed framework ensures high-quality rendering of images for moderate-sized scenes with full global illumination effects, at the cost of reasonable precomputation time. We demonstrate the effectiveness and efficiency of the proposed pipeline by comparing it with alternative strategies, including real-time path tracing and precomputed radiance transfer.
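As a rough illustration of how the radiance-reconstruction entry above might combine per-triangle caches with runtime G-buffers, the PyTorch sketch below encodes the two modalities separately and fuses them by channel concatenation. The class name, channel counts, and layer widths are placeholders chosen for illustration; the paper's actual network and feature fusion strategy are more elaborate.

```python
import torch
import torch.nn as nn

class RadianceFusionNet(nn.Module):
    """Minimal sketch of cache / G-buffer feature fusion (illustrative only).

    Inputs: a radiance map gathered from the per-triangle caches at the current
    viewpoint, plus cheap runtime G-buffer channels (e.g. normal, albedo, depth).
    Channel counts and widths are assumptions, not the paper's values.
    """
    def __init__(self, cache_ch: int = 3, gbuffer_ch: int = 7, width: int = 32):
        super().__init__()
        self.cache_enc = nn.Sequential(nn.Conv2d(cache_ch, width, 3, padding=1), nn.ReLU())
        self.gbuf_enc = nn.Sequential(nn.Conv2d(gbuffer_ch, width, 3, padding=1), nn.ReLU())
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 3, 3, padding=1),  # reconstructed RGB radiance
        )

    def forward(self, cache_radiance: torch.Tensor, gbuffer: torch.Tensor) -> torch.Tensor:
        # Encode each modality, concatenate along channels, decode radiance.
        fused = torch.cat([self.cache_enc(cache_radiance), self.gbuf_enc(gbuffer)], dim=1)
        return self.fuse(fused)
```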
Item: SVBRDF Reconstruction by Transferring Lighting Knowledge (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Zhu, Pengfei; Lai, Shuichang; Chen, Mufan; Guo, Jie; Liu, Yifan; Guo, Yanwen; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
The problem of reconstructing spatially-varying BRDFs from RGB images has been studied for decades. Researchers have faced a dilemma: either higher quality at the inconvenience of camera and light calibration, or greater convenience at the expense of compromised quality without complex setups. We address this challenge by introducing a two-branch network to learn the lighting effects in images. The two branches, referred to as Light-known and Light-aware, differ in their need for light information. The Light-aware branch is guided by the Light-known branch to acquire the knowledge of discerning light effects and surface reflectance properties, but without relying on light positions. Both branches are trained on the synthetic dataset, but during testing on real-world cases without calibration, only the Light-aware branch is activated. To make more effective use of varied lighting conditions, we employ gated recurrent units (GRUs) to fuse the features extracted from different images (see the sketch after this listing). The two modules mutually benefit when multiple inputs are provided. We present our reconstructed results on both synthetic and real-world examples, demonstrating high quality while remaining lightweight in comparison to previous methods.

Item: Text2Mat: Generating Materials from Text (The Eurographics Association, 2023)
He, Zhen; Guo, Jie; Zhang, Yan; Tu, Qinghao; Chen, Mufan; Guo, Yanwen; Wang, Pengyu; Dai, Wei; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
Specific materials are often associated with a certain type of object in the real world. They model the way the surface of that object interacts with light and are named after the object type. We observe that the text labels of materials carry high-level semantic information, which can be used as guidance to assist the generation of specific materials. Based on this, we propose Text2Mat, a text-guided material generation framework. To meet the demand for material generation from text descriptions, we construct a large set of PBR materials with specific text labels. Each material contains detailed text descriptions that match its visual appearance. Furthermore, to control the texture and spatial layout of generated materials through text, we introduce texture attribute labels and extra attributes describing regular materials. Using this dataset, we train a specific neural network adapted from Stable Diffusion to achieve text-based material generation. Extensive experiments and rendering results demonstrate that Text2Mat can generate materials whose spatial layout and texture styles correspond closely to the text descriptions.

Item: UTOPIC: Uncertainty-aware Overlap Prediction Network for Partial Point Cloud Registration (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Chen, Zhilei; Chen, Honghua; Gong, Lina; Yan, Xuefeng; Wang, Jun; Guo, Yanwen; Qin, Jing; Wei, Mingqiang; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
High-confidence overlap prediction and accurate correspondences are critical for cutting-edge models to align paired point clouds in a partial-to-partial manner. However, there is inherent uncertainty between the overlapping and non-overlapping regions, which has long been neglected and significantly affects registration performance. Going beyond current practice, we propose a novel uncertainty-aware overlap prediction network, dubbed UTOPIC, to tackle the ambiguous overlap prediction problem; to our knowledge, this is the first work to explicitly introduce overlap uncertainty into point cloud registration. Moreover, we induce the feature extractor to implicitly perceive shape knowledge through a completion decoder, and present a geometric relation embedding for the Transformer to obtain transformation-invariant, geometry-aware feature representations. With the merits of more reliable overlap scores and more precise dense correspondences, UTOPIC can achieve stable and accurate registration results, even for inputs with limited overlapping areas. Extensive quantitative and qualitative experiments on synthetic and real benchmarks demonstrate the superiority of our approach over state-of-the-art methods.
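The GRU-based fusion mentioned in the SVBRDF Reconstruction entry above can be pictured with the small sketch below. The module name and feature dimension are hypothetical; the only point illustrated is that features extracted from an arbitrary number of input photos are folded into a single state by a recurrent unit before the SVBRDF maps are decoded.

```python
import torch
import torch.nn as nn

class GRUFeatureFusion(nn.Module):
    """Illustrative sketch: fuse per-image features with a GRU (assumed sizes)."""
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.gru = nn.GRU(feat_dim, feat_dim, batch_first=True)

    def forward(self, per_image_features: torch.Tensor) -> torch.Tensor:
        # per_image_features: (batch, n_images, feat_dim), one vector per photo.
        _, hidden = self.gru(per_image_features)
        return hidden.squeeze(0)  # (batch, feat_dim): one fused feature per sample

# Hypothetical usage: five photos of the same surface, 256-D feature each.
fusion = GRUFeatureFusion(feat_dim=256)
fused = fusion(torch.randn(1, 5, 256))  # -> shape (1, 256)
```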