Browsing by Author "Liu, Yifan"
Now showing 1 - 4 of 4
Item: Adaptive BRDF-Oriented Multiple Importance Sampling of Many Lights (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Liu, Yifan; Xu, Kun; Yan, Ling-Qi
Editors: Boubekeur, Tamy; Sen, Pradeep
Abstract: Many-light rendering is becoming more common and important as rendered scenes grow in complexity. However, state-of-the-art algorithms for computing illumination under many lights remain far from efficient, because they consider light sampling and BRDF sampling separately. To address this inefficiency, we present a novel light sampling method, BRDF-oriented light sampling, which selects lights based on importance values estimated from the BRDF's contributions. Our BRDF-oriented light sampling method works naturally with MIS and allows us to dynamically determine the number of samples allocated to the different sampling techniques. With our method, we achieve significantly faster convergence to ground-truth results, both perceptually and numerically, compared with previous many-light rendering algorithms.

Item: An Improved Geometric Approach for Palette-based Image Decomposition and Recoloring (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Wang, Yili; Liu, Yifan; Xu, Kun
Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon
Abstract: Palette-based image decomposition has attracted increasing attention in recent years. A specific class of approaches based on RGB-space geometry constructs convex hulls whose vertices act as palette colors. However, such palettes are not guaranteed to contain the representative colors that actually appear in the image, which makes editing palette colors for recoloring less intuitive and less predictable. We therefore propose an improved geometric approach to address this issue: we represent the color palette with a polyhedron, but not necessarily a convex hull, in RGB space.
We then formulate palette extraction as an optimization problem that can be solved in a few seconds. Our palette has a higher degree of representativeness and maintains a similar level of accuracy compared with previous methods. For layer decomposition, we compute layer opacities via simple mean value coordinates, which gives instant feedback without precomputation. We demonstrate our method for image recoloring on a variety of examples. Compared with state-of-the-art work, our approach is generally more intuitive and efficient, with fewer artifacts.

Item: Learning Style Compatibility Between Objects in a Real-World 3D Asset Database (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Liu, Yifan; Tang, Ruolan; Ritchie, Daniel
Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon
Abstract: Large 3D asset databases are critical for designing virtual worlds, and using them effectively requires techniques for efficient querying and navigation. One important form of query is search by style compatibility: given a query object, find others that would be visually compatible if used in the same scene. In this paper, we present a scalable, learning-based approach to this problem designed for use with real-world 3D asset databases; we conduct experiments on 121 3D asset packages containing around 4,000 3D objects from the Unity Asset Store. By leveraging the structure of the object packages, we introduce a technique for synthesizing training labels for metric learning that work as well as human labels. These labels can grow exponentially with the number of objects, allowing our approach to scale to large real-world 3D asset databases without the need for expensive human training labels.
We use these synthetic training labels in a metric learning model that analyzes the in-engine rendered appearance of an object, combining geometry, material, and texture, whereas prior work considers only object geometry, or disjoint geometry and texture features. Through an ablation experiment, we find that this representation yields better results than renders that lack texture, material, or both.

Item: SVBRDF Reconstruction by Transferring Lighting Knowledge (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Authors: Zhu, Pengfei; Lai, Shuichang; Chen, Mufan; Guo, Jie; Liu, Yifan; Guo, Yanwen
Editors: Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
Abstract: The problem of reconstructing spatially-varying BRDFs from RGB images has been studied for decades. Researchers have faced a dilemma: opt for higher quality at the inconvenience of camera and light calibration, or for greater convenience at the expense of quality. We address this challenge by introducing a two-branch network that learns the lighting effects in images. The two branches, referred to as Light-known and Light-aware, differ in their need for light information. The Light-aware branch is guided by the Light-known branch to acquire the knowledge of discerning light effects and surface reflectance properties, but without relying on light positions. Both branches are trained on a synthetic dataset, but during testing on real-world cases without calibration, only the Light-aware branch is activated. To make more effective use of varied lighting conditions, we employ gated recurrent units (GRUs) to fuse the features extracted from different images; the two modules mutually benefit when multiple inputs are provided. We present reconstruction results on both synthetic and real-world examples, demonstrating high quality while remaining lightweight compared with previous methods.
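The first item combines light sampling and BRDF sampling via multiple importance sampling. The paper's estimator operates over many lights, but the core MIS machinery can be illustrated with a toy 1D analogue. This sketch is not the paper's method: the two pdfs below are hypothetical stand-ins for "sampling proportional to the light term" and "sampling proportional to the BRDF term", combined with the standard balance heuristic.

```python
import random

# Hypothetical 1D pdfs standing in for the two sampling strategies.
def sample_light():              # draws x with pdf p_light(x) = 2x on [0, 1]
    return random.random() ** 0.5

def sample_brdf():               # draws x with pdf p_brdf(x) = 3x^2 on [0, 1]
    return random.random() ** (1.0 / 3.0)

def p_light(x):
    return 2.0 * x

def p_brdf(x):
    return 3.0 * x * x

def integrand(x):                # "light term" times "BRDF term"
    return (2.0 * x) * (3.0 * x * x)   # integrates to 1.5 on [0, 1]

def mis_estimate(n_light, n_brdf):
    """Balance-heuristic MIS: with w_t = n_t * p_t / sum_j n_j * p_j, each
    sample's contribution simplifies to f(x) / sum_j n_j * p_j(x)."""
    total = 0.0
    for sampler, n in ((sample_light, n_light), (sample_brdf, n_brdf)):
        for _ in range(n):
            x = sampler()
            total += integrand(x) / (n_light * p_light(x) + n_brdf * p_brdf(x))
    return total
```

Because the balance-heuristic weight cancels against each technique's own pdf, the per-sample contribution is simply the integrand divided by the sample-count-weighted mixture density, which is why the estimator stays unbiased for any split of samples between the two techniques.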
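The second item computes layer opacities via mean value coordinates with respect to a palette polyhedron in 3D RGB space. As a sketch of the idea only, the snippet below implements the classic 2D mean value coordinates for a point strictly inside a polygon (the 3D polyhedral formula used in such pipelines is analogous but more involved); the barycentric weights it produces are the kind of quantity that would act as per-vertex layer opacities.

```python
import math

def mean_value_coords(p, verts):
    """2D mean value coordinates of point p w.r.t. a CCW polygon `verts`.
    p must be strictly inside the polygon (distances to vertices nonzero)."""
    n = len(verts)
    # Distance from p to each vertex.
    dist = [math.hypot(v[0] - p[0], v[1] - p[1]) for v in verts]
    # Signed angle at p subtended by each edge (v_i, v_{i+1}).
    ang = []
    for i in range(n):
        a, b = verts[i], verts[(i + 1) % n]
        va = (a[0] - p[0], a[1] - p[1])
        vb = (b[0] - p[0], b[1] - p[1])
        cross = va[0] * vb[1] - va[1] * vb[0]
        dot = va[0] * vb[0] + va[1] * vb[1]
        ang.append(math.atan2(cross, dot))
    # w_i = (tan(a_{i-1}/2) + tan(a_i/2)) / |v_i - p|, then normalize.
    w = [(math.tan(ang[i - 1] / 2.0) + math.tan(ang[i] / 2.0)) / dist[i]
         for i in range(n)]
    s = sum(w)
    return [wi / s for wi in w]
```

Mean value coordinates have linear precision, so weighting the vertices by the returned coordinates reconstructs the query point exactly; this closed-form evaluation is what makes per-pixel opacity computation feasible without any precomputation.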
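The third item synthesizes metric-learning labels from asset-package structure. A minimal sketch of that supervision signal, under the assumption (mirroring the abstract at toy scale) that objects from the same package are style-compatible and objects from different packages are not: enumerate (anchor, positive, negative) triplets from package membership and score an embedding with a standard hinge triplet loss. The function names and the margin value are illustrative, not the paper's.

```python
import itertools

def synthesize_triplets(packages):
    """packages: dict mapping package name -> list of object ids.
    Anchor/positive come from the same package, negative from any other,
    so the number of triplets grows combinatorially with database size."""
    triplets = []
    for name, objs in packages.items():
        negatives = [o for other, group in packages.items()
                     if other != name for o in group]
        for anchor, positive in itertools.permutations(objs, 2):
            for negative in negatives:
                triplets.append((anchor, positive, negative))
    return triplets

def triplet_loss(emb, anchor, positive, negative, margin=0.2):
    """Hinge triplet loss over a dict of embedding vectors: the anchor should
    sit closer to the positive than to the negative by at least `margin`."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(emb[a], emb[b]))
    return max(0.0, d2(anchor, positive) - d2(anchor, negative) + margin)
```

An embedding that clusters each package's objects together drives every synthesized triplet's loss to zero, which is the training objective the synthetic labels stand in for.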
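The fourth item fuses features from a variable number of input images with GRUs. As a rough sketch of why a recurrent cell suits this role, the toy class below (random untrained weights, hypothetical dimensions, not the paper's network) runs a single GRU cell over a sequence of per-image feature vectors and keeps the final hidden state, yielding one fixed-size fused feature regardless of how many images are supplied.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUFuser:
    """Fuses a variable-length list of feature vectors into one fixed-size
    vector by iterating a GRU cell and returning the final hidden state."""

    def __init__(self, feat_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        scale = 0.1
        self.Wz, self.Wr, self.Wh = (
            rng.normal(0.0, scale, (hidden_dim, feat_dim)) for _ in range(3))
        self.Uz, self.Ur, self.Uh = (
            rng.normal(0.0, scale, (hidden_dim, hidden_dim)) for _ in range(3))
        self.hidden_dim = hidden_dim

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h)          # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h)          # reset gate
        n = np.tanh(self.Wh @ x + self.Uh @ (r * h))    # candidate state
        return (1.0 - z) * n + z * h                    # new hidden state

    def fuse(self, features):
        h = np.zeros(self.hidden_dim)
        for x in features:
            h = self.step(np.asarray(x, dtype=float), h)
        return h
```

Because the same cell is applied at every step, one image or ten produce a hidden state of identical shape, which is the property that lets such a network accept an arbitrary number of uncalibrated input photographs.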