41-Issue 4
Browsing 41-Issue 4 by Subject "Neural networks"
Item
Automatic Feature Selection for Denoising Volumetric Renderings
(The Eurographics Association and John Wiley & Sons Ltd., 2022) Zhang, Xianyao; Ott, Melvin; Manzi, Marco; Gross, Markus; Papas, Marios; Ghosh, Abhijeet; Wei, Li-Yi

We propose a method for constructing feature sets that significantly improve the quality of neural denoisers for Monte Carlo renderings with volumetric content. Starting from a large set of hand-crafted features, we propose a feature selection process to identify significantly pruned, near-optimal subsets. While a naive approach would require training and testing a separate denoiser for every possible feature combination, our selection process requires training only a single probe denoiser for the selection task. Moreover, our approximate solution has an asymptotic complexity that is quadratic in the number of features, compared to the exponential complexity of the naive approach, while still producing near-optimal solutions. We demonstrate the usefulness of our approach on various state-of-the-art denoising methods for volumetric content. We observe improvements in denoising quality when using our automatically selected feature sets over the hand-crafted sets proposed by the original methods.

Item
Deep Flow Rendering: View Synthesis via Layer-aware Reflection Flow
(The Eurographics Association and John Wiley & Sons Ltd., 2022) Dai, Pinxuan; Xie, Ning; Ghosh, Abhijeet; Wei, Li-Yi

Novel view synthesis (NVS) generates images from unseen viewpoints based on a set of input images. It is challenging because of inaccurate lighting optimization and geometry inference. Although current neural rendering methods have made significant progress, they still struggle to reconstruct global illumination effects such as reflections, and they exhibit ambiguous blurs in highly view-dependent areas. This work addresses high-quality view synthesis with an emphasis on reflections on non-concave surfaces.
We propose Deep Flow Rendering, which optimizes direct and indirect lighting separately, leveraging texture mapping, appearance flow, and neural rendering. A learnable texture is used to predict view-independent features while enabling efficient reflection extraction. To accurately fit view-dependent effects, we adopt a constrained neural flow to transfer image-space features from nearby views to the target view in an edge-preserving manner. We then implement a fusing renderer that combines the predictions of both layers to form the output image. Experiments demonstrate that our method outperforms state-of-the-art methods at synthesizing various scenes with challenging reflection effects.
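The complexity claim in the first abstract above (Automatic Feature Selection for Denoising Volumetric Renderings) — quadratic in the number of features rather than exponential — can be illustrated with a generic greedy backward-elimination loop. This is only a sketch of the counting argument, not the paper's method: `score` is a hypothetical stand-in for the probe-denoiser-based quality estimate the authors actually use.

```python
def greedy_select(all_features, score):
    """Greedily drop the least useful feature while quality does not degrade.

    Each pass over the current subset costs at most n `score` evaluations,
    and at most n passes can occur, so this evaluates O(n^2) subsets
    instead of the 2^n required by exhaustive search.
    """
    selected = list(all_features)
    best = score(selected)
    improved = True
    while improved and len(selected) > 1:
        improved = False
        for f in list(selected):
            trial = [g for g in selected if g != f]
            s = score(trial)
            if s >= best:  # dropping f does not hurt denoising quality
                best, selected, improved = s, trial, True
                break
    return selected
```

Like all greedy pruning, this returns a near-optimal rather than guaranteed-optimal subset, which matches the approximate-solution framing in the abstract.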
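The appearance-flow step in the Deep Flow Rendering abstract — transferring image-space features from a nearby view to the target view — centers on warping a feature map with a dense 2-D flow field. A minimal numpy sketch of that warp, assuming bilinear sampling and illustrative names and shapes (not the paper's implementation):

```python
import numpy as np

def warp_features(src, flow):
    """Bilinearly sample `src` (H, W, C) at positions offset by `flow` (H, W, 2).

    For each target pixel, `flow` gives the (dx, dy) offset of the location
    to read from in the source view's feature map.
    """
    H, W, _ = src.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    sample_x = np.clip(xs + flow[..., 0], 0, W - 1)
    sample_y = np.clip(ys + flow[..., 1], 0, H - 1)
    # Integer corners and fractional weights for bilinear interpolation.
    x0 = np.floor(sample_x).astype(int); x1 = np.minimum(x0 + 1, W - 1)
    y0 = np.floor(sample_y).astype(int); y1 = np.minimum(y0 + 1, H - 1)
    wx = (sample_x - x0)[..., None]
    wy = (sample_y - y0)[..., None]
    top = src[y0, x0] * (1 - wx) + src[y0, x1] * wx
    bot = src[y1, x0] * (1 - wx) + src[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

In a learned setting the flow field itself is predicted by a network; constraining and regularizing that prediction is what the abstract refers to as edge-preserving transfer.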