Browsing by Author "Xu, Zilin"
Now showing 1 - 2 of 2
Item Ray-aligned Occupancy Map Array for Fast Approximate Ray Tracing (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Zeng, Zheng; Xu, Zilin; Wang, Lu; Wu, Lifan; Yan, Ling-Qi; Ritschel, Tobias; Weidlich, Andrea
We present a new software ray tracing solution that efficiently computes visibilities in dynamic scenes. We first introduce a novel scene representation: the ray-aligned occupancy map array (ROMA), generated by rasterizing the dynamic scene once per frame. Our key contribution is a fast, low-divergence tracing method that computes visibilities in constant time, without constructing or traversing traditional intersection acceleration structures such as BVHs. To further improve accuracy and alleviate aliasing, we use a spatiotemporal scheme to stochastically distribute the candidate ray samples. We demonstrate the practicality of our method by integrating it into a modern real-time renderer and showing better performance than existing techniques based on distance fields (DFs). Our method is free of the typical artifacts caused by incomplete scene information, and it is about 2.5×-10× faster than generating and tracing DFs at the same resolution and equal storage. (A code sketch of the constant-time occupancy test appears after this listing.)

Item Unsupervised Image Reconstruction for Gradient-Domain Volumetric Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Xu, Zilin; Sun, Qiang; Wang, Lu; Xu, Yanning; Wang, Beibei; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
Gradient-domain rendering can greatly improve the convergence of light transport simulation by exploiting smoothness in image space. These methods generate image gradients and solve an image reconstruction problem from the rendered image and the gradient images (the standard formulation is sketched after this listing). Recently, gradient-domain volumetric photon density estimation was proposed for homogeneous participating media. Its image reconstruction, however, relies on traditional L1 reconstruction, which leads to obvious artifacts when only a few rendering passes are performed. Deep-learning-based reconstruction methods have been exploited for surface rendering, but they are not suitable for volume density estimation. In this paper, we propose an unsupervised neural network for image reconstruction in gradient-domain volumetric photon density estimation, more specifically volumetric photon mapping, using a variant of GradNet with an encoded shift connection and a separate auxiliary-feature branch that takes volume-based auxiliary features such as transmittance and photon density. Our network smooths the image at a global scale while preserving high-frequency details at a small scale. We demonstrate that our network produces higher-quality results than previous work. Although we only consider volumetric photon mapping, it is straightforward to extend our method to other estimators, such as beam radiance estimation.
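
To make the constant-time visibility claim of the first item (ROMA) concrete, here is a minimal sketch of the underlying idea for a single occupancy-map direction: if each texel's depth column is packed into an integer bitmask, testing a ray segment reduces to one bitwise AND against a range mask, with no BVH traversal. This is an illustrative simplification, not the paper's implementation; the full ROMA rasterizes an array of such maps aligned to many ray directions each frame and distributes candidate samples spatiotemporally. The names below (make_range_mask, visible, occ) are hypothetical.

```python
# Minimal sketch (not the paper's implementation): constant-time visibility
# along one occupancy-map direction. Assumes a W x H map whose D depth cells
# per texel are packed into a single integer bitmask occ[y][x], with bit k
# set when depth slice k contains geometry.

def make_range_mask(z0: int, z1: int) -> int:
    """Bitmask with bits z0..z1-1 set: the depth interval the ray crosses."""
    return ((1 << z1) - 1) & ~((1 << z0) - 1)

def visible(occ, x: int, y: int, z0: int, z1: int) -> bool:
    """True if the depth interval [z0, z1) at texel (x, y) is empty.
    A single AND against the packed column: O(1) per query."""
    return (occ[y][x] & make_range_mask(z0, z1)) == 0

# Toy usage: one texel, 32 depth slices, geometry occupying slices 10..12.
occ = [[0b111 << 10]]
print(visible(occ, 0, 0, 0, 10))   # True:  segment ends before the blocker
print(visible(occ, 0, 0, 5, 20))   # False: segment crosses occupied slices
```

A ray that is not aligned with this map's depth axis would first be matched to the best-aligned map in the array, which is where the per-frame rasterization of multiple ray-aligned maps comes in.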
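
For the second item, it helps to recall the reconstruction problem that the proposed network replaces. In gradient-domain rendering, the final image is typically recovered from the noisy primal image and the estimated gradients by solving a screened Poisson problem. The formulation below is the standard one from the gradient-domain rendering literature, not taken verbatim from this paper; the abstract's "L1 reconstruction" corresponds to p = 1.

```latex
% Screened Poisson reconstruction (standard gradient-domain formulation).
% I_p : noisy primal image,  G_x, G_y : estimated gradient images,
% D_x, D_y : finite-difference operators,  \alpha : primal-fidelity weight.
I^{\star} \;=\; \arg\min_{I}\;
      \alpha^{p}\,\lVert I - I_{p} \rVert_{p}^{p}
    + \lVert D_{x} I - G_{x} \rVert_{p}^{p}
    + \lVert D_{y} I - G_{y} \rVert_{p}^{p}
```

With p = 2 this is a linear least-squares system; p = 1 is more robust to gradient outliers but, as the abstract notes, still leaves visible artifacts at low pass counts, which motivates replacing the solver with an unsupervised learned reconstruction.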