Decoupled Space and Time Sampling of Motion and Defocus Blur for Unified Rendering of Transparent and Opaque Objects
dc.contributor.author | Widmer, Sven | en_US |
dc.contributor.author | Wodniok, Dominik | en_US |
dc.contributor.author | Thul, Daniel | en_US |
dc.contributor.author | Guthe, Stefan | en_US |
dc.contributor.author | Goesele, Michael | en_US |
dc.contributor.editor | Eitan Grinspun and Bernd Bickel and Yoshinori Dobashi | en_US |
dc.date.accessioned | 2016-10-11T05:20:55Z | |
dc.date.available | 2016-10-11T05:20:55Z | |
dc.date.issued | 2016 | |
dc.description.abstract | We propose a unified rendering approach that jointly handles motion and defocus blur for transparent and opaque objects at interactive frame rates. Our key idea is to create, in an initial rasterization step, a sampled representation of all parts of the scene geometry that are potentially visible at any point in time during a frame. We store the resulting temporally-varying fragments (t-fragments) in a bounding volume hierarchy, which is rebuilt every frame using a fast spatial-median construction algorithm. This makes our approach suitable for interactive applications with dynamic scenes and animations. Next, we perform spatial sampling to determine all t-fragments that intersect a given viewing ray at any point in time. Viewing rays are sampled according to the lens uv-sampling for depth-of-field effects. In a final temporal sampling step, we evaluate the predetermined viewing ray/t-fragment intersections for one or multiple points in time. This allows us to incorporate all standard shading effects, including transparency. We describe the overall framework, present our GPU implementation, and evaluate our rendering approach with respect to scalability, quality, and performance. | en_US |
dc.description.number | 7 | |
dc.description.sectionheaders | Realistic Rendering | |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.volume | 35 | |
dc.identifier.doi | 10.1111/cgf.13041 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.pages | 441-450 | |
dc.identifier.uri | https://doi.org/10.1111/cgf.13041 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.1111/cgf13041 | |
dc.publisher | The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.subject | I.3.6 [Computer Graphics] | |
dc.subject | Methodology and Techniques | |
dc.subject | Graphics data structures and data types | |
dc.subject | I.3.7 [Computer Graphics] | |
dc.subject | Three Dimensional Graphics and Realism | |
dc.subject | Raytracing | |
dc.title | Decoupled Space and Time Sampling of Motion and Defocus Blur for Unified Rendering of Transparent and Opaque Objects | en_US |