High-Performance Graphics 2013
Browsing High-Performance Graphics 2013 by Subject "Color"
Now showing 1 - 3 of 3
Item: Imperfect Voxelized Shadow Volumes (ACM, 2013)
Authors: Wyman, Chris; Dai, Zeng
Editors: Kayvon Fatahalian and Christian Theobalt
Voxelized shadow volumes [Wyman 2011] provide a discretized, view-dependent representation of shadow volumes, but are limited to point or directional lights. We extend them to support dynamic volumetric visibility from area light sources using imperfect shadow volumes, and show that coarser visibility sampling suffices for area lights. Combining this coarser resolution with parallel shadow volume construction enables interactive rendering of dynamic volumetric shadows from area lights in homogeneous single-scattering media, at under 4x the cost of hard volumetric shadows.

Item: Real-time Local Displacement using Dynamic GPU Memory Management (ACM, 2013)
Authors: Schäfer, Henry; Keinert, Benjamin; Stamminger, Marc
Editors: Kayvon Fatahalian and Christian Theobalt
We propose a novel method for local displacement events in large scenes, such as scratches, footsteps, or sculpting operations. Deformations are stored as displacements for vertices generated by hardware tessellation. Adaptive mesh refinement, application of the displacements, and all involved memory management happen entirely on the GPU. We show various extensions of our approach, such as on-the-fly normal computation and multi-resolution editing. In typical game scenes we perform local deformations at arbitrary positions in far less than one millisecond, which makes the method particularly well suited for games and interactive sculpting applications.

Item: Screen-Space Far-Field Ambient Obscurance (ACM, 2013)
Author: Timonen, Ville
Editors: Kayvon Fatahalian and Christian Theobalt
Ambient obscurance (AO) is an effective approximation of global illumination, and its screen-space (SSAO) variants, which operate on depth buffers alone, are widely used in real-time applications. We present an SSAO method that allows the obscurance effect at each pixel to be determined from the entire depth buffer. Our contribution is twofold: first, we build an obscurance estimator that accurately converges to ray-traced reference results on the same screen-space geometry; second, we generate an intermediate representation of the depth field that, when sampled, gives the local peaks of the geometry from the receiver's point of view. Only a small number of such samples is required to capture AO effects without the undersampling artefacts that plague previous methods. Our method is unaffected by the radius of the AO effect or by the complexity of the falloff function, and produces results within a few percent of a ray-traced screen-space reference at constant real-time frame rates.
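The last abstract's core idea, screen-space obscurance as falloff-weighted occlusion gathered from the depth field around a receiver, can be illustrated with a minimal sketch. This is a generic toy on a 1D height field, not the paper's estimator or its intermediate depth-field representation; the function names, the quadratic falloff, and the `radius` parameter are all illustrative assumptions.

```python
import math

def falloff(distance, radius):
    # Illustrative quadratic falloff: full weight at the receiver,
    # fading to zero at the sampling radius.
    t = min(distance / radius, 1.0)
    return max(0.0, 1.0 - t * t)

def ambient_obscurance(height, x, radius=4):
    """Toy 1D screen-space obscurance (hypothetical, for illustration):
    each neighbour that rises above the receiver occludes it, scaled by
    its elevation angle and attenuated by a distance falloff."""
    receiver = height[x]
    occlusion = 0.0
    samples = 0
    for dx in range(-radius, radius + 1):
        if dx == 0:
            continue
        nx = x + dx
        if not (0 <= nx < len(height)):
            continue
        samples += 1
        rise = height[nx] - receiver
        if rise > 0.0:
            # Elevation angle of the occluder, normalised to [0, 1].
            angle = math.atan2(rise, abs(dx)) / (math.pi / 2)
            occlusion += angle * falloff(abs(dx), radius)
    return occlusion / samples if samples else 0.0

# A receiver at the bottom of a pit is heavily obscured; on a flat
# field nothing rises above the receiver, so obscurance is zero.
flat = [1.0] * 9
pit = [2.0, 2.0, 2.0, 2.0, 0.0, 2.0, 2.0, 2.0, 2.0]
print(ambient_obscurance(flat, 4), ambient_obscurance(pit, 4))
```

The sketch also hints at the problem the paper addresses: with a fixed small `radius`, far-field occluders are never sampled, which is why gathering from the entire depth buffer (as the abstract describes) matters.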