A Directional Occlusion Shading Model for Interactive Direct Volume Rendering

dc.contributor.author: Schott, Mathias
dc.contributor.author: Pegoraro, Vincent
dc.contributor.author: Hansen, Charles
dc.contributor.author: Boulanger, Kévin
dc.contributor.author: Bouatouch, Kadi
dc.contributor.editor: H.-C. Hege, I. Hotz, and T. Munzner
dc.date.accessioned: 2014-02-21T19:50:48Z
dc.date.available: 2014-02-21T19:50:48Z
dc.date.issued: 2009
dc.description.abstract: Volumetric rendering is widely used to examine 3D scalar fields from CT/MRI scanners and numerical simulation datasets. One key aspect of volumetric rendering is the ability to provide perceptual cues that aid in understanding the structure contained in the data. While shading models that reproduce natural lighting conditions have been shown to better convey depth information and spatial relationships, they traditionally require considerable (pre)computation. In this paper, a shading model for interactive direct volume rendering is proposed that provides perceptual cues similar to those of ambient occlusion, for both solid and transparent surface-like features. An image-space occlusion factor is derived from the radiative transport equation based on a specialized phase function. The method does not rely on any precomputation and thus allows for interactive exploration of volumetric datasets via on-the-fly editing of the shading model parameters or (multi-dimensional) transfer functions, while modifications to the volume via clipping planes are incorporated into the resulting occlusion-based shading.
dc.description.number: 3
dc.description.seriesinformation: Computer Graphics Forum
dc.description.volume: 28
dc.identifier.doi: 10.1111/j.1467-8659.2009.01464.x
dc.identifier.issn: 1467-8659
dc.identifier.uri: https://doi.org/10.1111/j.1467-8659.2009.01464.x
dc.publisher: The Eurographics Association and Blackwell Publishing Ltd.
dc.title: A Directional Occlusion Shading Model for Interactive Direct Volume Rendering
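A minimal CPU-side sketch of the kind of slice-based occlusion accumulation the abstract describes is given below. It is not the authors' GPU implementation: the names (composite_with_directional_occlusion, transfer_function, cone_radius) are illustrative, the uniform blur merely stands in for the specialized cone-shaped phase function of the paper, and the loop assumes view-aligned slices traversed front to back with the light at the eye.

    # Illustrative sketch only, assuming view-aligned front-to-back slices;
    # not the authors' GPU implementation.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def composite_with_directional_occlusion(volume, transfer_function, cone_radius=3):
        depth, height, width = volume.shape
        color_buf = np.zeros((height, width, 3))
        alpha_buf = np.zeros((height, width))
        occlusion = np.ones((height, width))          # 1 = fully unoccluded

        for z in range(depth):                        # front-to-back traversal
            rgba = transfer_function(volume[z])       # (H, W, 4): color + opacity
            rgb, alpha = rgba[..., :3], rgba[..., 3]

            # Shade the slice by the light arriving through the occlusion cone.
            shaded = rgb * occlusion[..., None]

            # Standard front-to-back "over" compositing.
            weight = (1.0 - alpha_buf) * alpha
            color_buf += weight[..., None] * shaded
            alpha_buf += weight

            # Attenuate and spread the occlusion buffer; the blur approximates
            # the backward-peaked cone phase function for the next slice.
            occlusion *= (1.0 - alpha)
            occlusion = uniform_filter(occlusion, size=2 * cone_radius + 1)

        return color_buf, alpha_buf

In the paper the corresponding accumulation runs on the GPU during slice-based rendering, which is what enables interactive updates of transfer functions and clipping planes without precomputation; the NumPy loop above only illustrates the data flow.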