Volume 30 (2011)
Browsing Volume 30 (2011) by Issue Date
Now showing 1 - 20 of 236
Item Illustrative Visualization of a Vortex Breakdown Bubble (The Eurographics Association and Blackwell Publishing Ltd., 2011) Hummel, Mathias; Garth, Christoph; Hamann, Bernd; Hagen, Hans; Joy, Kenneth I.; Eduard Groeller and Holly Rushmeier

Item An Evaluation of Visualization Techniques to Illustrate Statistical Deformation Models (The Eurographics Association and Blackwell Publishing Ltd., 2011) Caban, Jesus J.; Rheingans, Penny; Yoo, T.; H. Hauser, H. Pfister, and J. J. van Wijk
As collections of 2D/3D images continue to grow, interest in effective ways to visualize and explore the statistical morphological properties of a group of images has surged. Recently, deformation models have emerged as simple methods to capture the variability and statistical properties of a collection of images. Such models have proven to be effective in tasks such as image classification, generation, registration, segmentation, and analysis of modes of variation. A crucial element missing from most statistical models has been an effective way to summarize and visualize the statistical morphological properties of a group of images. This paper evaluates different visualization techniques that can be extended and used to illustrate the information captured by such statistical models. First, four illustration techniques are described as methods to summarize the statistical morphological properties as captured by deformation models. Second, results of a user study conducted to compare the effectiveness of each visualization technique are presented. After comparing the performance of 40 subjects, we found that statistical annotation techniques present significant benefits when analyzing the structural properties of a group of images.

Item ManyLoDs: Parallel Many-View Level-of-Detail Selection for Real-Time Global Illumination (The Eurographics Association and Blackwell Publishing Ltd., 2011) Holländer, Matthias; Ritschel, Tobias; Eisemann, Elmar; Boubekeur, Tamy; Ravi Ramamoorthi and Erik Reinhard
Level-of-Detail structures are a key component for scalable rendering. Built from raw 3D data, these structures are often defined as Bounding Volume Hierarchies, providing coarse-to-fine adaptive approximations that are well-adapted for many-view rasterization. Here, the total number of pixels in each view is usually low, while the cost of choosing the appropriate LoD for each view is high. This task represents a challenge for existing GPU algorithms. We propose ManyLoDs, a new GPU algorithm to efficiently compute many LoDs from a Bounding Volume Hierarchy in parallel by balancing the workload within and among LoDs. Our approach is not specific to a particular rendering technique, can be used on lazy representations such as polygon soups, and can handle dynamic scenes. We apply our method to various many-view rasterization applications, including Instant Radiosity, Point-Based Global Illumination, and reflection/refraction mapping. For each of these, we achieve real-time performance in complex scenes at high resolutions.
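The abstract above describes selecting a level of detail per view as a cut through a Bounding Volume Hierarchy. As a rough CPU-side illustration of that selection criterion only (not the paper's parallel GPU algorithm), the following sketch descends a BVH until a node's projected size drops below a pixel threshold; the node layout, the bounding-sphere size test, and all names are assumptions introduced here.

```python
# Minimal CPU sketch of per-view LoD selection as a cut through a BVH.
# Not the ManyLoDs GPU algorithm; the node layout, projected-size test,
# and pixel threshold are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import List
import math

@dataclass
class BVHNode:
    center: tuple                      # bounding-sphere center (x, y, z)
    radius: float                      # bounding-sphere radius
    children: List["BVHNode"] = field(default_factory=list)

def projected_size(node: BVHNode, view_pos: tuple, fov_scale: float) -> float:
    """Approximate on-screen size (in pixels) of the node's bounding sphere."""
    d = math.dist(node.center, view_pos)
    return fov_scale * node.radius / max(d, 1e-6)

def select_lod_cut(root: BVHNode, view_pos: tuple,
                   fov_scale: float, pixel_threshold: float = 1.0) -> List[BVHNode]:
    """Descend until a node projects to <= pixel_threshold pixels (or is a leaf);
    the returned front of nodes is the LoD used for this particular view."""
    cut, stack = [], [root]
    while stack:
        node = stack.pop()
        if not node.children or projected_size(node, view_pos, fov_scale) <= pixel_threshold:
            cut.append(node)               # coarse enough for this view
        else:
            stack.extend(node.children)    # refine further
    return cut
```

In the paper this traversal is performed for many views in parallel on the GPU with workload balancing; the sketch only shows the sequential selection logic.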
Item Report of The Statutory Auditors to the General Meeting of The Members Of Eurographics Association Geneva (The Eurographics Association and Blackwell Publishing Ltd., 2011) Eduard Groeller and Holly Rushmeier

Item A Visual Analytics Approach for Peak-Preserving Prediction of Large Seasonal Time Series (The Eurographics Association and Blackwell Publishing Ltd., 2011) Hao, M. C.; Janetzko, H.; Mittelstädt, S.; Hill, W.; Dayal, U.; Keim, D. A.; Marwah, M.; Sharma, R. K.; H. Hauser, H. Pfister, and J. J. van Wijk
Time series prediction methods are used on a daily basis by analysts for making important decisions. Most of these methods use some variant of moving averages to reduce the number of data points before prediction. However, to reach a good prediction in certain applications (e.g., power consumption time series in data centers) it is important to preserve peaks and their patterns. In this paper, we introduce automated peak-preserving smoothing and prediction algorithms, enabling reliable long-term prediction for seasonal data, and combine them with an advanced visual interface: (1) using high-resolution cell-based time series to explore seasonal patterns, (2) adding new visual interaction techniques (multi-scaling, slider, and brushing & linking) to incorporate human expert knowledge, and (3) providing both new visual accuracy color indicators for validating the predicted results and certainty bands communicating the uncertainty of the prediction. We have integrated these techniques into a well-fitted solution to support the prediction process, and applied and evaluated the approach to predict both power consumption and server utilization in data centers with 70-80% accuracy.
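The abstract does not spell out the algorithm, so the following is only a loose illustration of what "peak-preserving smoothing" can mean: a moving average whose flattened local maxima are reinjected, followed by a naive seasonal forecast that repeats the last smoothed period. The window size, peak rule, and forecast scheme are assumptions, not the published method.

```python
import numpy as np

def peak_preserving_smooth(y: np.ndarray, window: int = 24) -> np.ndarray:
    """Moving-average smoothing that keeps local peaks.

    Illustrative only: the window and the rule "a local maximum lying above
    the smoothed baseline is kept unchanged" are assumptions.
    """
    kernel = np.ones(window) / window
    smooth = np.convolve(y, kernel, mode="same")
    out = smooth.copy()
    for i in range(1, len(y) - 1):
        if y[i] >= y[i - 1] and y[i] >= y[i + 1] and y[i] > smooth[i]:
            out[i] = y[i]                  # re-insert the peak sample
    return out

def seasonal_forecast(y: np.ndarray, period: int, horizon: int,
                      window: int = 5) -> np.ndarray:
    """Naive seasonal prediction: repeat the last smoothed period (assumes len(y) >= period)."""
    template = peak_preserving_smooth(y, window)[-period:]
    reps = int(np.ceil(horizon / period))
    return np.tile(template, reps)[:horizon]
```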
Item Anatomy-Guided Multi-Level Exploration of Blood Flow in Cerebral Aneurysms (The Eurographics Association and Blackwell Publishing Ltd., 2011) Neugebauer, Mathias; Janiga, Gabor; Beuing, Oliver; Skalej, Martin; Preim, Bernhard; H. Hauser, H. Pfister, and J. J. van Wijk
For cerebral aneurysms, the ostium, the area of inflow, is an important anatomic landmark, since it separates the pathological vessel deformation from the healthy parent vessel. A better understanding of the inflow characteristics, the flow inside the aneurysm, and the overall change of pre- and post-aneurysm flow in the parent vessel provides insights for medical research and the development of new risk-reduced treatment options. We present an approach for a qualitative, visual flow exploration that incorporates the ostium and derived anatomical landmarks. It is divided into three scopes: a global scope for exploration of the in- and outflow, an ostium scope that provides characteristics of the flow profile close to the ostium, and a local scope for a detailed exploration of the flow in the parent vessel and the aneurysm. The approach was applied to five representative datasets, including measured and simulated blood flow. Informal interviews with two board-certified radiologists confirmed the usefulness of the provided exploration tools and delivered input for the integration of the ostium-based flow analysis into the overall exploration workflow.

Item A Sparse Parametric Mixture Model for BTF Compression, Editing and Rendering (The Eurographics Association and Blackwell Publishing Ltd., 2011) Wu, Hongzhi; Dorsey, Julie; Rushmeier, Holly; M. Chen and O. Deussen
Bidirectional texture functions (BTFs) represent the appearance of complex materials. Three major shortcomings of BTFs are the bulky storage, the difficulty of editing, and the lack of efficient rendering methods. To reduce storage, many compression techniques have been applied to BTFs, but the results are difficult to edit. To facilitate editing, analytical models have been fit, but at the cost of accuracy of representation for many materials. It becomes even more challenging if efficient rendering is also needed. We introduce a high-quality general representation that is, at once, compact, easily editable, and efficiently renderable. The representation is computed by applying the stagewise Lasso algorithm to search for a sparse set of analytical functions whose weighted sum approximates the input appearance data. We achieve compression rates comparable to a state-of-the-art BTF compression method. We also demonstrate results in BTF editing and rendering.
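The abstract names the stagewise Lasso as the fitting tool. The sketch below shows generic forward-stagewise (epsilon-step) selection of a sparse weighted sum of basis functions, which approximates that idea; the dictionary of "analytical functions", step size, and stopping rule are placeholders, not the paper's BTF-specific setup.

```python
import numpy as np

def forward_stagewise(D: np.ndarray, y: np.ndarray,
                      step: float = 0.01, n_iter: int = 5000) -> np.ndarray:
    """Greedy forward-stagewise regression (an approximation of the Lasso path).

    D : (n_samples, n_basis) dictionary; each column is one analytical basis
        function evaluated at the sample positions (placeholder data layout).
    y : (n_samples,) appearance values to approximate.
    Returns a sparse weight vector w such that D @ w approximates y.
    """
    w = np.zeros(D.shape[1])
    residual = y.astype(float).copy()
    for _ in range(n_iter):
        corr = D.T @ residual                 # correlation of each basis with the residual
        j = int(np.argmax(np.abs(corr)))      # most correlated basis function
        if abs(corr[j]) < 1e-8:
            break                             # nothing left to explain
        delta = step * np.sign(corr[j])
        w[j] += delta                         # take a tiny step toward that function
        residual -= delta * D[:, j]
    return w
```

Because the weights grow in tiny increments, only the few repeatedly selected basis functions end up with non-negligible weights, which is the sparsity the representation relies on.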
Item Functional Webs for Freeform Architecture (The Eurographics Association and Blackwell Publishing Ltd., 2011) Deng, B.; Pottmann, Helmut; Wallner, Johannes; Mario Botsch and Scott Schaefer
Rationalization and construction-aware design dominate the issue of realizability of freeform architecture. The former means the decomposition of an intended shape into parts which are sufficiently simple and efficient to manufacture; the latter refers to a design procedure which already incorporates rationalization. Recent contributions to this topic have been concerned mostly with small-scale parts, for instance with planar faces of meshes. The present paper deals with another important aspect, namely long-range parts and supporting structures. It turns out that from the pure geometry viewpoint this means studying families of curves which cover surfaces in certain well-defined ways. Depending on the application one has in mind, different combinatorial arrangements of curves are required. We here restrict ourselves to so-called hexagonal webs, which correspond to a triangular or tri-hex decomposition of a surface. The individual curves may have certain special properties, like being planar, being a geodesic, or being part of a circle. Each of these properties is motivated by manufacturability considerations and imposes constraints on the shape of the surface. We investigate the available degrees of freedom, show numerical methods of optimization, and demonstrate the effectiveness of our approach and the variability of construction solutions derived from webs by means of actual architectural designs.

Item In-situ Sampling of a Large-Scale Particle Simulation for Interactive Visualization and Analysis (The Eurographics Association and Blackwell Publishing Ltd., 2011) Woodring, Jonathan; Ahrens, J.; Figg, J.; Wendelberger, J.; Habib, S.; Heitmann, K.; H. Hauser, H. Pfister, and J. J. van Wijk
We describe a simulation-time random sampling of a large-scale particle simulation, the RoadRunner Universe MC3 cosmological simulation, for interactive post-analysis and visualization. Simulation data generation rates will continue to be far greater than storage bandwidth rates by many orders of magnitude. This implies that only a very small fraction of data generated by a simulation can ever be stored and subsequently post-analyzed. The limiting factors in this situation are similar to the problem in many population surveys: there are not enough human resources to query a large population. To cope with the lack of resources, statistical sampling techniques are used to create a representative data set of a large population. Following this analogy, we propose to store a simulation-time random sampling of the particle data for post-analysis, with level-of-detail organization, to cope with the bottlenecks. A sample is stored directly from the simulation in a level-of-detail format for post-visualization and analysis, which amortizes the cost of post-processing and reduces workflow time. Additionally, by sampling during the simulation, we are able to analyze the entire particle population to record full population statistics and quantify sample error.
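As a toy illustration of storing a random particle sample organized by level of detail at simulation time, the sketch below assigns each particle an independent random key and buckets it into geometrically growing levels, so that reading any prefix of levels yields a uniform random subsample. The level fractions, doubling factor, and in-memory "write" are assumptions standing in for the paper's I/O from a running cosmological simulation.

```python
import numpy as np

def sample_into_lods(particles: np.ndarray, n_levels: int = 4,
                     base_fraction: float = 0.01, rng=None) -> list:
    """Split a particle array into level-of-detail buckets at "simulation time".

    Level 0 holds roughly base_fraction of all particles, and each further
    level doubles the cumulative fraction, so levels 0..k together always form
    a uniform random subsample of the full population.
    (The fractions and doubling factor are illustrative choices.)
    """
    rng = rng or np.random.default_rng()
    keys = rng.random(len(particles))                  # one random key per particle
    # Cumulative fraction kept after each level: f, 2f, 4f, ...
    cum = [min(1.0, base_fraction * (2 ** k)) for k in range(n_levels)]
    levels, lower = [], 0.0
    for upper in cum:
        mask = (keys >= lower) & (keys < upper)
        levels.append(particles[mask])                 # in practice written to disk here
        lower = upper
    return levels
```

Particles whose key exceeds the last cumulative fraction are simply never stored, which mirrors the point that only a small fraction of the simulated population survives the storage bottleneck.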
Item Computing 3D Shape Guarding and Star Decomposition (The Eurographics Association and Blackwell Publishing Ltd., 2011) Yu, Wuyi; Li, Xin; Bing-Yu Chen, Jan Kautz, Tong-Yee Lee, and Ming C. Lin
This paper proposes an effective framework to compute the visibility guarding and star decomposition of 3D solid shapes. We propose a progressive integer linear programming algorithm to solve for guarding points whose visibility covers the entire shape; we also develop a constrained region-growing scheme seeded on these guarding points to obtain the star decomposition. We demonstrate that this guarding/decomposition framework can benefit graphics tasks such as shape interpolation and shape matching/retrieval.

Item Pre-computed Gathering of Multi-Bounce Glossy Reflections (The Eurographics Association and Blackwell Publishing Ltd., 2011) Laurijssen, Jurgen; Wang, Rui; Lagae, Ares; Dutré, Philip; Eduard Groeller and Holly Rushmeier
Recent work in interactive global illumination addresses diffuse and moderately glossy indirect lighting effects, but high-frequency effects such as multi-bounce reflections on highly glossy surfaces are often ignored. Accurately simulating such effects is important to convey the realistic appearance of materials such as chrome and shiny metal. In this paper, we present an efficient method for visualizing multi-bounce glossy reflections at interactive rates under environment lighting. Our main contribution is a pre-computation-based method which efficiently gathers subsequent highly glossy reflection passes modelled with a non-linear transfer function representation based on the von Mises-Fisher distribution. We show that our gathering method is superior to scattered sampling. To exploit the sparsity of the pre-computed data, we apply perfect spatial hashing. As a result, we are able to visualize multi-bounce glossy reflections at interactive rates at a low pre-computation cost.

Item Predicted Virtual Soft Shadow Maps with High Quality Filtering (The Eurographics Association and Blackwell Publishing Ltd., 2011) Shen, Li; Guennebaud, Gaël; Yang, Baoguang; Feng, Jieqing; M. Chen and O. Deussen
In this paper we present a novel image-based algorithm to render visually plausible anti-aliased soft shadows in a robust and efficient manner. To achieve both high visual quality and high performance, it employs an accurate shadow map filtering method which guarantees smooth penumbrae and high-quality anisotropic anti-aliasing of the sharp transitions. Unlike approaches based on pre-filtering approximations, our approach does not suffer from light bleeding or loss of contact shadows. Discretization artefacts are avoided by creating virtual shadow maps on the fly according to a novel shadow map resolution prediction model. This model takes into account the screen-space frequency of the penumbrae via a perceptual metric which has been directly established from an appropriate user study. Consequently, our algorithm always generates shadow maps with minimal resolutions, enabling high performance while guaranteeing high quality. Thanks to this perceptual model, our algorithm can sometimes be faster at rendering soft shadows than hard shadows. It can render game-like scenes at very high frame rates, and extremely large and complex scenes such as CAD models at interactive rates. In addition, our algorithm is highly scalable, and the quality versus performance trade-off can be easily tweaked.

Item A Single Image Representation Model for Efficient Stereoscopic Image Creation (The Eurographics Association and Blackwell Publishing Ltd., 2011) Kim, Younghui; Jung, Hwi-ryong; Choi, Sungwoo; Lee, Jungjin; Noh, Junyong; Bing-Yu Chen, Jan Kautz, Tong-Yee Lee, and Ming C. Lin
Computer graphics is one of the most efficient ways to create a stereoscopic image. The process of stereoscopic CG generation is, however, still very inefficient compared to that of monoscopic CG generation. Although stereo images are very similar to each other, they are rendered and manipulated independently. Additional requirements for disparity control specific to stereo images lead to even greater inefficiency. This paper proposes a method to reduce the inefficiency that accompanies the creation of a stereoscopic image. The system automatically generates an optimized single image representation of the entire visible area from both cameras. The single image can be easily manipulated with conventional techniques, as it is spatially smooth and maintains the original shapes of scene objects. In addition, a stereo image pair can be easily generated with an arbitrary disparity setting. These convenient and efficient features are achieved by the automatic generation of a stereo camera pair, robust occlusion detection with a pair of Z-buffers, an optimization method for spatial smoothness, and stereo image pair generation with a non-linear disparity adjustment. Experiments show that our technique dramatically improves the efficiency of stereoscopic image creation while preserving the quality of the results.

Item Energy-scale Aware Feature Extraction for Flow Visualization (The Eurographics Association and Blackwell Publishing Ltd., 2011) Pobitzer, A.; Tutkun, M.; Andreassen, Ø.; Fuchs, R.; Peikert, R.; Hauser, H.; H. Hauser, H. Pfister, and J. J. van Wijk
In the visualization of flow simulation data, feature detectors often tend to produce overly rich responses, making some sort of filtering or simplification necessary to convey meaningful images. In this paper we present an approach that builds upon a decomposition of the flow field according to the dynamical importance of different scales of motion energy. Focusing on the high-energy scales leads to a reduction of the flow field while retaining the underlying physical process. The presented method acknowledges the intrinsic structures of the flow according to its energy and therefore allows one to focus on the energetically most interesting aspects of the flow. Our analysis shows that this approach can be used for methods based on both local feature extraction and particle integration, and we provide a discussion of the error caused by the approximation. Finally, we illustrate the use of the proposed approach for both a local and a global feature detector and in the context of numerical flow simulations.

Item Eurographics 2010 Workshop on 3D Object Retrieval (EG 3DOR’10) in cooperation with ACM SIGGRAPH (The Eurographics Association and Blackwell Publishing Ltd., 2011) Daoudi, Mohamed; Schreck, Tobias; Eduard Groeller and Holly Rushmeier

Item Progressive Splatting of Continuous Scatterplots and Parallel Coordinates (The Eurographics Association and Blackwell Publishing Ltd., 2011) Heinrich, Julian; Bachthaler, S.; Weiskopf, Daniel; H. Hauser, H. Pfister, and J. J. van Wijk
Continuous scatterplots and parallel coordinates are used to visualize multivariate data defined on a continuous domain. With the existing techniques, rendering such plots becomes prohibitively slow, especially for large scientific datasets. This paper presents a scalable and progressive rendering algorithm for continuous data plots that allows exploratory analysis of large datasets at interactive frame rates. The algorithm employs splatting to produce a series of plots that are combined using alpha blending to achieve a progressively improving image. For each individual frame, splats are obtained by transforming Gaussian density kernels from the 3-D domain of the input dataset to the respective data domain. A closed-form analytic description of the resulting splat footprints is derived to allow pre-computation of splat textures for efficient GPU rendering. The plotting method is versatile because it supports arbitrary reconstruction or interpolation schemes for the input data, and the splatting technique is scalable because it chooses splat samples independently of the size of the input dataset. Finally, the effectiveness of the method is compared to existing techniques regarding rendering performance and quality.
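A minimal sketch of the progressive idea described in the abstract above: each frame splats a fresh random batch of Gaussian footprints into a 2D plot and blends it into a running image, so the plot sharpens over time. The kernel bandwidth, batch size, running-average blend, and plain-NumPy rasterization are assumptions standing in for the paper's analytic footprints and GPU splat textures.

```python
import numpy as np

def splat_batch(points: np.ndarray, res: int = 256, sigma: float = 2.0) -> np.ndarray:
    """Accumulate Gaussian splats for one batch of 2-D points in [0, 1]^2."""
    img = np.zeros((res, res))
    half = int(3 * sigma)
    ax = np.arange(-half, half + 1)
    gx, gy = np.meshgrid(ax, ax)
    kernel = np.exp(-(gx ** 2 + gy ** 2) / (2 * sigma ** 2))
    for x, y in points:
        cx, cy = int(x * (res - 1)), int(y * (res - 1))
        x0, x1 = max(cx - half, 0), min(cx + half + 1, res)
        y0, y1 = max(cy - half, 0), min(cy + half + 1, res)
        img[y0:y1, x0:x1] += kernel[(y0 - cy + half):(y1 - cy + half),
                                    (x0 - cx + half):(x1 - cx + half)]
    return img

def progressive_plot(data: np.ndarray, n_frames: int = 50, batch: int = 1000,
                     rng=None) -> np.ndarray:
    """Blend per-frame splat images into a running average (progressive refinement)."""
    rng = rng or np.random.default_rng()
    avg = np.zeros((256, 256))
    for frame in range(1, n_frames + 1):
        idx = rng.integers(0, len(data), size=batch)   # random splat samples this frame
        frame_img = splat_batch(data[idx])
        avg += (frame_img - avg) / frame               # running average stands in for alpha blending
    return avg
```

Because the number of splats per frame is fixed, the cost of a single frame is independent of the dataset size, which is the scalability argument the abstract makes.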
Item Wavelet Rasterization (The Eurographics Association and Blackwell Publishing Ltd., 2011) Manson, Josiah; Schaefer, Scott; M. Chen and O. Deussen
We present a method for analytically calculating an anti-aliased rasterization of arbitrary polygons or fonts bounded by Bezier curves in 2D, as well as oriented triangle meshes in 3D. Our algorithm rasterizes multiple resolutions simultaneously using a hierarchical wavelet representation and is robust to degenerate inputs. We show that using the simplest wavelet, the Haar basis, is equivalent to applying a box filter to the rasterized image. Because we evaluate wavelet coefficients through line integrals in 2D, we are able to derive analytic solutions for polygons that have Bezier curve boundaries of any order, and we provide solutions for quadratic and cubic curves. In 3D, we compute the wavelet coefficients through analytic surface integrals over triangle meshes and show how to do so in a computationally efficient manner.

Item Intelligent GPGPU Classification in Volume Visualization: A framework based on Error-Correcting Output Codes (The Eurographics Association and Blackwell Publishing Ltd., 2011) Escalera, Sergio; Puig, Anna; Amoros, Oscar; Salamó, Maria; Bing-Yu Chen, Jan Kautz, Tong-Yee Lee, and Ming C. Lin
In volume visualization, the definition of regions of interest is inherently an iterative trial-and-error process of finding the best parameters to classify and render the final image. Generally, the user requires a lot of expertise to analyze and edit these parameters through multi-dimensional transfer functions. In this paper, we present a framework of intelligent methods to label multiple regions of interest on demand. These methods can be split into a two-level GPU-based labelling algorithm that computes, at rendering time, a set of labelled structures using the Machine Learning Error-Correcting Output Codes (ECOC) framework. In a pre-processing step, ECOC trains a set of AdaBoost binary classifiers from a reduced pre-labelled data set. Then, at the testing stage, each classifier is independently applied to the features of a set of unlabelled samples and combined to perform multi-class labelling. We also propose an alternative representation of these classifiers that allows the testing stage to be highly parallelized. To exploit that parallelism, we implemented the testing stage in GPU-OpenCL. The empirical results on different data sets for several volume structures show high computational performance and classification accuracy.
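A small, generic sketch of the ECOC scheme the abstract relies on: binary AdaBoost classifiers are trained on dichotomies defined by a coding matrix and combined at test time by Hamming decoding. The one-vs-all coding matrix, scikit-learn AdaBoost learners, and CPU execution are assumptions introduced here; the paper's contribution is a classifier representation that runs this testing stage in OpenCL on the GPU.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def train_ecoc(X: np.ndarray, y: np.ndarray, n_classes: int):
    """Train one AdaBoost binary classifier per column of a one-vs-all code matrix.

    y must contain integer class labels in 0..n_classes-1.
    """
    # Coding matrix M[c, j] in {-1, +1}: one column (dichotomy) per class.
    M = -np.ones((n_classes, n_classes), dtype=int)
    np.fill_diagonal(M, 1)
    classifiers = []
    for j in range(n_classes):
        target = M[y, j]                                    # +1 / -1 labels for column j
        clf = AdaBoostClassifier(n_estimators=50).fit(X, target)
        classifiers.append(clf)
    return M, classifiers

def predict_ecoc(X: np.ndarray, M: np.ndarray, classifiers) -> np.ndarray:
    """Hamming decoding: pick the class whose code word best matches the binary outputs."""
    outputs = np.column_stack([clf.predict(X) for clf in classifiers])   # (n_samples, n_columns)
    dist = (outputs[:, None, :] != M[None, :, :]).sum(axis=2)            # distance to each code word
    return dist.argmin(axis=1)
```

Each column's classifier is independent of the others, which is exactly the property that makes the testing stage easy to parallelize on the GPU.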
Item Interactive Indirect Illumination Using Voxel Cone Tracing (The Eurographics Association and Blackwell Publishing Ltd., 2011) Crassin, Cyril; Neyret, Fabrice; Sainz, Miguel; Green, Simon; Eisemann, Elmar; Bing-Yu Chen, Jan Kautz, Tong-Yee Lee, and Ming C. Lin
Indirect illumination is an important element for realistic image synthesis, but its computation is expensive and highly dependent on the complexity of the scene and of the BRDF of the involved surfaces. While off-line computation and pre-baking can be acceptable for some cases, many applications (games, simulators, etc.) require real-time or interactive approaches to evaluate indirect illumination. We present a novel algorithm to compute indirect lighting in real time that avoids costly precomputation steps and is not restricted to low-frequency illumination. It is based on a hierarchical voxel octree representation generated and updated on the fly from a regular scene mesh, coupled with an approximate voxel cone tracing that allows for a fast estimation of the visibility and incoming energy. Our approach can manage two light bounces for both Lambertian and glossy materials at interactive frame rates (25-70 FPS). It exhibits an almost scene-independent performance and can handle complex scenes with dynamic content thanks to an interactive octree-voxelization scheme. In addition, we demonstrate that our voxel cone tracing can be used to efficiently estimate ambient occlusion.

Item Combinatorial Bidirectional Path-Tracing for Efficient Hybrid CPU/GPU Rendering (The Eurographics Association and Blackwell Publishing Ltd., 2011) Pajot, Anthony; Barthe, Loïc; Paulin, Mathias; Poulin, Pierre; M. Chen and O. Deussen
This paper presents a reformulation of bidirectional path-tracing that adequately divides the algorithm into processes efficiently executed in parallel on both the CPU and the GPU. We thus benefit from high-level optimization techniques such as double buffering, batch processing, and asynchronous execution, as well as from the exploitation of most of the CPU, GPU, and memory bus capabilities. Our approach, while avoiding pure GPU implementation limitations (such as limited complexity of shaders, light or camera models, and processed scene data sets), is more than ten times faster than standard bidirectional path-tracing implementations, leading to performance suitable for production-oriented rendering engines.
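To illustrate the kind of pipeline organization the last abstract refers to (one side prepares the next batch of work while the other side consumes the previous one), here is a tiny double-buffered producer/consumer sketch using Python threads. The "CPU" and "GPU" stages are plain placeholder functions; nothing here is the paper's renderer or its data layout.

```python
import queue
import threading

def cpu_stage(batch):
    """Placeholder for CPU-side work, e.g. sampling light/eye sub-paths."""
    return [x * x for x in batch]

def gpu_stage(batch):
    """Placeholder for GPU-side work, e.g. evaluating the many path connections."""
    return sum(batch)

def run_pipeline(batches):
    """Double-buffered handoff: the producer prepares batch i+1 while the
    consumer processes batch i (queue depth 2 == two buffers in flight)."""
    handoff = queue.Queue(maxsize=2)
    results = []

    def producer():
        for batch in batches:
            handoff.put(cpu_stage(batch))   # blocks only if both buffers are already full
        handoff.put(None)                   # sentinel: no more work

    t = threading.Thread(target=producer)
    t.start()
    while (item := handoff.get()) is not None:
        results.append(gpu_stage(item))     # consume the asynchronously produced batch
    t.join()
    return results

print(run_pipeline([[1, 2, 3], [4, 5, 6]]))  # -> [14, 77]
```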