Volume 38 (2019)
Browsing Volume 38 (2019) by Title
Showing items 1-20 of 267
Item: Accurate Synthesis of Multi-Class Disk Distributions (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Ecormier-Nocca, Pierre; Memari, Pooran; Gain, James; Cani, Marie-Paule; Alliez, Pierre and Pellacini, Fabio
While the analysis and synthesis of 2D point distributions has been applied both to generate textures with discrete elements and to populate virtual worlds with 3D objects, the results are often inaccurate since the spatial extent of objects cannot be expressed. We introduce three improvements enabling the synthesis of more general distributions of elements. First, we extend continuous pair correlation function (PCF) algorithms to multi-class distributions using a dependency graph, thereby capturing interrelationships between distinct categories of objects. Second, we introduce a new normalised metric for disks, which makes the method applicable to both point and possibly overlapping disk distributions. The metric is specifically designed to distinguish perceptually salient features, such as disjoint, tangent, overlapping, or nested disks. Finally, we pay particular attention to convergence of the mean PCF as well as the validity of individual PCFs, by taking into consideration the variance of the input. Our results demonstrate that this framework can capture and reproduce real-life distributions of elements representing a variety of complex semi-structured patterns, from the interaction between trees and the understorey in a forest to droplets of water. More generally, it applies to any category of 2D object whose shape is better represented by bounding circles than by points.

Item: Active Scene Understanding via Online Semantic Reconstruction (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Zheng, Lintao; Zhu, Chenyang; Zhang, Jiazhao; Zhao, Hang; Huang, Hui; Niessner, Matthias; Xu, Kai; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
We propose a novel approach to robot-operated active understanding of unknown indoor scenes, based on online RGBD reconstruction with semantic segmentation. In our method, the exploratory robot scanning is both driven by and targeted at the recognition and segmentation of semantic objects in the scene. Our algorithm is built on top of a volumetric depth-fusion framework and performs real-time voxel-based semantic labeling over the online reconstructed volume. The robot is guided by an online estimated discrete viewing score field (VSF) parameterized over the 3D space of 2D location and azimuth rotation. The VSF stores, for each grid cell, the score of the corresponding view, which measures how much it reduces the uncertainty (entropy) of both geometric reconstruction and semantic labeling. Based on the VSF, we select the next best view (NBV) as the target for each time step. We then jointly optimize the traverse path and camera trajectory between two adjacent NBVs by maximizing the integral viewing score (information gain) along the path and trajectory. Through extensive evaluation, we show that our method achieves efficient and accurate online scene parsing during exploratory scanning.

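To make the viewing-score idea above concrete, here is a minimal, hypothetical sketch of picking a next best view from a discretized score field over 2D position and azimuth. The grid resolution, the name vsf, and the random scores are illustrative assumptions, not the authors' implementation.

import numpy as np

def next_best_view(vsf, xs, ys, azimuths):
    # vsf[i, j, k]: estimated reduction of reconstruction + labeling entropy
    # for a view at position (xs[i], ys[j]) with azimuth azimuths[k]
    i, j, k = np.unravel_index(np.argmax(vsf), vsf.shape)
    return xs[i], ys[j], azimuths[k], vsf[i, j, k]

# toy example: random scores on a coarse 21 x 21 x 36 grid
rng = np.random.default_rng(0)
xs, ys = np.linspace(0.0, 10.0, 21), np.linspace(0.0, 10.0, 21)
azimuths = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)
scores = rng.random((21, 21, 36))
print(next_best_view(scores, xs, ys, azimuths))
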
Item: Adaptive BRDF-Oriented Multiple Importance Sampling of Many Lights (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Liu, Yifan; Xu, Kun; Yan, Ling-Qi; Boubekeur, Tamy and Sen, Pradeep
Many-light rendering is becoming more common and important as rendered scenes grow in complexity. However, to calculate the illumination under many lights, state-of-the-art algorithms are still far from efficient, because light sampling and BRDF sampling are considered separately. To deal with this inefficiency, we present a novel light sampling method, BRDF-oriented light sampling, which selects lights based on importance values estimated from the BRDF's contributions. Our BRDF-oriented light sampling method works naturally with multiple importance sampling (MIS), and allows us to dynamically determine the number of samples allocated to the different sampling techniques. With our method, we achieve significantly faster convergence to the ground-truth result, both perceptually and numerically, compared to previous many-light rendering algorithms.

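As a point of reference for the sampling machinery mentioned above, the following is a hedged sketch of the standard MIS balance heuristic together with a naive proportional allocation of a light-sample budget from per-light contribution estimates. The allocation rule and all names are illustrative stand-ins, not the paper's adaptive scheme.

import numpy as np

def balance_heuristic(pdf_a, pdf_b):
    # MIS weight for a sample drawn with technique A that technique B
    # could also have generated
    return pdf_a / (pdf_a + pdf_b)

def allocate_light_samples(estimated_contribution, total_samples):
    # distribute a fixed sample budget over lights, proportionally to a
    # per-light estimate of its BRDF-weighted contribution at the shading point
    c = np.asarray(estimated_contribution, dtype=float)
    p = c / c.sum() if c.sum() > 0.0 else np.full(c.size, 1.0 / c.size)
    counts = np.floor(p * total_samples).astype(int)
    counts[np.argmax(p)] += total_samples - counts.sum()  # hand out the remainder
    return counts

print(allocate_light_samples([0.1, 2.0, 0.4, 0.01], 64))   # e.g. [ 2 52 10  0]
print(balance_heuristic(0.8, 0.2))                          # 0.8
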
Item: An Adaptive Multi-Grid Solver for Applications in Computer Graphics (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Kazhdan, Misha; Hoppe, Hugues; Chen, Min and Benes, Bedrich
A key processing step in numerous computer graphics applications is the solution of a linear system discretized over a spatial domain. Often, the linear system can be represented using an adaptive domain tessellation, either because the solution will only be sampled sparsely, or because the solution is known to be 'interesting' (e.g. high frequency) only in localized regions. In this work, we propose an adaptive finite-element multi-grid solver capable of efficiently solving such linear systems. Our solver is designed to be general-purpose, supporting finite elements of different degrees across different dimensions, and supporting both integrated and pointwise constraints. We demonstrate the efficacy of our solver in applications including surface reconstruction, image stitching and Euclidean Distance Transform calculation.

Item: Adaptive Temporal Sampling for Volumetric Path Tracing of Medical Data (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Martschinke, Jana; Hartnagel, Stefan; Keinert, Benjamin; Engel, Klaus; Stamminger, Marc; Boubekeur, Tamy and Sen, Pradeep
Monte-Carlo path tracing techniques can generate stunning visualizations of medical volumetric data. In a clinical context, such renderings have turned out to be valuable for communication, education, and diagnosis. Because a large number of computationally expensive lighting samples is required to converge to a smooth result, progressive rendering is the only option for interactive settings: low-sampled, noisy images are shown while the user explores the data, and as soon as the camera is at rest the view is progressively refined. During interaction, the visual quality is low, which strongly impedes the user's experience. Even worse, when a data set is explored in virtual reality, the camera is never at rest, leading to constantly low image quality and strong flickering. In this work, we present an approach to bring volumetric Monte-Carlo path tracing to the interactive domain by reusing samples over time. To this end, we transfer the idea of temporal antialiasing from surface rendering to volume rendering. We show how to reproject volumetric ray samples even though they cannot be pinned to a particular 3D position, present an improved weighting scheme that makes longer history trails possible, and define an error accumulation method that downweights less appropriate older samples. Furthermore, we exploit reprojection information to adaptively determine the number of newly generated path-tracing samples for each individual pixel. Our approach is designed for static medical data with both volumetric and surface-like structures. It achieves good-quality volumetric Monte-Carlo renderings with little noise, and is also usable in a VR context.

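To illustrate the kind of temporal reuse described above, here is a minimal sketch of per-pixel history accumulation with a history counter and a simple rule that spends extra samples where the history is short. The weighting (a plain running average capped at a maximum history length) and the sample-count rule are generic assumptions, not the paper's improved weighting or error-accumulation scheme.

import numpy as np

def temporal_accumulate(history, history_len, new_sample, max_history=32):
    # blend the reprojected history value with a newly traced sample using a
    # running average whose effective length is capped at max_history
    n = np.minimum(history_len + 1, max_history)
    alpha = 1.0 / n                        # weight of the new sample
    return (1.0 - alpha) * history + alpha * new_sample, n

def samples_per_pixel(history_len, base=1, boost=4):
    # trace more paths where the history is short (e.g. after disocclusion)
    return np.where(history_len < 4, base + boost, base)

radiance = np.zeros((4, 4)); length = np.zeros((4, 4))
new_frame = np.ones((4, 4))                # stand-in for freshly traced radiance
radiance, length = temporal_accumulate(radiance, length, new_frame)
print(samples_per_pixel(length))           # young history -> 5 samples per pixel
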
Item: Analysis of Decadal Climate Predictions with User-guided Hierarchical Ensemble Clustering (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Kappe, Christopher; Böttinger, Michael; Leitte, Heike; Gleicher, Michael and Viola, Ivan and Leitte, Heike
To obtain probabilistic results, ensemble simulation techniques are increasingly applied in the weather and climate sciences (as well as in various other scientific disciplines). In many cases, however, only mean results or other abstracted quantities such as percentiles are used for further analyses and dissemination of the data. In this work, we aim at a more detailed visualization of the temporal development of the whole ensemble that takes the variability of all single members into account. We propose a visual analytics tool that allows an effective analysis process based on a hierarchical clustering of the time-dependent scalar fields. The system includes a flow chart that shows the ensemble members' cluster affiliation over time, reflecting the whole cluster hierarchy. The latter can be dynamically explored using a visualization derived from a dendrogram. As an aid in linking the different views, we have developed an adaptive coloring scheme that takes into account cluster similarity and the containment relationships. Finally, standard visualizations of the involved field data (cluster means, ground-truth data, etc.) are also incorporated. We include results of our work on real-world datasets to showcase the utility of our approach.

Item: Analysis of Long Molecular Dynamics Simulations Using Interactive Focus+Context Visualization (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Byška, Jan; Trautner, Thomas; Marques, Sérgio M.; Damborský, Jiří; Kozlíková, Barbora; Waldner, Manuela; Gleicher, Michael and Viola, Ivan and Leitte, Heike
Analyzing molecular dynamics (MD) simulations is key to understanding protein dynamics and function. With increasing computational power, it is now possible to generate very long and complex simulations, which are cumbersome to explore using traditional 3D animations of protein movements. Guided by requirements derived from multiple focus groups with protein engineering experts, we designed and developed a novel interactive visual analysis approach for long and crowded MD simulations. In this approach, we link a dynamic 3D focus+context visualization with a 2D chart of time series data to guide the detection of and navigation towards important spatio-temporal events. The 3D visualization renders elements of interest in more detail and increases the temporal resolution depending on the time series data or the spatial region of interest. In case studies with different MD simulation data sets and research questions, we found that the proposed visual analysis approach facilitates exploratory analysis to generate, confirm, or reject hypotheses about causalities. Finally, we derived design guidelines for interactive visual analysis of complex MD simulation data.

Item: An Analysis of Region Clustered BVH Volume Rendering on GPU (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Ganter, David; Manzke, Michael; Steinberger, Markus and Foley, Tim
We present a Direct Volume Rendering (DVR) method that makes use of newly available Nvidia graphics hardware for Bounding Volume Hierarchies (BVHs). Using BVHs for DVR has been overlooked in recent research due to build times potentially impeding interactive rates. We indicate that this is not necessarily the case, especially when a clustering algorithm is applied before the BVH build to reduce leaf-node complexity. Our results show substantial render-time improvements for full-resolution DVR on GPU in comparison to a recent state-of-the-art approach for empty-space skipping. Furthermore, the use of a BVH for DVR allows seamless integration into popular surface-based path-tracing technologies such as Nvidia's OptiX.

Item: Analytic Spectral Integration of Birefringence-Induced Iridescence (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Steinberg, Shlomi; Boubekeur, Tamy and Sen, Pradeep
Optical phenomena that are only observable in optically anisotropic materials are generally ignored in computer graphics. However, such optical effects are not restricted to exotic materials and can also be observed with common translucent objects when optical anisotropy is induced, e.g. via mechanical stress. Furthermore, accurate prediction and reproduction of these optical effects has important practical applications. We provide a short but complete analysis of the relevant electromagnetic theory of light propagation in optically anisotropic media and derive the full set of formulations required to render birefringent materials. We then present a novel method for spectral integration of refraction and reflection in an anisotropic slab. Our approach allows fast and robust rendering of birefringence-induced iridescence in a physically faithful manner and is applicable to both real-time and offline rendering.

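For context on the birefringence item above: in a uniaxial medium the ordinary wave sees a fixed index n_o, while the extraordinary wave sees a direction-dependent effective index. The snippet below evaluates the standard textbook relation for that effective index with typical calcite values; it is quoted as general background and is not taken from the paper's derivations.

import numpy as np

def extraordinary_index(theta, n_o, n_e):
    # 1 / n(theta)^2 = cos^2(theta) / n_o^2 + sin^2(theta) / n_e^2,
    # where theta is the propagation angle relative to the optic axis
    return 1.0 / np.sqrt(np.cos(theta) ** 2 / n_o ** 2 +
                         np.sin(theta) ** 2 / n_e ** 2)

# typical textbook values for calcite, a strongly birefringent uniaxial crystal
n_o, n_e = 1.658, 1.486
for theta in (0.0, np.pi / 4.0, np.pi / 2.0):
    print(round(float(extraordinary_index(theta, n_o, n_e)), 4))
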
Item: Anisotropic Surface Remeshing without Obtuse Angles (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Xu, Qun-Ce; Yan, Dong-Ming; Li, Wenbin; Yang, Yong-Liang; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
We present a novel anisotropic surface remeshing method that can efficiently eliminate obtuse angles. Unlike previous work that can only suppress obtuse angles with expensive resampling and Lloyd-type iterations, our method relies on a simple yet efficient connectivity and geometry refinement, which not only removes all obtuse angles but also preserves the original mesh connectivity as much as possible. Our method can be directly used as a post-processing step for anisotropic meshes generated by existing algorithms to improve mesh quality. We evaluate our method by testing on a variety of meshes with different geometry and topology, and by comparing with representative prior work. The results demonstrate the effectiveness and efficiency of our approach.

Item: Appearance Flow Completion for Novel View Synthesis (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Le, Hoang; Liu, Feng; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
Novel view synthesis from sparse and unstructured input views faces challenges like the difficulty of dense 3D reconstruction and large occlusions. This paper addresses these problems by estimating proper appearance flows from the target view to the input views, which are used to warp and blend the input views. Our method first estimates a sparse set of 3D scene points using an off-the-shelf 3D reconstruction method and calculates sparse flows from the target to the input views. Our method then performs appearance flow completion to estimate the dense flows from the corresponding sparse ones. Specifically, we design a deep fully convolutional neural network that takes sparse flows and input views as input and outputs the dense flows. Furthermore, we estimate the optical flows between input views as references to guide the estimation of the dense flows between the target view and the input views. Besides the dense flows, our network also estimates the masks to blend the multiple warped inputs to render the target view. Experiments on the KITTI benchmark show that our method can generate high-quality novel views from sparse and unstructured input views.

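The warping step mentioned above can be illustrated with a small, self-contained sketch: backward-warping one input view into the target view given a dense flow field that stores, for every target pixel, the coordinates at which to sample the input. This is generic bilinear sampling in plain numpy; the flow itself would come from a completion network, which is not modeled here.

import numpy as np

def warp_with_flow(src, flow):
    # src: (H, W, C) input view; flow: (H, W, 2) sample coordinates (x, y)
    # in the source image for every target pixel; bilinear interpolation
    h, w = flow.shape[:2]
    x = np.clip(flow[..., 0], 0.0, w - 1.001)
    y = np.clip(flow[..., 1], 0.0, h - 1.001)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    fx, fy = (x - x0)[..., None], (y - y0)[..., None]
    top = (1.0 - fx) * src[y0, x0] + fx * src[y0, x0 + 1]
    bottom = (1.0 - fx) * src[y0 + 1, x0] + fx * src[y0 + 1, x0 + 1]
    return (1.0 - fy) * top + fy * bottom

view = np.random.rand(120, 160, 3)
identity_flow = np.dstack(np.meshgrid(np.arange(160.0), np.arange(120.0)))
print(warp_with_flow(view, identity_flow).shape)   # (120, 160, 3)
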
Item: Appearance Modelling of Living Human Tissues (© 2019 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd, 2019)
Nunes, Augusto L.P.; Maciel, Anderson; Meyer, Gary W.; John, Nigel W.; Baranoski, Gladimir V.G.; Walter, Marcelo; Chen, Min and Benes, Bedrich
The visual fidelity of realistic renderings in Computer Graphics depends fundamentally upon how we model the appearance of objects resulting from the interaction between light and matter reaching the eye. In this paper, we survey the research addressing appearance modelling of living human tissue. Among the many classes of natural materials already researched in Computer Graphics, living human tissues such as blood and skin have recently seen an increase in attention from graphics research. There is already an incipient but substantial body of literature on this topic, which has so far lacked a structured review such as the one presented here. We introduce a classification for the approaches using the four types of human tissues as classifiers. We show a growing trend of solutions that use first principles from Physics and Biology as fundamental knowledge upon which the models are built. The organic quality of visual results provided by these approaches is mainly determined by the optical properties of biophysical components interacting with light. Beyond just picture making, these models can be used in predictive simulations, with the potential for impact in many other areas.

Item: Applying Visual Analytics to Physically Based Rendering (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Simons, G.; Herholz, S.; Petitjean, V.; Rapp, T.; Ament, M.; Lensch, H.; Dachsbacher, C.; Eisemann, M.; Eisemann, E.; Chen, Min and Benes, Bedrich
Physically based rendering is a well-understood technique to produce realistic-looking images. However, different algorithms exist for efficiency reasons, which work well in certain cases but fail or produce rendering artefacts in others. Few tools allow a user to gain insight into the algorithmic processes. In this work, we present such a tool, which combines techniques from information visualization and visual analytics with physically based rendering. It consists of an interactive parallel coordinates plot, with a built-in sampling-based data reduction technique to visualize the attributes associated with each light sample. Two-dimensional (2D) and three-dimensional (3D) heat maps depict any desired property of the rendering process. An interactively rendered 3D view of the scene displays animated light paths based on the user's selection to give further insight into the rendering process. The provided interactivity enables the user to guide the rendering process for greater efficiency. To show its usefulness, we present several applications based on our tool. These include differential light-transport visualization to optimize the light setup in a scene, finding the causes of and resolving rendering artefacts such as fireflies, and a path-length contribution histogram to evaluate the efficiency of different Monte Carlo estimators.

Item: Augmenting Tactile 3D Data Navigation With Pressure Sensing (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Wang, Xiyao; Besançon, Lonni; Ammi, Mehdi; Isenberg, Tobias; Gleicher, Michael and Viola, Ivan and Leitte, Heike
We present a pressure-augmented tactile 3D data navigation technique, specifically designed for small devices, motivated by the need to support interactive visualization beyond traditional workstations. While touch input has been studied extensively on large screens, current techniques do not scale to small and portable devices. We use phone-based pressure sensing with a binary mapping to separate interaction degrees of freedom (DOF) and thus allow users to easily select different manipulation schemes (e.g., users first perform only rotation and then switch to translation with a simple pressure input). We compare our technique to traditional 3D-RST (rotation, scaling, translation) using a docking task in a controlled experiment. The results show that our technique increases the accuracy of interaction, with limited impact on speed. We discuss the implications for 3D interaction design and verify that our results extend to older devices with pseudo pressure and are valid in realistic phone usage scenarios.

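A toy sketch of the binary pressure mapping described above: a normalized pressure reading selects which subset of degrees of freedom a drag controls. The threshold value and the two mode names are illustrative assumptions, not parameters from the study.

PRESSURE_THRESHOLD = 0.5     # assumed normalized [0, 1] reading from the phone

def manipulation_mode(pressure):
    # light touch drives rotation; a firm press switches to translation
    return "translate" if pressure >= PRESSURE_THRESHOLD else "rotate"

for reading in (0.1, 0.4, 0.8):
    print(reading, "->", manipulation_mode(reading))
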
Item: Automatic Generation of Vivid LEGO Architectural Sculptures (© 2019 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd, 2019)
Zhou, J.; Chen, X.; Xu, Y.; Chen, Min and Benes, Bedrich
Brick elements are very popular and have been widely used in many areas, such as toy design and architectural fields. Designing a vivid brick sculpture to represent a three-dimensional (3D) model is a very challenging task, which requires professional skills and experience to convey unique visual characteristics. We introduce an automatic system to convert an architectural model into a LEGO sculpture while preserving the original model's shape features. Unlike previous legolization techniques that generate a LEGO sculpture exactly based on the input model's voxel representation, we extract the model's visual features, including repeating components, shape details and planarity. Then, we translate these visual features into the final LEGO sculpture by employing various brick types. We propose a deformation algorithm to resolve discrepancies between an input mesh's continuous 3D shape and the discrete positions of bricks in a LEGO sculpture. We evaluate our system on various architectural models and compare our method with previous voxelization-based methods. The results demonstrate that our approach successfully conveys important visual features from digital models and generates vivid LEGO sculptures. Real LEGO sculptures can then be built according to the automatically generated results.

Item: Automatic Modeling of Cluttered Multi-room Floor Plans From Panoramic Images (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Pintore, Giovanni; Ganovelli, Fabio; Villanueva, Alberto Jaspe; Gobbetti, Enrico; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
We present a novel and light-weight approach to capture and reconstruct structured 3D models of multi-room floor plans. Starting from a small set of registered panoramic images, we automatically generate a 3D layout of the rooms and of all the main objects inside. Such a 3D layout is directly suitable for use in a number of real-world applications, such as guidance, location, routing, or content creation for security and energy management. Our novel pipeline introduces several contributions to indoor reconstruction from purely visual data. In particular, we automatically partition the panoramic images into a connectivity graph, according to the visual layout of the rooms, and exploit this graph to support object recovery and room boundary extraction. Moreover, we introduce a plane-sweeping approach to jointly reason about the content of multiple images and solve the problem of object inference in a top-down 2D domain. Finally, we combine these methods in a fully automated pipeline for creating a structured 3D model of a multi-room floor plan and of the location and extent of clutter objects. These contributions make our pipeline able to handle cluttered scenes with complex geometry that are challenging for existing techniques. The effectiveness and performance of our approach are evaluated on both real-world and synthetic models.

Item: Autonomous Particles for Interactive Flow Visualization (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Engelke, Wito; Lawonn, Kai; Preim, Bernhard; Hotz, Ingrid; Chen, Min and Benes, Bedrich
We present an interactive approach to analyse flow fields using a new type of particle system, which is composed of autonomous particles exploring the flow. While particles provide a very intuitive way to visualize flows, it is a challenge to capture the important features with such systems. Particles tend to cluster in regions of low velocity, and regions of interest are often sparsely populated. To overcome these disadvantages, we propose an automatic adaptation of the particle density with respect to local importance measures. These measures are user-defined, and the system's sensitivity to them can be adjusted interactively. Together with the particle history, these measures define a probability for particles to multiply or die, respectively. There is no communication between the particles and no neighbourhood information has to be maintained. Thus, the particles can be handled in parallel and support a real-time investigation of flow fields. To enhance the visualization, the particles' properties and selected field measures are also used to specify the system's rendering parameters, such as colour and size. We demonstrate the effectiveness of our approach on different simulated vector fields from technical and medical applications.

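The birth/death mechanism described in the autonomous-particles entry above can be sketched in a few lines. The toy vector field, the choice of velocity magnitude as the importance measure, the linear probability mapping and the Euler advection are all placeholder assumptions for illustration; only the overall pattern (independent particles that multiply or die according to a local importance value) follows the description.

import numpy as np

rng = np.random.default_rng(1)

def velocity(p):
    # toy 2D vector field: a simple rotation around the origin
    return np.stack([-p[:, 1], p[:, 0]], axis=1)

def importance(p):
    # placeholder importance measure: local velocity magnitude
    return np.linalg.norm(velocity(p), axis=1)

def step(particles, dt=0.05, target=0.75):
    particles = particles + dt * velocity(particles)    # Euler advection
    imp = importance(particles)
    p_birth = np.clip(imp - target, 0.0, 1.0)           # multiply where important
    p_death = np.clip(target - imp, 0.0, 1.0)           # die where unimportant
    u = rng.random(len(particles))
    survivors = particles[u >= p_death]
    offspring = particles[u < p_birth]
    offspring = offspring + rng.normal(0.0, 0.01, offspring.shape)
    return np.concatenate([survivors, offspring], axis=0)

particles = rng.uniform(-1.0, 1.0, (200, 2))
for _ in range(10):
    particles = step(particles)
print(len(particles), "particles after 10 steps")
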
Item: Ballet (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Lawonn, Kai; Günther, Tobias; Chen, Min and Benes, Bedrich

Item: Bird's-Eye - Large-Scale Visual Analytics of City Dynamics using Social Location Data (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Krueger, Robert; Han, Qi; Ivanov, Nikolay; Mahtal, Sanae; Thom, Dennis; Pfister, Hanspeter; Ertl, Thomas; Gleicher, Michael and Viola, Ivan and Leitte, Heike
The analysis of behavioral city dynamics, such as temporal patterns of visited places and citizens' mobility routines, is an essential task for urban and transportation planning. Social media applications such as Foursquare and Twitter provide access to large-scale and up-to-date dynamic movement data that help not only to understand the social life and pulse of a city but also to maintain and improve urban infrastructure. However, the fast growth rate of this data poses challenges for conventional methods to provide up-to-date, flexible analysis. Therefore, planning authorities barely consider it. We present a system and design study that leverages social media data to assist urban and transportation planners in better monitoring and analyzing city dynamics, such as visited places and mobility patterns, in large metropolitan areas. We conducted a goal-and-task analysis with urban planning experts. To address these goals, we designed a system with a scalable data-monitoring back-end and an interactive visual analytics interface. The monitoring component uses intelligent pre-aggregation to allow dynamic queries in near real-time. The visual analytics interface leverages unsupervised learning to reveal clusters, routines, and unusual behavior in massive data, allowing users to understand patterns in time and space. We evaluated our approach in a qualitative user study with urban planning experts, which demonstrates that intuitive integration of advanced analytical tools with visual interfaces is pivotal in making behavioral city dynamics accessible to practitioners. Our interviews also revealed areas for future research.

Item: Bridging the Data Analysis Communication Gap Utilizing a Three-Component Summarized Line Graph (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Yau, Calvin; Karimzadeh, Morteza; Surakitbanharn, Chittayong; Elmqvist, Niklas; Ebert, David; Gleicher, Michael and Viola, Ivan and Leitte, Heike
Communication-minded visualizations are designed to provide their audience (managers, decision-makers, and the public) with new knowledge. Authoring such visualizations effectively is challenging because the audience often lacks the expertise, context, and time that professional analysts have at their disposal to explore and understand datasets. We present a novel summarized line graph visualization technique designed specifically for data analysts to communicate data to decision-makers more effectively and efficiently. Our summarized line graph reduces a large and detailed dataset of multiple quantitative time series into (1) representative data that provide a quick takeaway of the full dataset; (2) analytical highlights that distinguish specific insights of interest; and (3) a data envelope that summarizes the remaining aggregated data. Our summarized line graph achieved the best overall results when evaluated against line graphs, band graphs, stream graphs, and horizon graphs on four representative tasks.
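To make the three components above concrete, here is a hedged sketch that derives them from an ensemble of time series. The specific choices (pointwise median as the representative, the single largest-deviation series as the highlight, a min/max band as the envelope) are illustrative stand-ins, not the definitions used in the paper.

import numpy as np

def summarize(series):
    # series: (n_series, n_timesteps) array of quantitative time series
    representative = np.median(series, axis=0)               # (1) quick takeaway
    deviation = np.abs(series - representative).mean(axis=1)
    highlight_index = int(np.argmax(deviation))               # (2) insight of interest
    envelope = (series.min(axis=0), series.max(axis=0))       # (3) aggregated rest
    return representative, highlight_index, envelope

rng = np.random.default_rng(2)
data = np.cumsum(rng.normal(size=(12, 50)), axis=1)           # 12 random walks
representative, highlight, (low, high) = summarize(data)
print("highlighted series:", highlight, "mean band width:", float((high - low).mean()))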