Volume 40 (2021)
Browsing Volume 40 (2021) by Issue Date
Now showing 1 - 20 of 231
Item Issue Information (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Benes, Bedrich and Hauser, Helwig

Item Deep Reflectance Scanning: Recovering Spatially‐varying Material Appearance from a Flash‐lit Video Sequence (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Ye, Wenjie; Dong, Yue; Peers, Pieter; Guo, Baining; Benes, Bedrich and Hauser, Helwig
In this paper we present a novel method for recovering high‐resolution spatially‐varying isotropic surface reflectance of a planar exemplar from a flash‐lit close‐up video sequence captured with a regular hand‐held mobile phone. We do not require careful calibration of the camera and lighting parameters, but instead compute a per‐pixel flow map using a deep neural network to align the input video frames. For each video frame, we also extract the reflectance parameters, warp the neural reflectance features directly using the per‐pixel flow, and subsequently pool the warped features. Our method facilitates convenient hand‐held acquisition of spatially‐varying surface reflectance with commodity hardware by non‐expert users. Furthermore, our method enables aggregation of reflectance features from surface points visible in only a subset of the captured video frames, enabling the creation of high‐resolution reflectance maps that exceed the native camera resolution. We demonstrate and validate our method on a variety of synthetic and real‐world spatially‐varying materials.

Item Semantics-Guided Latent Space Exploration for Shape Generation (The Eurographics Association and John Wiley & Sons Ltd., 2021) Jahan, Tansin; Guan, Yanran; Kaick, Oliver van; Mitra, Niloy and Viola, Ivan
We introduce an approach to incorporate user guidance into shape generation approaches based on deep networks.
Generative networks such as autoencoders and generative adversarial networks are trained to encode shapes into latent vectors, effectively learning a latent shape space that can be sampled for generating new shapes. Our main idea is to enable users to explore the shape space with the use of high-level semantic keywords. Specifically, the user inputs a set of keywords that describe the general attributes of the shape to be generated, e.g., "four legs" for a chair. Then, our method maps the keywords to a subspace of the latent space, where the subspace captures the shapes possessing the specified attributes. The user then explores only this subspace to search for shapes that satisfy the design goal, in a process similar to using a parametric shape model. Our exploratory approach allows users to model shapes at a high level without the need for advanced artistic skills, in contrast to existing methods that guide the generation with sketching or partial modeling of a shape. Our technical contribution to enable this exploration-based approach is the introduction of a label regression neural network coupled with shape encoder/decoder networks. The label regression network takes the user-provided keywords and maps them to distributions in the latent space. We show that our method allows users to explore the shape space and generate a variety of shapes with selected high-level attributes.

Item A Deeper Understanding of Visualization-Text Interplay in Geographic Data-driven Stories (The Eurographics Association and John Wiley & Sons Ltd., 2021) Latif, Shahid; Chen, Siming; Beck, Fabian; Borgo, Rita and Marai, G. Elisabeta and Landesberger, Tatiana von
Data-driven stories comprise visualizations and a textual narrative. The two representations coexist and complement each other.
Although existing research has explored the design strategies and structure of such stories, it remains an open research question how the two representations play together on a detailed level and how they are linked with each other. In this paper, we aim at understanding the fine-grained interplay of text and visualizations in geographic data-driven stories. We focus on geographic content as it often includes complex spatiotemporal data presented as versatile visualizations and rich textual descriptions. We conduct a qualitative empirical study on 22 stories collected from a variety of news media outlets; 10 of the stories report on the COVID-19 pandemic, while the others cover diverse topics. We investigate the role of every sentence and visualization within the narrative to reveal how they reference each other and interact. Moreover, we explore the positioning and sequence of various parts of the narrative to find patterns that further consolidate the stories. Drawing from the findings, we discuss study implications with respect to best practices and possibilities to automate report generation.

Item Adaptive Compositing and Navigation of Variable Resolution Images (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Licorish, C.; Faraj, N.; Summa, B.; Benes, Bedrich and Hauser, Helwig
We present a new, high‐quality compositing pipeline and navigation approach for variable resolution imagery. The motivation of this work is to explore the use of variable resolution images as a quick and accessible alternative to traditional gigapixel mosaics. Instead of the common tedious acquisition of many images using specialized hardware, variable resolution images can achieve zooms as deep as those of large mosaics, but with only a handful of images.
For this approach to be a viable alternative, the state‐of‐the‐art in variable resolution compositing needs to be improved to match the high‐quality approaches commonly used in mosaic compositing. To this end, we provide a novel variable resolution mosaic seam calculation and gradient domain color correction. This approach includes a new priority-order graph cuts computation along with a practical data structure to keep memory overhead low. In addition, navigating variable resolution images is challenging, especially at the zoom factors targeted in this work. To address this challenge, we introduce a new image interaction for variable resolution imagery: a pan that automatically, and smoothly, hugs the available resolution. Finally, we provide several real‐world examples of our approach producing high‐quality variable resolution mosaics with deep zooms typically associated with gigapixel photography.

Item Deep Learning-Based Unsupervised Human Facial Retargeting (The Eurographics Association and John Wiley & Sons Ltd., 2021) Kim, Seonghyeon; Jung, Sunjin; Seo, Kwanggyoon; Ribera, Roger Blanco i; Noh, Junyong; Zhang, Fang-Lue and Eisemann, Elmar and Singh, Karan
Traditional approaches to retargeting existing facial blendshape animations to other characters rely heavily on manually paired data, including corresponding anchors, expressions, or semantic parametrizations, to preserve the characteristics of the original performance. In this paper, inspired by recent developments in face swapping and reenactment, we propose a novel unsupervised learning method that reformulates the retargeting of 3D facial blendshape-based animations in the image domain. The expressions of a source model are transferred to a target model via the rendered images of the source animation. For this purpose, a reenactment network is trained with the rendered images of various expressions created by the source and target models in a shared latent space.
The use of a shared latent space enables automatic cross-mapping, obviating the need for manual pairing. Next, a blendshape prediction network is used to extract the blendshape weights from the translated image to complete the retargeting of the animation onto a 3D target model. Our method allows for fully unsupervised retargeting of facial expressions between models of different configurations and, once trained, is suitable for automatic real-time applications.

Item Sampling from Quadric-Based CSG Surfaces (The Eurographics Association and John Wiley & Sons Ltd., 2021) Trettner, Philip; Kobbelt, Leif; Binder, Nikolaus and Ritschel, Tobias
We present an efficient method to create samples directly on surfaces defined by constructive solid geometry (CSG) trees or graphs. The generated samples can be used for visualization or as an approximation to the actual surface with strong guarantees. We chose quadric surfaces as CSG primitives because they can model classical primitives such as planes, cubes, spheres, cylinders, and ellipsoids, but also certain saddle surfaces. More importantly, they are closed under affine transformations, a desirable property for a modeling system. We also propose a rendering method that performs local quadric ray-tracing and clipping to achieve pixel-perfect accuracy and hole-free rendering.

Item Patch Erosion for Deformable Lapped Textures on 3D Fluids (The Eurographics Association and John Wiley & Sons Ltd., 2021) Gagnon, Jonathan; Guzmán, Julián E.; Mould, David; Paquette, Eric; Mitra, Niloy and Viola, Ivan
We propose an approach to synthesise a texture on an animated fluid free surface using a distortion metric combined with a feature map. Our approach is applied as a post-process to a fluid simulation. We advect deformable patches to move the texture along the fluid flow. The patches cover the whole surface in every frame of the animation in an overlapping fashion.
Using lapped textures combined with deformable patches, we successfully remove the blending and rigidity artifacts seen in previous methods. We remain faithful to the texture exemplar by removing distorted patch texels using a patch erosion process. The patch erosion is based on a feature map provided together with the exemplar as input to our approach. The erosion favors removing texels toward the boundary of the patch as well as texels corresponding to more distorted regions of the patch. Where removed texels leave a gap on the surface, we add new patches below the existing ones. The result is an animated texture following the velocity field of the fluid. We compared our results with recent work, and they show that our approach removes ghosting and temporal fading artifacts.

Item High Performance Graphics 2021 CGF 40-8: Frontmatter (The Eurographics Association and John Wiley & Sons Ltd., 2021) Binder, Nikolaus; Ritschel, Tobias; Binder, Nikolaus and Ritschel, Tobias

Item Delaunay Meshing and Repairing of NURBS Models (The Eurographics Association and John Wiley & Sons Ltd., 2021) Xiao, Xiao; Alliez, Pierre; Busé, Laurent; Rineau, Laurent; Digne, Julie and Crane, Keenan
CAD models represented by NURBS surface patches are often hampered by defects due to inaccurate representations of trimming curves. Such defects make these models unsuitable for the direct generation of valid volume meshes and often require trial-and-error processes to fix. We propose a fully automated Delaunay-based meshing approach which can mesh and repair simultaneously, while being independent of the input NURBS patch layout. Our approach proceeds by Delaunay filtering and refinement, in which trimmed areas are repaired through implicit surfaces.
Beyond repair, we demonstrate its capability to smooth out sharp features, defeature small details, and mesh multiple domains in contact.

Item Visual Analysis of Spatio-temporal Phenomena with 1D Projections (The Eurographics Association and John Wiley & Sons Ltd., 2021) Franke, Max; Martin, Henry; Koch, Steffen; Kurzhals, Kuno; Borgo, Rita and Marai, G. Elisabeta and Landesberger, Tatiana von
To understand critical spatio-temporal events such as earthquakes, fires, or the spreading of a disease, it is crucial to visually extrapolate the characteristics of their evolution. Animations embedded in the spatial context can be helpful for understanding details, but have proven to be less effective for overview and comparison tasks. We present an interactive approach for the exploration of spatio-temporal data, based on a set of neighborhood-preserving 1D projections which help identify patterns and support the comparison of numerous time steps and multivariate data. An important objective of the proposed approach is the visual description of local neighborhoods in the 1D projection to reveal patterns of similarity and propagation. As this locality cannot generally be guaranteed, we provide a selection of different projection techniques, as well as a hierarchical approach, to support the analysis of different data characteristics. In addition, we offer an interactive exploration technique to reorganize and improve the mapping locally at users' foci of interest.
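To make the idea of a neighborhood-preserving 1D projection concrete: one of the simplest candidates is to order spatial samples along their first principal component. This is only an illustrative sketch (the abstract above offers a selection of projection techniques; `project_1d` is a hypothetical name, not from the paper):

```python
import numpy as np

def project_1d(points):
    """Order spatial samples along their first principal component.

    One of many possible 1D projections; neighborhoods are preserved
    well when the spatial layout is close to linear."""
    x = points - points.mean(axis=0)
    # eigenvectors of the covariance matrix, ascending eigenvalues
    _, vecs = np.linalg.eigh(np.cov(x.T))
    # coordinate along the dominant direction gives the 1D layout
    return np.argsort(x @ vecs[:, -1])
```

For samples that already lie along a line, this ordering reproduces their spatial order (up to reversal), which is exactly the locality such projections aim for.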
We demonstrate the usefulness of our approach with different real-world application scenarios and discuss the feedback we received from domain and visualization experts.

Item Anisotropic Spectral Manifold Wavelet Descriptor (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Li, Qinsong; Hu, Ling; Liu, Shengjun; Yang, Dangfu; Liu, Xinru; Benes, Bedrich and Hauser, Helwig
In this paper, we present a powerful spectral shape descriptor for shape analysis, named the Anisotropic Spectral Manifold Wavelet Descriptor (ASMWD). We first propose a novel manifold harmonic signal processing tool termed the Anisotropic Spectral Manifold Wavelet Transform (ASMWT). ASMWT comprehensively analyses signals from multiple wavelet diffusion directions on local manifold regions of the shape with a series of low‐pass and band‐pass frequency filters in each direction. Based on the ASMWT coefficients of a very simple signal, the ASMWD is efficiently constructed as a localizable and discriminative multi‐scale point descriptor. Since the wavelets used in our descriptor are direction‐sensitive and able to robustly reconstruct the signals with a finite number of scales, our descriptor is compact, efficient, and unambiguous under intrinsic symmetry. Extensive experiments demonstrate that our descriptor achieves significantly better performance than state‐of‐the‐art descriptors and can greatly improve the performance of shape matching methods, including both handcrafted and learning‐based methods.

Item An Efficient Hybrid Optimization Strategy for Surface Reconstruction (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Bertolino, Giulia; Montemurro, Marco; Perry, Nicolas; Pourroy, Franck; Benes, Bedrich and Hauser, Helwig
An efficient surface reconstruction strategy is presented in this study, which is able to approximate non‐convex sets of target points (TPs).
The approach is split into two phases: (a) the mapping phase, which uses the shape preserving method (SPM) to obtain a proper parametrization of each sub‐domain composing the TPs set; and (b) the fitting phase, where each patch is fitted by means of a suitable non‐uniform rational basis spline (NURBS) surface by considering, as design variables, all parameters involved in its definition. To this purpose, the surface fitting problem is formulated as a constrained non‐linear programming problem (CNLPP) defined over a domain of changing dimension, wherein both the number and the value of the design variables are optimized. To deal with this CNLPP, the optimization process is split into two steps. Firstly, a special genetic algorithm (GA) optimizes both the value and the number of design variables by means of a two‐level evolution strategy (species and individuals). Secondly, the solution provided by the GA constitutes the initial guess for a deterministic optimization, which aims at improving the accuracy of the fitting surfaces. The effectiveness of the proposed methodology is proven through meaningful benchmarks taken from the literature.

Item Rendering Point Clouds with Compute Shaders and Vertex Order Optimization (The Eurographics Association and John Wiley & Sons Ltd., 2021) Schütz, Markus; Kerbl, Bernhard; Wimmer, Michael; Bousseau, Adrien and McGuire, Morgan
In this paper, we present several compute-based point cloud rendering approaches that outperform the hardware pipeline by up to an order of magnitude and achieve significantly better frame times than previous compute-based methods. Beyond basic closest-point rendering, we also introduce a fast, high-quality variant to reduce aliasing. We present and evaluate several variants of our proposed methods with different flavors of optimization, in order to ensure their applicability and achieve optimal performance on a range of platforms and architectures with varying support for novel GPU hardware features.
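The basic closest-point rendering mentioned above can be mimicked on the CPU: pack each point's depth into the high bits of a 64-bit word and its color into the low bits, so that a per-pixel minimum keeps the nearest point. This is an illustrative NumPy analogue of the GPU atomic-min idea, not the authors' shader code; the function name and conventions are assumptions:

```python
import numpy as np

def closest_point_render(xy, depth, color, w, h):
    """Resolve one color per pixel, keeping the nearest point.

    xy: (n, 2) coordinates in [0, 1); depth: (n,) values in [0, 1);
    color: (n,) packed 32-bit colors."""
    px = (xy[:, 0] * w).astype(np.int64)
    py = (xy[:, 1] * h).astype(np.int64)
    # pack: high 32 bits depth, low 32 bits color -> minimum picks nearest
    d = (depth * 0xFFFFFFFF).astype(np.uint64)
    packed = (d << np.uint64(32)) | color.astype(np.uint64)
    fb = np.full(w * h, np.iinfo(np.uint64).max, dtype=np.uint64)
    # unbuffered per-pixel minimum, analogous to a 64-bit atomicMin
    np.minimum.at(fb, py * w + px, packed)
    return (fb & np.uint64(0xFFFFFFFF)).reshape(h, w)  # color channel
```

On the GPU the same resolve runs in a compute shader with one 64-bit atomicMin per point, which is what makes the approach bandwidth- rather than pipeline-bound.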
During our experiments, the observed peak performance was reached rendering 796 million points (12.7 GB) at rates of 62 to 64 frames per second (50 billion points per second, 802 GB/s) on an RTX 3090 without the use of level-of-detail structures. We further introduce an optimized vertex order for point clouds to boost the efficiency of GL_POINTS by a factor of 5x in cases where hardware rendering is compulsory. We compare different orderings and show that Morton-sorted buffers are faster for some viewpoints, while shuffled vertex buffers are faster for others. Combining both approaches, by first sorting according to Morton code and then shuffling the resulting sequence in batches of 128 points, leads to a vertex buffer layout with high rendering performance and low sensitivity to viewpoint changes.

Item Diverse Dance Synthesis via Keyframes with Transformer Controllers (The Eurographics Association and John Wiley & Sons Ltd., 2021) Pan, Junjun; Wang, Siyuan; Bai, Junxuan; Dai, Ju; Zhang, Fang-Lue and Eisemann, Elmar and Singh, Karan
Existing keyframe-based motion synthesis mainly focuses on the generation of cyclic actions or short-term motion, such as walking, running, and transitions between close postures. However, these methods significantly degrade the naturalness and diversity of the synthesized motion when dealing with complex and impromptu movements, e.g., dance performance and martial arts. In addition, current research lacks fine-grained control over the generated motion, which is essential for intelligent human-computer interaction and animation creation. In this paper, we propose a novel keyframe-based motion generation network based on multiple constraints, which can achieve diverse dance synthesis via learned knowledge. Specifically, the algorithm is mainly formulated based on the recurrent neural network (RNN) and the Transformer architecture.
The backbone of our network is a hierarchical RNN module composed of two long short-term memory (LSTM) units, in which the first LSTM embeds the posture information of the historical frames into a latent space, and the second predicts the human posture for the next frame. Moreover, our framework contains two Transformer-based controllers, which model the constraints of the root trajectory and the velocity factor, respectively, so as to better utilize the temporal context of the frames and achieve fine-grained motion control. We verify the proposed approach on a dance dataset covering a wide range of contemporary dance. The results of three quantitative analyses validate the superiority of our algorithm. The video and qualitative experimental results demonstrate that the complex motion sequences generated by our algorithm achieve diverse and smooth motion transitions between keyframes, even for long-term synthesis.

Item VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization (The Eurographics Association and John Wiley & Sons Ltd., 2021) Chatzimparmpas, Angelos; Martins, Rafael M.; Kucher, Kostiantyn; Kerren, Andreas; Borgo, Rita and Marai, G. Elisabeta and Landesberger, Tatiana von
During the training phase of machine learning (ML) models, it is usually necessary to configure several hyperparameters. This process is computationally intensive and requires an extensive search to infer the best hyperparameter set for the given problem. The challenge is exacerbated by the fact that most ML models are complex internally, and training involves trial-and-error processes that can markedly affect the predictive result. Moreover, each hyperparameter of an ML algorithm is potentially intertwined with the others, and changing it might have unforeseeable impacts on the remaining hyperparameters. Evolutionary optimization is a promising method to address these issues.
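The evolutionary idea just mentioned can be sketched in a few lines: keep the best-scoring hyperparameter sets and breed the rest via crossover and mutation. This is a toy sketch, not VisEvol's implementation; `evolve` and its parameter defaults are hypothetical:

```python
import random

def evolve(score, space, pop_size=40, generations=10, keep=5, seed=0):
    """Toy evolutionary hyperparameter search.

    score: fitness function over a hyperparameter dict (higher is better)
    space: dict mapping hyperparameter name -> list of candidate values"""
    rng = random.Random(seed)
    sample = lambda: {k: rng.choice(v) for k, v in space.items()}
    pop = [sample() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        survivors = pop[:keep]  # performant models are stored
        children = []
        while len(children) < pop_size - keep:
            a, b = rng.sample(survivors, 2)
            # crossover: inherit each hyperparameter from either parent
            child = {k: rng.choice([a[k], b[k]]) for k in space}
            if rng.random() < 0.3:  # mutation: re-draw one hyperparameter
                k = rng.choice(list(space))
                child[k] = rng.choice(space[k])
            children.append(child)
        pop = survivors + children
    return max(pop, key=score)
```

A visual-analytics layer such as the one described here would let the user inspect each generation and intervene, e.g., by promoting or discarding candidates before the next crossover round.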
In this method, performant models are stored, while the remainder are improved through crossover and mutation processes inspired by genetic algorithms. We present VisEvol, a visual analytics tool that supports interactive exploration of hyperparameters and intervention in this evolutionary procedure. In summary, our proposed tool helps the user generate new models through evolution and eventually explore powerful hyperparameter combinations in diverse regions of the extensive hyperparameter space. The outcome is a voting ensemble (with equal rights) that boosts the final predictive performance. The utility and applicability of VisEvol are demonstrated with two use cases and interviews with ML experts who evaluated the effectiveness of the tool.

Item Thin Cloud Removal for Single RGB Aerial Image (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Song, Chengfang; Xiao, Chunxia; Zhang, Yeting; Sui, Haigang; Benes, Bedrich and Hauser, Helwig
Acquired above variable clouds, aerial images contain both a ground reflection component and a cloud effect component. Due to their non‐uniformity, clouds in aerial images are even harder to remove than haze in terrestrial images. This paper proposes a divide‐and‐conquer scheme to remove thin translucent clouds from a single RGB aerial image. Based on the colour attenuation prior, we design a veiling metric that effectively indicates the local concentration of clouds. Using this metric, an aerial image containing clouds of varying thickness is segmented into multiple regions. Each region is veiled by clouds of nearly equal concentration, and hence subject to common assumptions, such as a boundary constraint on transmission. The atmospheric light in each region is estimated by a modified local colour‐line model and composed into a spatially‐varying airlight map for the entire image.
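Pipelines of this kind rest on the standard atmospheric scattering model I = J·t + A·(1−t), where I is the observed image, J the ground reflection, t the transmission, and A the airlight; given estimates of A and t, J follows by inversion. Below is a generic sketch of that model, not the authors' code; the function name and the clamping threshold are assumptions:

```python
import numpy as np

def recover_ground_reflection(image, airlight, transmission, t_min=0.1):
    """Invert I = J*t + A*(1 - t) for the ground reflection J.

    image, airlight: (h, w, 3) arrays in [0, 1];
    transmission: (h, w) array; t_min guards against division blow-up."""
    t = np.clip(transmission, t_min, 1.0)[..., None]
    return np.clip((image - airlight * (1.0 - t)) / t, 0.0, 1.0)
```

The clamping of t is a common practical safeguard: as transmission approaches zero, the division amplifies noise in the observed image.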
Scene transmission is then estimated and further refined by a weighted L1‐norm‐based contextual regularization. Finally, we recover the ground reflection via the atmospheric scattering model. We verify our cloud removal method on a number of aerial images containing thin clouds and compare our results with classical single‐image dehazing methods and a state‐of‐the‐art learning‐based declouding method.

Item Uncertainty-aware Visualization of Regional Time Series Correlation in Spatio-temporal Ensembles (The Eurographics Association and John Wiley & Sons Ltd., 2021) Evers, Marina; Huesmann, Karim; Linsen, Lars; Borgo, Rita and Marai, G. Elisabeta and Landesberger, Tatiana von
Given a time-varying scalar field, the analysis of correlations between different spatial regions, i.e., the linear dependence of time series within these regions, provides insights into the structural properties of the data. In this context, regions are connected components of the spatial domain with high time series correlations. The detection and analysis of such regions is often performed globally, which requires pairwise correlation computations that are quadratic in the number of spatial data samples. Thus, operations based on all pairwise correlations are computationally demanding, especially when dealing with ensembles that model the uncertainty in spatio-temporal phenomena using multiple simulation runs. We propose a two-step procedure: In the first step, we map the spatial samples to a 3D embedding based on a pairwise correlation matrix computed from the ensemble of time series. The 3D embedding allows for a one-to-one mapping to a 3D color space such that the outcome can be visually investigated by rendering the colors for all samples in the spatial domain. In the second step, we generate a hierarchical image segmentation based on the color images.
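The first step above, turning pairwise correlations into a 3D color embedding, could for instance use classical multidimensional scaling on the correlation dissimilarities. This is a hedged sketch; the paper's actual embedding technique may differ, and `correlation_to_rgb` is an illustrative name:

```python
import numpy as np

def correlation_to_rgb(ts):
    """Map time series to RGB colors via a 3D embedding of correlations.

    ts: (n_samples, n_time) array; returns (n_samples, 3) colors in [0, 1]."""
    dist = 1.0 - np.corrcoef(ts)          # correlation -> dissimilarity
    n = len(ts)
    # classical MDS: double-center squared distances, keep the top-3 axes
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ (dist ** 2) @ j
    vals, vecs = np.linalg.eigh(b)        # ascending eigenvalues
    w3 = vals[-3:]
    w3 = np.where(w3 > 1e-9 * max(w3.max(), 1.0), w3, 0.0)  # drop noise
    coords = vecs[:, -3:] * np.sqrt(w3)
    # normalize each axis to [0, 1] so coordinates read as RGB channels
    lo, hi = coords.min(axis=0), coords.max(axis=0)
    return (coords - lo) / np.where(hi > lo, hi - lo, 1.0)
```

Highly correlated samples then receive nearly identical colors, so correlated regions appear as coherent color patches in the spatial domain, which is what makes the subsequent image segmentation meaningful.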
From then on, we can visually analyze correlations of regions at all levels of the hierarchy within an interactive setting, which includes the uncertainty-aware analysis of a region's time series correlation and the respective time lags.

Item Perceptual Quality of BRDF Approximations: Dataset and Metrics (The Eurographics Association and John Wiley & Sons Ltd., 2021) Lavoué, Guillaume; Bonneel, Nicolas; Farrugia, Jean-Philippe; Soler, Cyril; Mitra, Niloy and Viola, Ivan
Bidirectional Reflectance Distribution Functions (BRDFs) are pivotal to the perceived realism in image synthesis. While measured BRDF datasets are available, reflectance functions are most often approximated by analytical formulas for storage efficiency reasons. These approximations are often obtained by minimizing metrics such as L2 or weighted quadratic distances, but these metrics do not usually correlate well with perceptual quality when the BRDF is used in a rendering context, which motivates a perceptual study. The contributions of this paper are threefold. First, we perform a large-scale user study to assess the perceptual quality of 2026 BRDF approximations, resulting in 84138 judgments across 1005 unique participants. We explore this dataset and analyze perceptual scores based on material type and illumination. Second, we assess nine analytical BRDF models in their ability to approximate tabulated BRDFs. Third, we assess several image-based and BRDF-based (Lp, optimal transport, and kernel distance) metrics in their ability to approximate perceptual similarity judgments.

Item Primitive Object Grasping for Finger Motion Synthesis (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Hwang, Jae‐Pyung; Park, Gangrae; Suh, Il Hong; Kwon, Taesoo; Benes, Bedrich and Hauser, Helwig
We developed a new framework to generate hand and finger grasping motions.
The proposed framework provides online adaptation to the position and orientation of objects and can generate grasping motions even when the object shape differs from that used during motion capture. This is achieved with a mesh model, which we call primitive object grasping (POG), that represents the object grasping motion. The POG model uses a mesh deformation algorithm that keeps the original shape of the mesh while adapting to varying constraints. These characteristics are beneficial for finger grasping motion synthesis that satisfies constraints both for mimicking the motion capture sequence and for grasping points that reflect the shape of the object. We verify the adaptability of the proposed motion synthesizer under position/orientation and shape variations by using motion capture sequences for grasping primitive objects, namely a sphere, a cylinder, and a box. In addition, a different grasp strategy, called a three‐finger grasp, is synthesized to validate the generality of the POG‐based synthesis framework.