38-Issue 1
Browsing 38-Issue 1 by Issue Date
Now showing 1 - 20 of 45
Item Real‐Time Facial Expression Transformation for Monocular RGB Video (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Ma, L.; Deng, Z.; Chen, Min and Benes, Bedrich
This paper describes a novel real‐time end‐to‐end system for facial expression transformation, without the need for any driving source. Its core idea is to directly generate desired and photo‐realistic facial expressions on top of input monocular RGB video. Specifically, an unpaired learning framework is developed to learn the mapping between any two facial expressions in the facial blendshape space. Then, it automatically transforms the source expression in an input video clip to a specified target expression through the combination of automated 3D face construction, the learned bi‐directional expression mapping and automated lip correction. It can be applied to new users without additional training. Its effectiveness is demonstrated through many experiments on faces from live and online video, with different identities, ages, speech and expressions.

Item Solid Geometry Processing on Deconstructed Domains (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Sellán, Silvia; Cheng, Herng Yi; Ma, Yuming; Dembowski, Mitchell; Jacobson, Alec; Chen, Min and Benes, Bedrich
Many tasks in geometry processing are modelled as variational problems solved numerically using the finite element method. For solid shapes, this requires a volumetric discretization, such as a boundary‐conforming tetrahedral mesh. Unfortunately, tetrahedral meshing remains an open challenge and existing methods either struggle to conform to complex boundary surfaces or require manual intervention to prevent failure. Rather than create a single volumetric mesh for the entire shape, we advocate for solid geometry processing on deconstructed domains, where a large and complex shape is composed of overlapping solid subdomains. As each smaller and simpler part is now easier to tetrahedralize, the question becomes how to account for overlaps during problem modelling and how to couple solutions on each subdomain together. We explore how and why previous coupling methods fail, and propose a method that couples solid domains only along their boundary surfaces. We demonstrate the superiority of this method through empirical convergence tests and qualitative applications to solid geometry processing on a variety of popular second‐order and fourth‐order partial differential equations.
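As a rough illustration of the overlapping-subdomain idea in the abstract above, the sketch below solves a 1D Poisson problem on two overlapping intervals with a classical alternating Schwarz iteration. This is not the paper's boundary-surface coupling scheme; the grid sizes, overlap and problem are invented purely to show "solve per overlapping piece, exchange interface values".

```python
# Illustrative only: alternating Schwarz on two overlapping 1D subdomains for
# -u'' = 1, u(0) = u(1) = 0.  The paper couples solid subdomains along their
# boundary surfaces instead; this sketch merely shows the deconstructed-domain
# idea of solving on overlapping pieces and exchanging interface values.
import numpy as np

def solve_poisson(n, h, left, right, f=1.0):
    """Finite-difference solve of -u'' = f on n interior nodes with Dirichlet ends."""
    A = np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
    b = np.full(n, f * h * h)
    b[0] += left
    b[-1] += right
    return np.linalg.solve(A, b)

N = 101                      # global grid nodes on [0, 1]
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
i1, i2 = 60, 40              # subdomain 1 = nodes 0..i1, subdomain 2 = nodes i2..N-1 (overlap i2..i1)
u = np.zeros(N)
for _ in range(50):          # exchange interface values until the overlap agrees
    u[1:i1] = solve_poisson(i1 - 1, h, left=0.0, right=u[i1])
    u[i2 + 1:N - 1] = solve_poisson(N - 2 - i2, h, left=u[i2], right=0.0)
exact = 0.5 * x * (1.0 - x)  # analytic solution of -u'' = 1 with zero Dirichlet data
print("max error:", np.abs(u - exact).max())
```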
Item MyEvents: A Personal Visual Analytics Approach for Mining Key Events and Knowledge Discovery in Support of Personal Reminiscence (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Parvinzamir, F.; Zhao, Y.; Deng, Z.; Dong, F.; Chen, Min and Benes, Bedrich
Reminiscence is an important aspect of our lives. It preserves precious memories, allows us to form our own identities and encourages us to accept the past. Our work takes advantage of modern sensor technologies to support reminiscence, enabling self‐monitoring of personal activities and individual movement in space and time on a daily basis. This paper presents MyEvents, a web‐based personal visual analytics platform designed for non‐computing experts, that allows for the collection of long‐term location and movement data and the generation of event mementos. Our research is focused on two prominent goals in event reminiscence: (1) selection subjectivity and human involvement in the process of self‐knowledge discovery and memento creation; and (2) the enhancement of event familiarity by presenting target events and their related information for optimal memory recall and reminiscence. A novel multi‐significance event ranking model is proposed to determine significant events in the personal history according to user preferences for event category, frequency and regularity. The evaluation results show that MyEvents effectively fulfils the reminiscence goals and tasks.
Item A Probabilistic Steering Parameter Model for Deterministic Motion Planning Algorithms (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Agethen, Philipp; Gaisbauer, Felix; Rukzio, Enrico; Chen, Min and Benes, Bedrich
The simulation of two‐dimensional human locomotion from a bird's eye perspective is a key technology for various domains to realistically predict walk paths. The generated trajectories, however, frequently deviate from reality due to the use of simplifying assumptions. For instance, common deterministic motion planning algorithms predominantly utilize a set of static steering parameters (e.g. maximum acceleration or velocity of the agent) to simulate the walking behaviour of a person. This procedure neglects important influence factors which have a significant impact on the spatio‐temporal characteristics of the resulting motion, such as the operator's physical condition or the probabilistic nature of the human locomotor system. To overcome this drawback, this paper presents an approach to derive probabilistic motion models from a database of captured human motions. Although initially designed for industrial purposes, this method can be applied to a wide range of use cases while considering an arbitrary number of dependencies (input) and steering parameters (output). To underline its applicability, a probabilistic steering parameter model is implemented, which models velocity, angular velocity and acceleration as a function of the travel distance, path curvature and height of the respective person. Finally, the technical performance and advantages of this model are demonstrated within an evaluation.
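The general idea of replacing a fixed steering parameter with a feature-conditioned distribution can be sketched as follows. The feature names, coefficients and noise level below are entirely made up for illustration; the paper fits its conditional distributions to a motion-capture database.

```python
# Hypothetical sketch: draw a walking velocity from a distribution whose
# parameters depend on path features (travel distance, curvature) rather than
# using one fixed deterministic maximum velocity.  All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)

def sample_velocity(distance_m, curvature_1_per_m, n=1):
    """Sample a preferred walking speed [m/s] given simple path features."""
    mean = 1.4 + 0.02 * min(distance_m, 10.0) - 0.6 * curvature_1_per_m   # invented coefficients
    std = 0.15                                                            # invented spread
    return np.clip(rng.normal(mean, std, size=n), 0.2, 2.5)

# Each simulated person (or run) gets a slightly different, plausible speed.
print(sample_velocity(distance_m=8.0, curvature_1_per_m=0.3, n=5))
```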
Item Autonomous Particles for Interactive Flow Visualization (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Engelke, Wito; Lawonn, Kai; Preim, Bernhard; Hotz, Ingrid; Chen, Min and Benes, Bedrich
We present an interactive approach to analyse flow fields using a new type of particle system, which is composed of autonomous particles exploring the flow. While particles provide a very intuitive way to visualize flows, it is a challenge to capture the important features with such systems. Particles tend to cluster in regions of low velocity and regions of interest are often sparsely populated. To overcome these disadvantages, we propose an automatic adaptation of the particle density with respect to local importance measures. These measures are user defined and the system's sensitivity to them can be adjusted interactively. Together with the particle history, these measures define a probability for particles to multiply or die, respectively. There is no communication between the particles and no neighbourhood information has to be maintained. Thus, the particles can be handled in parallel and support a real‐time investigation of flow fields. To enhance the visualization, the particles' properties and selected field measures are also used to specify the system's rendering parameters, such as colour and size. We demonstrate the effectiveness of our approach on different simulated vector fields from technical and medical applications.
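A minimal sketch of the multiply-or-die mechanism described above: particles advect through a toy vector field, keep a running importance history, and are duplicated or removed with probabilities driven by that history. The field, importance measure and constants are placeholders, not the paper's.

```python
# Hypothetical sketch of importance-driven particle splitting and dying.
import numpy as np

rng = np.random.default_rng(1)

def velocity(p):                       # toy 2D vector field (a vortex)
    x, y = p[:, 0], p[:, 1]
    return np.stack([-y, x], axis=1)

def importance(p):                     # user-defined measure, here local speed
    return np.linalg.norm(velocity(p), axis=1)

pts = rng.uniform(-1.0, 1.0, size=(200, 2))
hist = importance(pts)
for step in range(100):
    pts = pts + 0.01 * velocity(pts)                        # advect
    hist = 0.9 * hist + 0.1 * importance(pts)               # running importance history
    p_split = np.clip(0.05 * (hist - hist.mean()), 0.0, 0.2)
    p_die = np.clip(0.05 * (hist.mean() - hist), 0.0, 0.2)
    u = rng.random(len(pts))
    keep = u >= p_die
    split = (u < p_split) & keep
    children = pts[split] + rng.normal(0.0, 0.01, size=(split.sum(), 2))
    pts = np.vstack([pts[keep], children])                  # no inter-particle communication needed
    hist = np.concatenate([hist[keep], hist[split]])
print("particles after adaptation:", len(pts))
```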
Item Optimal Sample Weights for Hemispherical Integral Quadratures (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Marques, Ricardo; Bouville, Christian; Bouatouch, Kadi; Chen, Min and Benes, Bedrich
This paper proposes optimal quadrature rules over the hemisphere for the shading integral. We leverage recent work on the theory of quadrature rules over the sphere to derive a new theoretical framework for the general case of hemispherical quadrature error analysis. We then apply our framework to the case of the shading integral. We show that our quadrature error theory can be used to derive optimal sample weights (OSW) which account for both the features of the sampling pattern and the bidirectional reflectance distribution function (BRDF). Our method significantly outperforms familiar Quasi Monte Carlo (QMC) and stochastic Monte Carlo techniques. Our results show that the OSW are very effective in compensating for possible irregularities in the sample distribution. This makes it possible, for example, to significantly exceed the regular convergence rate of stochastic Monte Carlo while keeping the exact same sample sets. Another important benefit of our method is that the OSW can be applied whatever the sampling point distribution: the sample distribution need not follow a probability density function, which makes our technique much more flexible than QMC or stochastic Monte Carlo solutions. In particular, our theoretical framework allows us to easily combine point sets derived from different sampling strategies (e.g. targeted to diffuse and glossy BRDFs). In this context, our rendering results show that our approach outperforms MIS (multiple importance sampling) techniques.
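The flavour of "weights tailored to a given sample set" can be shown with a generic moment-matching stand-in: choose weights so the rule integrates a small basis over the hemisphere exactly, then reuse them for another integrand. The paper's OSW come from a different, BRDF-aware error analysis; the basis and integrand below are illustrative only.

```python
# Generic sketch: solve for per-sample weights that reproduce the exact
# hemispherical integrals of cos^k(theta) (which equal 2*pi/(k+1)), then apply
# the same weights to an integrand outside that basis.  Not the paper's OSW.
import numpy as np

rng = np.random.default_rng(2)
N, K = 128, 6
cos_t = rng.random(N)        # cos(theta) of uniformly distributed hemisphere directions
                             # (azimuth is irrelevant for the cos-only integrands used here)

A = np.stack([cos_t**k for k in range(K + 1)])                 # (K+1, N) basis evaluations
b = np.array([2.0 * np.pi / (k + 1) for k in range(K + 1)])    # exact integrals of cos^k over the hemisphere
w, *_ = np.linalg.lstsq(A, b, rcond=None)                      # minimum-norm weights matching the moments

f = cos_t**3.5                                                 # test integrand outside the basis
exact = 2.0 * np.pi / 4.5
mc_uniform = (2.0 * np.pi / N) * f.sum()                       # plain uniform-weight Monte Carlo estimate
print("exact:", exact, "  weighted:", float(w @ f), "  uniform MC:", mc_uniform)
```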
Item Shading‐Based Surface Recovery Using Subdivision‐Based Representation (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Deng, Teng; Zheng, Jianmin; Cai, Jianfei; Cham, Tat‐Jen; Chen, Min and Benes, Bedrich
This paper presents subdivision‐based representations for both lighting and geometry in shape‐from‐shading. A very recent shading‐based method introduced a per‐vertex overall illumination model for surface reconstruction, which has the advantage of conveniently handling complicated lighting conditions and avoiding explicit estimation of visibility and varying albedo. However, due to its discrete nature, the per‐vertex overall illumination requires a large amount of memory and lacks intrinsic coherence. To overcome these problems, we propose to use classic subdivision to define the basic smooth lighting function and surface, and introduce additional independent variables into the subdivision to adaptively model sharp changes of illumination and geometry. Compared to previous work, the new model not only preserves the merits of the per‐vertex illumination model, but also greatly reduces the number of variables required in surface recovery and intrinsically regularizes the illumination vectors and the surface. These features make the new model well suited to multi‐view stereo surface reconstruction under general, unknown illumination conditions. In particular, a variational surface reconstruction method built upon the subdivision representations for lighting and geometry is developed. Experiments on both synthetic and real‐world data sets demonstrate that the proposed method achieves memory efficiency and improves surface detail recovery.

Item A Survey of Information Visualization Books (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Rees, D.; Laramee, R. S.; Chen, Min and Benes, Bedrich
Information visualization is a rapidly evolving field with a growing volume of scientific literature and texts continually published. To keep abreast of the latest developments in the domain, survey papers and state‐of‐the‐art reviews provide valuable tools for managing the large quantity of scientific literature. Recently, a survey of survey papers was published to keep track of the quantity of refereed survey papers in information visualization conferences and journals. However, no such resource exists to inform readers of the large volume of books being published on the subject, leaving the possibility of valuable knowledge being overlooked. We present the first literature survey of information visualization books that addresses this challenge by surveying the large volume of books on the topic of information visualization and visual analytics. This unique survey addresses some special challenges associated with collections of books (as opposed to research papers), including searching, browsing and cost. This paper features a novel two‐level classification based on both books and the chapter topics examined in each book, enabling the reader to quickly identify to what depth a topic of interest is covered within a particular book. Readers can use this survey to identify the most relevant book for their needs amongst a quickly expanding collection. In indexing the landscape of information visualization books, this survey provides a valuable resource to both experienced researchers and newcomers in the data visualization discipline.
Item Filtered Quadrics for High‐Speed Geometry Smoothing and Clustering (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Legrand, Hélène; Thiery, Jean‐Marc; Boubekeur, Tamy; Chen, Min and Benes, Bedrich
Modern 3D capture pipelines produce dense surface meshes at high speed, which challenges geometric operators to process such massive data on‐the‐fly. In particular, aiming at instantaneous feature‐preserving smoothing and clustering disqualifies global variational optimizers, and one usually relies on high‐performance parallel kernels based on simple measures performed on the positions and normal vectors associated with the surface vertices. Although these operators are effective on small supports, they fail at properly capturing larger scale surface structures. To cope with this problem, we propose to enrich the surface representation with filtered quadrics, a compact and discriminating range space to guide processing. Compared to normal‐based approaches, this additional vertex attribute significantly improves feature preservation for fast bilateral filtering and mode‐seeking clustering, while exhibiting a linear memory cost in the number of vertices and retaining the simplicity of convolutional filters. In particular, the overall performance of our approach stems from its natural compatibility with modern fine‐grained parallel computing architectures such as graphics processing units (GPUs). As a result, filtered quadrics offer a superior ability to handle a broad spectrum of frequencies and preserve large salient structures, delivering meshes on‐the‐fly for interactive and streaming applications, as well as quickly processing large data collections, instrumental in learning‐based geometry analysis.
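For readers unfamiliar with quadrics as a per-vertex attribute, the sketch below accumulates the standard 4x4 face-plane quadrics at each vertex of a tiny mesh. The filtering of this quadric field and its use for bilateral smoothing and clustering, which are the paper's contribution, are not shown.

```python
# Standard per-vertex plane quadrics (sum of p p^T over incident triangle planes).
import numpy as np

def vertex_quadrics(V, F):
    """V: (n,3) vertex positions, F: (m,3) triangle indices -> (n,4,4) quadrics."""
    Q = np.zeros((len(V), 4, 4))
    for i0, i1, i2 in F:
        n = np.cross(V[i1] - V[i0], V[i2] - V[i0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                      # skip degenerate triangles
        n = n / norm
        p = np.append(n, -n @ V[i0])      # plane (a, b, c, d) through the triangle
        K = np.outer(p, p)                # squared point-to-plane distance as x^T K x
        for i in (i0, i1, i2):
            Q[i] += K
    return Q

# A unit tetrahedron as a tiny test mesh.
V = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
F = np.array([[0, 1, 2], [0, 3, 1], [0, 2, 3], [1, 3, 2]])
Q = vertex_quadrics(V, F)
x0 = np.append(V[0], 1.0)
print("error of vertex 0 against its own quadric:", float(x0 @ Q[0] @ x0))  # ~0
```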
Item Flexible Use of Temporal and Spatial Reasoning for Fast and Scalable CPU Broad‐Phase Collision Detection Using KD‐Trees (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Serpa, Ygor Rebouças; Rodrigues, Maria Andréia Formico; Chen, Min and Benes, Bedrich
Realistic computer simulations of physical elements such as rigid and deformable bodies, particles and fractures are commonplace in the modern world. In these simulations, broad‐phase collision detection plays an important role in ensuring that simulations can scale with the number of objects. In these applications, several degrees of motion coherency, distinct spatial distributions and different types of objects exist; however, few attempts have been made at a generally applicable solution for their broad phase. In this regard, this work presents a novel broad‐phase collision detection algorithm based upon a hybrid SIMD‐optimized KD‐Tree and sweep‐and‐prune, aimed at general applicability. Our solution is optimized for several object distributions, degrees of motion coherence and varying object sizes. These features are made possible by an efficient and idempotent two‐step tree optimization algorithm and by selectively enabling coherency optimizations. We have tested our solution under varying scenario setups and compared it to other solutions available in the literature and industry, up to a million simulated objects. The results show that our solution is competitive, with average performance values two to three times better than those achieved by other state‐of‐the‐art AABB‐based CPU solutions.
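For context, the sweep-and-prune half of the hybrid described above boils down to sorting box endpoints along one axis and only testing boxes whose intervals overlap there. The sketch below shows that plain idea over random AABBs; the paper's KD-tree, SIMD and coherency machinery are not reproduced.

```python
# Plain sweep-and-prune broad phase over axis-aligned bounding boxes (AABBs).
import numpy as np

rng = np.random.default_rng(3)
n = 1000
mins = rng.uniform(0.0, 100.0, size=(n, 3))
maxs = mins + rng.uniform(0.1, 2.0, size=(n, 3))

order = np.argsort(mins[:, 0])                 # sweep along x by ascending min-x
active, pairs = [], []
for i in order:
    active = [j for j in active if maxs[j, 0] >= mins[i, 0]]   # prune boxes whose x-interval ended
    for j in active:                                           # x-intervals overlap by construction
        if np.all(mins[i, 1:] <= maxs[j, 1:]) and np.all(mins[j, 1:] <= maxs[i, 1:]):
            pairs.append((j, i))                               # y/z intervals overlap too
    active.append(i)
print("overlapping AABB pairs:", len(pairs))
```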
Item Robust Structure‐Based Shape Correspondence (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Kleiman, Yanir; Ovsjanikov, Maks; Chen, Min and Benes, Bedrich
We present a robust method to find region‐level correspondences between shapes, which are invariant to changes in geometry and applicable across multiple shape representations. We generate simplified shape graphs by jointly decomposing the shapes, and devise an adapted graph‐matching technique, from which we infer correspondences between shape regions. The simplified shape graphs are designed to primarily capture the overall structure of the shapes, without reflecting precise information about the geometry of each region, which enables us to find correspondences between shapes that might have significant geometric differences. Moreover, due to the special care we take to ensure the robustness of each part of our pipeline, our method can find correspondences between shapes with different representations, such as triangular meshes and point clouds. We demonstrate that the region‐wise matching that we obtain can be used to find correspondences between feature points, reveal the intrinsic self‐similarities of each shape and even construct point‐to‐point maps across shapes. Our method is both time and space efficient, leading to a pipeline that is significantly faster than comparable approaches. We demonstrate the performance of our approach through an extensive quantitative and qualitative evaluation on several benchmarks, where we achieve comparable or superior performance to existing methods.

Item Incremental Labelling of Voronoi Vertices for Shape Reconstruction (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Peethambaran, J.; Parakkat, A.D.; Tagliasacchi, A.; Wang, R.; Muthuganapathy, R.; Chen, Min and Benes, Bedrich
We present an incremental Voronoi vertex labelling algorithm for approximating contours, medial axes and dominant points (high curvature points) from 2D point sets. Though there exist many algorithms for reconstructing curves, medial axes or dominant points, a unified framework capable of approximating all three from points is missing in the literature. Our algorithm estimates the normals at each sample point through poles (farthest Voronoi vertices of a sample point) and uses the estimated normals and the corresponding tangents to determine the spatial locations (inner or outer) of the Voronoi vertices with respect to the original curve. The vertex classification helps to construct a piece‐wise linear approximation to the object boundary. We provide a theoretical analysis of the algorithm for points non‐uniformly (ε‐sampling) sampled from simple, closed, concave and smooth curves. The proposed framework has been thoroughly evaluated for its usefulness using various test data. Results indicate that even sparsely and non‐uniformly sampled curves with outliers, or collections of curves, are faithfully reconstructed by the proposed algorithm.
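The classical pole construction the abstract above builds on can be sketched directly with SciPy: for each sample, take the farthest vertex of its Voronoi cell and use the direction to it as a normal estimate. The labelling of inner and outer Voronoi vertices and the reconstruction itself are not reproduced here.

```python
# Pole-based normal estimation from the Voronoi diagram of 2D curve samples.
import numpy as np
from scipy.spatial import Voronoi

t = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
pts = np.stack([np.cos(t), 0.6 * np.sin(t)], axis=1)            # samples of an ellipse

vor = Voronoi(pts)
normals = np.zeros_like(pts)
for i, region_idx in enumerate(vor.point_region):
    region = [v for v in vor.regions[region_idx] if v != -1]    # drop the vertex at infinity
    if not region:
        continue
    verts = vor.vertices[region]
    pole = verts[np.argmax(np.linalg.norm(verts - pts[i], axis=1))]   # farthest Voronoi vertex
    n = pole - pts[i]
    normals[i] = n / np.linalg.norm(n)

# True (unnormalized) ellipse normal at (cos t, 0.6 sin t) is (0.6 cos t, sin t).
true_n = np.stack([0.6 * np.cos(t), np.sin(t)], axis=1)
true_n /= np.linalg.norm(true_n, axis=1, keepdims=True)
err_deg = np.degrees(np.arccos(np.clip(np.abs(np.sum(normals * true_n, axis=1)), -1.0, 1.0)))
print("median normal error (degrees):", np.median(err_deg))
```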
Item FitConnect: Connecting Noisy 2D Samples by Fitted Neighbourhoods (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Ohrhallinger, S.; Wimmer, M.; Chen, Min and Benes, Bedrich
We propose a parameter‐free method to recover manifold connectivity in unstructured 2D point clouds with high noise in terms of the local feature size. This enables us to capture the features which emerge out of the noise. To achieve this, we extend an earlier reconstruction algorithm, which connects samples to two (noise‐free) neighbours and has been proven to output a manifold for a relaxed sampling condition. Applying this condition to noisy samples by projecting their k‐nearest neighbourhoods onto local circular fits leads to multiple candidate neighbour pairs and thus makes connecting them consistently an NP‐hard problem. To solve this efficiently, we design an algorithm that searches that solution space iteratively on different scales of k. It achieves linear time complexity in terms of point count plus quadratic time in the size of noise clusters. Our algorithm extends seamlessly to connect both samples with and without noise, performs as locally as the recovered features and can output multiple open or closed piecewise curves. Incidentally, our method simplifies the output geometry by eliminating all but a representative point from noisy clusters. Since local neighbourhood fits overlap consistently, the resulting connectivity represents an ordering of the samples along a manifold. This permits us to simply blend the local fits for denoising with the locally estimated noise extent. Aside from applications like reconstructing silhouettes of noisy sensed data, this lays important groundwork to improve surface reconstruction in 3D. Our open‐source algorithm is available online.
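One ingredient mentioned above, fitting a circle to a noisy local neighbourhood, can be done with a simple algebraic (Kasa-style) least-squares fit, sketched below. The candidate-pair search and manifold connectivity of the actual method are not shown, and the sample data is invented.

```python
# Algebraic least-squares circle fit to a noisy 2D neighbourhood.
import numpy as np

rng = np.random.default_rng(4)

def fit_circle(pts):
    """Kasa-style fit; returns (center, radius)."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones(len(pts))])
    b = x**2 + y**2
    c, *_ = np.linalg.lstsq(A, b, rcond=None)        # solves x^2 + y^2 = c0*x + c1*y + c2
    center = 0.5 * c[:2]
    radius = np.sqrt(c[2] + center @ center)
    return center, radius

# Noisy samples from a circular arc of radius 2 centred at (1, -1).
t = rng.uniform(0.0, np.pi / 2, 30)
pts = np.stack([1.0 + 2.0 * np.cos(t), -1.0 + 2.0 * np.sin(t)], axis=1)
pts += rng.normal(0.0, 0.02, pts.shape)
center, radius = fit_circle(pts)
print("fitted center:", center, "radius:", radius)
```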
Item TexNN: Fast Texture Encoding Using Neural Networks (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Pratapa, S.; Olson, T.; Chalfin, A.; Manocha, D.; Chen, Min and Benes, Bedrich
We present a novel deep learning‐based method for fast encoding of textures into current texture compression formats. Our approach uses state‐of‐the‐art neural network methods to compute the appropriate encoding configurations for fast compression. A key bottleneck in current encoding algorithms is the search step, and we reduce that computation to a classification problem. We use a trained neural network approximation to quickly compute the encoding configuration for a given texture. We have evaluated our approach for compressing textures in the widely used adaptive scalable texture compression (ASTC) format and evaluate the performance for different block sizes corresponding to 4 × 4, 6 × 6 and 8 × 8. Overall, our method (TexNN) speeds up the encoding computation by up to an order of magnitude compared to prior compression algorithms with very little or no loss in visual quality.

Item Privacy Preserving Visualization: A Study on Event Sequence Data (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Chou, Jia‐Kai; Wang, Yang; Ma, Kwan‐Liu; Chen, Min and Benes, Bedrich
The ability to collect personal data on an unprecedented scale, now common practice, together with the power of data‐driven approaches to business, services and security, introduces significant privacy issues. There have been extensive studies on addressing privacy preserving problems in the data mining community, but relatively few have provided supervised control over the anonymization process. Preserving both the value and privacy of the data is largely a non‐trivial task. We present the design and evaluation of a visual interface that assists users in employing commonly used data anonymization techniques for making privacy preserving visualizations. Specifically, we focus on event sequence data due to its vulnerability to privacy concerns. Our interface is designed for data owners to examine potential privacy issues, obfuscate information as suggested by the algorithm and fine‐tune the results at their discretion. Multiple use case scenarios demonstrate the utility of our design. A user study further investigates the effectiveness of the privacy preserving strategies. Our results show that using a visual‐based interface is effective for identifying potential privacy issues, for revealing underlying anonymization processes, and for allowing users to balance between data utility and privacy.
Item MegaViews: Scalable Many‐View Rendering With Concurrent Scene‐View Hierarchy Traversal (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Kol, Timothy R.; Bauszat, Pablo; Lee, Sungkil; Eisemann, Elmar; Chen, Min and Benes, Bedrich
We present a scalable solution to render complex scenes from a large number of viewpoints. While previous approaches rely either on a scene hierarchy or a view hierarchy to process multiple elements together, we make full use of both, enabling sublinear performance in terms of views and scene complexity. By concurrently traversing the hierarchies, we efficiently find shared information among views to amortize rendering costs. One example application is many‐light global illumination. Our solution accelerates shadow map generation for virtual point lights, whose number can now be raised to over a million while maintaining interactive rates.
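The core amortization idea above, letting a whole cluster of views reject a whole cluster of geometry with one test, can be sketched with two tiny median-split hierarchies traversed together. Here "visibility" is just a maximum-range test around point-light views; the paper's shared-information scheme and GPU implementation differ.

```python
# Concurrent traversal of a view hierarchy and a scene hierarchy (illustrative).
import numpy as np

rng = np.random.default_rng(5)
RANGE = 10.0                                         # each view only "sees" geometry within this radius
views = rng.uniform(0.0, 100.0, size=(256, 3))       # point-light viewpoints
objects = rng.uniform(0.0, 100.0, size=(4096, 3))    # scene object centres
tests = 0

def node(pts):
    """A hierarchy node: (lower corner, upper corner, points, children)."""
    if len(pts) <= 8:
        return (pts.min(0), pts.max(0), pts, None)
    axis = np.argmax(pts.max(0) - pts.min(0))
    pts = pts[np.argsort(pts[:, axis])]
    mid = len(pts) // 2
    return (pts.min(0), pts.max(0), pts, (node(pts[:mid]), node(pts[mid:])))

def boxes_apart(a, b, margin):
    """True if the two AABBs are farther apart than `margin` along some axis."""
    return bool(np.any(a[0] - b[1] > margin) or np.any(b[0] - a[1] > margin))

def traverse(v, s):
    global tests
    tests += 1
    if boxes_apart(v, s, RANGE):          # one test culls a view cluster against a scene cluster
        return 0
    if v[3] is None and s[3] is None:     # leaf x leaf: count surviving view-object pairs
        d = np.linalg.norm(v[2][:, None, :] - s[2][None, :, :], axis=2)
        return int((d <= RANGE).sum())
    if s[3] is None or (v[3] is not None and len(v[2]) >= len(s[2])):
        return traverse(v[3][0], s) + traverse(v[3][1], s)
    return traverse(v, s[3][0]) + traverse(v, s[3][1])

pairs = traverse(node(views), node(objects))
print("visible view-object pairs:", pairs, " cluster tests:", tests,
      " brute-force tests:", len(views) * len(objects))
```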
Item VisFM: Visual Analysis of Image Feature Matchings (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Li, Chenhui; Baciu, George; Chen, Min and Benes, Bedrich
Feature matching is the most basic and pervasive problem in computer vision and it has become a primary component in big data analytics. Many tools have been developed for extracting and matching features in video streams and image frames. However, one of the most basic tools, that is, a tool for simply visualizing matched features for the comparison and evaluation of computer vision algorithms, is not generally available, especially when dealing with a large number of matching lines. We introduce VisFM, an integrated visual analysis system for comprehending and exploring image feature matchings. VisFM presents a matching view with intuitive line bundling to provide useful insights regarding the quality of matched features. VisFM is capable of showing a summarization of the features and matchings through a group view to assist domain experts in observing feature matching patterns from multiple perspectives. VisFM incorporates a series of interactions for exploring the feature data. We demonstrate the visual efficacy of VisFM by applying it to three scenarios. Informal expert feedback from our collaborator in computer vision demonstrates how VisFM can be used for comparing and analysing feature matchings when the goal is to improve an image retrieval algorithm.

Item Learning A Stroke‐Based Representation for Fonts (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Balashova, Elena; Bermano, Amit H.; Kim, Vladimir G.; DiVerdi, Stephen; Hertzmann, Aaron; Funkhouser, Thomas; Chen, Min and Benes, Bedrich
Designing fonts and typefaces is a difficult process for both beginner and expert typographers. Existing workflows require the designer to create every glyph, while adhering to many loosely defined design suggestions to achieve an aesthetically appealing and coherent character set. This process can be significantly simplified by exploiting the similar structure character glyphs present across different fonts and the shared stylistic elements within the same font. To capture these correlations, we propose learning a stroke‐based font representation from a collection of existing typefaces. To enable this, we develop a stroke‐based geometric model for glyphs and a fitting procedure to reparametrize arbitrary fonts to our representation. We demonstrate the effectiveness of our model through a manifold learning technique that estimates a low‐dimensional font space. Our representation captures a wide range of everyday fonts with topological variations and naturally handles discrete and continuous variations, such as the presence and absence of stylistic elements as well as slants and weights. We show that our learned representation can be used for iteratively improving fit quality, as well as for exploratory style applications such as completing a font from a subset of observed glyphs, and interpolating or adding and removing stylistic elements in existing fonts.

Item Ballet (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Lawonn, Kai; Günther, Tobias; Chen, Min and Benes, Bedrich

Item A Survey on 3D Virtual Object Manipulation: From the Desktop to Immersive Virtual Environments (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Mendes, D.; Caputo, F. M.; Giachetti, A.; Ferreira, A.; Jorge, J.; Chen, Min and Benes, Bedrich
Interactions within virtual environments often require manipulating 3D virtual objects. To this end, researchers have endeavoured to find efficient solutions using either traditional input devices or focusing on different input modalities, such as touch and mid‐air gestures. Different virtual environments and diverse input modalities present specific issues to control object position, orientation and scaling: traditional mouse input, for example, presents non‐trivial challenges because of the need to map between 2D input and 3D actions. While interactive surfaces enable more natural approaches, they still require smart mappings. Mid‐air gestures can be exploited to offer natural manipulations mimicking interactions with physical objects. However, these approaches often lack precision and control. All these issues and many others have been addressed in a large body of work. In this article, we survey the state of the art in 3D object manipulation, ranging from traditional desktop approaches to touch and mid‐air interfaces, to interact in diverse virtual environments. We propose a new taxonomy to better classify manipulation properties. Using our taxonomy, we discuss the techniques presented in the surveyed literature, highlighting trends, guidelines and open challenges that can be useful both to future research and to developers of 3D user interfaces.