39-Issue 6
Browsing 39-Issue 6 by Subject "animation"
Now showing 1 - 7 of 7
Item
Accelerating Liquid Simulation With an Improved Data‐Driven Method
(© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Gao, Yang; Zhang, Quancheng; Li, Shuai; Hao, Aimin; Qin, Hong; Benes, Bedrich and Hauser, Helwig
In physics‐based liquid simulation for graphics applications, pressure projection consumes a significant amount of computational time and is frequently the bottleneck of computational efficiency. Rapidly performing the pressure projection while accurately capturing the liquid geometry remains a central topic in liquid-simulation research. In this paper, we incorporate an artificial neural network into the simulation pipeline to handle the challenging projection step for liquid animation. Compared with previous neural-network-based works for gas flows, this paper advocates new advances in the composition of representative features as well as the loss functions, in order to facilitate fluid simulation with a free-surface boundary. Specifically, we choose both the velocity and the level-set function as additional representations of the fluid state, which allows not only the motion but also the boundary position to be considered by the neural network solver. Meanwhile, we use the divergence error in the loss function to further emulate the lifelike behaviour of liquid. With these arrangements, our method greatly accelerates the pressure projection step in liquid simulation while maintaining convincing visual results.
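A loss of the kind the abstract describes, combining a supervised pressure term with a penalty on the residual divergence of the corrected velocity, can be sketched as follows. This is a minimal NumPy illustration; the names `projection_loss` and `divergence`, the 2D periodic grid, and the central-difference stencil are assumptions for the example, not the paper's implementation.

```python
import numpy as np

def divergence(u, v, h=1.0):
    # Central-difference divergence of a 2D velocity field (u, v) sampled
    # on a periodic grid with spacing h.
    du_dx = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / (2 * h)
    dv_dy = (np.roll(v, -1, axis=0) - np.roll(v, 1, axis=0)) / (2 * h)
    return du_dx + dv_dy

def projection_loss(p_pred, p_ref, u_corr, v_corr, lam=1.0):
    # Supervised pressure error plus a physics term penalizing any
    # divergence left in the pressure-corrected velocity field.
    data_term = np.mean((p_pred - p_ref) ** 2)
    div_term = np.mean(divergence(u_corr, v_corr) ** 2)
    return data_term + lam * div_term
```

The weight `lam` trades off matching the reference pressure against enforcing incompressibility of the corrected velocity.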
Additionally, our neural network performs well when applied to new scene synthesis, even with varied boundaries or scales.

Item
Constructing Human Motion Manifold With Sequential Networks
(© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Jang, Deok‐Kyeong; Lee, Sung‐Hee; Benes, Bedrich and Hauser, Helwig
This paper presents a novel recurrent neural network-based method to construct a latent motion manifold that can represent a wide range of human motions in a long sequence. We introduce several new components to increase the spatial and temporal coverage of the motion space while retaining the details of motion-capture data. These include new regularization terms for the motion manifold, a combination of two complementary decoders for predicting joint rotations and joint velocities, and the addition of a forward kinematics layer to consider both joint rotation and position errors. In addition, we propose a set of loss terms that improve the overall quality of the motion manifold in various respects, such as the capability of reconstructing not only the motion but also the latent manifold vector, and the naturalness of the motion through an adversarial loss. These components contribute to creating a compact and versatile motion manifold that allows new motions to be created by random sampling and algebraic operations, such as interpolation and analogy, in the latent motion manifold.

Item
Data‐Driven Facial Simulation
(© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Romeo, M.; Schvartzman, S. C.; Benes, Bedrich and Hauser, Helwig
In Visual Effects, the creation of realistic facial performances is still a challenge that the industry is trying to overcome. Blendshape deformation is used to reproduce the action of different groups of muscles, which produces realistic static results.
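The blendshape deformation mentioned above follows the standard delta formulation: the deformed mesh is the neutral mesh plus a weighted sum of per-target offsets. A minimal sketch, with `evaluate_blendshapes` as a hypothetical name for this well-known formula (not code from the paper):

```python
import numpy as np

def evaluate_blendshapes(neutral, targets, weights):
    # Delta blendshape formula: neutral + sum_i w_i * (target_i - neutral).
    # neutral: (n_vertices, 3); targets: list of (n_vertices, 3) arrays.
    result = neutral.astype(float).copy()
    for target, w in zip(targets, weights):
        result += w * (target - neutral)
    return result
```

Each weight `w_i` animates one sculpted facial target; with all weights at zero the neutral face is returned unchanged.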
However, this is not sufficient to generate believable and detailed facial performances for animated digital characters. To increase the realism of facial performances, it is possible to enhance standard facial rigs with physical simulation approaches. However, setting up a simulation rig and controlling material properties according to the performance is not an easy task, and can take considerable time and many iterations to get right. We present a workflow that allows us to generate an activation map for the fibres of a set of superficial patches we call pseudo-muscles. The pseudo-muscles are automatically identified by using k-means to cluster the data from the blendshape targets in the animation rig and to compute the direction of their contraction (the direction of the pseudo-muscle fibres). We use an Extended Position-Based Dynamics solver to add physical simulation to the facial animation, controlling the behaviour of the simulation through the activation map. We show the results achieved with the proposed solution on two digital humans and one fantasy cartoon character, demonstrating that the identified pseudo-muscles approximate facial anatomy and that the simulation properties are properly controlled, increasing realism while preserving the work of animators.

Item
Hyperspectral Inverse Skinning
(© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Liu, Songrun; Tan, Jianchao; Deng, Zhigang; Gingold, Yotam; Benes, Bedrich and Hauser, Helwig
In example-based inverse linear blend skinning (LBS), a collection of poses (e.g. animation frames) is given, and the goal is to find skinning weights and transformation matrices that closely reproduce the input. These poses may come from physical simulation, direct mesh editing, motion capture or another deformation rig. We provide a re-formulation of inverse skinning as a problem in high-dimensional Euclidean space.
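The forward LBS model that inverse skinning seeks to invert can be sketched as follows: each deformed vertex is a weighted combination of its rest position mapped through per-handle affine transforms. This is a minimal NumPy version of the standard formula; the function name and array shapes are illustrative assumptions, not the paper's code.

```python
import numpy as np

def lbs(rest_vertices, weights, transforms):
    # Linear blend skinning.
    # rest_vertices: (n, 3); weights: (n, n_handles), rows sum to 1;
    # transforms: (n_handles, 3, 4) affine matrices acting on homogeneous points.
    n = rest_vertices.shape[0]
    homog = np.hstack([rest_vertices, np.ones((n, 1))])       # (n, 4)
    per_handle = np.einsum('hij,nj->nhi', transforms, homog)  # (n, H, 3)
    return np.einsum('nh,nhi->ni', weights, per_handle)
```

Inverse LBS takes deformed vertex positions across many poses as input and recovers `weights` and `transforms`; in the paper's formulation the transforms become the vertices of a minimum-volume enclosing simplex and the weights its barycentric coordinates.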
The transformation matrices applied to a vertex across all poses can be thought of as a point in high dimensions. We cast the inverse LBS problem as one of finding a tight-fitting simplex around these points (a well-studied problem in hyperspectral imaging). Although we do not observe transformation matrices directly, the 3D position of a vertex across all of its poses defines an affine subspace, or flat. We solve a 'closest flat' optimization problem to find points on these flats, and then compute a minimum-volume enclosing simplex whose vertices are the transformation matrices and whose barycentric coordinates are the skinning weights. We are able to create LBS rigs with state-of-the-art reconstruction error and state-of-the-art compression ratios for mesh animation sequences. Our solution does not consider weight sparsity or the rigidity of the recovered transformations. We include observations and insights into the closest flat problem; its ideal solution and the optimal LBS reconstruction error remain open problems.

Item
Multi‐Level Memory Structures for Simulating and Rendering Smoothed Particle Hydrodynamics
(© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Winchenbach, R.; Kolb, A.; Benes, Bedrich and Hauser, Helwig
In this paper, we present a novel hash-map-based sparse data structure for Smoothed Particle Hydrodynamics, which allows for efficient neighbourhood queries in spatially adaptive simulations as well as direct ray tracing of fluid surfaces. Neighbourhood queries for adaptive simulations are improved by using multiple independent data structures that share the same underlying self-similar particle ordering, significantly reducing non-neighbourhood particle accesses. Direct ray tracing is performed using an auxiliary data structure with constant memory consumption, which allows for efficient traversal of the hash-map-based data structure as well as efficient intersection tests.
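The core idea of a hash-map-based neighbourhood query can be sketched as follows: particles are hashed into cells of side equal to the support radius, so a query only visits the 27 surrounding cells. This is a simplified single-resolution sketch with illustrative names, not the paper's adaptive multi-level, self-similar-ordering structure.

```python
import math
from collections import defaultdict

def build_hash_grid(points, h):
    # Hash each particle index into the cell of side h containing it.
    grid = defaultdict(list)
    for idx, (x, y, z) in enumerate(points):
        cell = (math.floor(x / h), math.floor(y / h), math.floor(z / h))
        grid[cell].append(idx)
    return grid

def neighbours(points, grid, i, h):
    # Candidates come from the 27 cells around particle i; keep those
    # within the support radius h.
    x, y, z = points[i]
    cx, cy, cz = math.floor(x / h), math.floor(y / h), math.floor(z / h)
    result = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for j in grid.get((cx + dx, cy + dy, cz + dz), []):
                    d2 = (points[j][0] - x) ** 2 + (points[j][1] - y) ** 2 + (points[j][2] - z) ** 2
                    if j != i and d2 <= h * h:
                        result.append(j)
    return result
```

A spatially adaptive simulation, as in the paper, additionally has to cope with particles of different support radii, which is where multiple data structures over one shared particle ordering come in.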
Overall, our proposed method significantly improves the performance of spatially adaptive fluid simulations and allows for direct ray tracing of the fluid surface with little memory overhead.

Item
Real‐Time Deformation with Coupled Cages and Skeletons
(© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Corda, F.; Thiery, J. M.; Livesu, M.; Puppo, E.; Boubekeur, T.; Scateni, R.; Benes, Bedrich and Hauser, Helwig
Skeleton-based and cage-based deformation techniques are the two most popular approaches for controlling real-time deformations of digital shapes and are, to a large extent, complementary to one another. Despite their complementary roles, high-end modelling packages do not allow seamless integration of such control structures, placing a considerable burden on the user to keep them synchronized. In this paper, we propose a framework that seamlessly combines rigging skeletons and deformation cages, providing artists with a real-time deformation system that operates using any smooth combination of the two approaches. By coupling the deformation spaces of cages and skeletons, we access a much larger space, containing poses that are impossible to obtain by acting solely on a skeleton or a cage. Our method is oblivious to the specific techniques used to perform skinning and cage-based deformation, making it compatible with pre-existing tools. We demonstrate the usefulness of our hybrid approach on a variety of examples.

Item
Temporal Upsampling of Point Cloud Sequences by Optimal Transport for Plant Growth Visualization
(© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Golla, Tim; Kneiphof, Tom; Kuhlmann, Heiner; Weinmann, Michael; Klein, Reinhard; Benes, Bedrich and Hauser, Helwig
Plant growth visualization from a series of 3D scanner measurements is a challenging task.
Time intervals between successive measurements are typically too large to allow a smooth animation of the growth process. Obtaining a smooth animation of plant growth therefore requires a temporal upsampling of the point cloud sequence, approximating the intermediate states between successive measurements. In addition, sudden structural changes arise from the appearance of new plant parts such as branches or leaves. We present a novel method that addresses these challenges via semantic segmentation and the generation of a segment hierarchy per scan, the matching of the hierarchical representations of successive scans, and the segment-wise computation of optimal transport. The solutions of the transport problems yield the information required for a realistic temporal upsampling, which is generated in real time. Our method thus requires no shape templates, precomputed correspondences or large databases of examples. Newly grown and decayed parts of the plant are detected as unmatched segments and are handled by identifying corresponding bifurcation points and introducing virtual segments in the previous or successive time step, respectively. Our method allows the generation of realistic upsampled growth animations with moderate computational effort.
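The basic mechanism of transport-based temporal upsampling can be sketched as follows: match the points of two successive scans by a minimum-cost assignment (discrete optimal transport with uniform masses), then linearly interpolate each matched pair. The function name and the brute-force matcher are illustrative assumptions; this ignores the paper's semantic segmentation, segment hierarchy and handling of unmatched parts.

```python
import itertools
import numpy as np

def interpolate_scans(points_a, points_b, t):
    # Brute-force minimum-cost assignment between two equally sized scans
    # (fine for tiny examples; real code would use a proper OT/assignment
    # solver), followed by displacement interpolation at time t in [0, 1].
    n = len(points_a)
    cost = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=2)
    best = min(itertools.permutations(range(n)),
               key=lambda perm: sum(cost[i, perm[i]] for i in range(n)))
    matched_b = points_b[list(best)]
    return (1.0 - t) * points_a + t * matched_b
```

For realistic point counts the exhaustive search over permutations is infeasible; `scipy.optimize.linear_sum_assignment` solves the same assignment problem in polynomial time, and the paper computes the transport segment-wise rather than over whole scans.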