Volume 39 (2020)
Browsing Volume 39 (2020) by Subject "Animation"
Now showing 1 - 12 of 12
Item: Distant Collision Response in Rigid Body Simulations
(The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Coevoet, Eulalie; Andrews, Sheldon; Relles, Denali; Kry, Paul G.
Editors: Bender, Jan and Popa, Tiberiu
Abstract: We use a finite element model to predict the vibration response of objects in a rigid body simulation, such that rigid objects are augmented to provide a plausible elastic collision response between distant objects due to vibration. We start with a generalized eigenvalue decomposition of the elastic model to precompute the response to an impact at any point on an elastic object with fixed boundary conditions. Then, given a collision between objects, we generate an approximate response impulse to distribute to other objects already in contact with the colliding bodies. This can lead to distant impacts causing an object to slip, or a delicate stack of objects to fall. We also use a geodesic-distance-based spatial attenuation approximation for travelling waves in objects, so that an impact at one contact produces impulses at other locations. This response ultimately allows a long-distance relationship between contacts, both across a single object being struck and across the contact graph of a larger collection of objects. We qualitatively validate our approach against a ground-truth simulation, and demonstrate a number of scenarios where a long-distance relationship between contacts is valuable.

Item: Expression Packing: As-Few-As-Possible Training Expressions for Blendshape Transfer
(The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Carrigan, Emma; Zell, Eduard; Guiard, Cedric; McDonnell, Rachel
Editors: Panozzo, Daniele and Assarsson, Ulf
Abstract: To simplify and accelerate the creation of blendshape rigs, using a template rig is a common procedure, especially during the creation of digital doubles. Blendshape transfer methods provide copy-and-paste functionality for the blendshapes of the template model on the digital double. However, for adequate personalization, such methods require a set of scanned training expressions of the original actor. So far, the semantics of the facial expressions to scan have been defined manually. In contrast, we formulate the semantics of the facial expressions as an integer optimization over the blendshape weights. By combining different blendshapes of the template model, our method creates facial expressions that serve as semantic references during scanning. Our method is guaranteed to compute as few training expressions as possible, with minimal overlap of activated blendshapes. If the number of training expressions is limited, blendshapes are selected based on their power to personalize the resulting blendshapes compared to generic blendshape transfer methods.

Item: Fast Nonlinear Least Squares Optimization of Large-Scale Semi-Sparse Problems
(The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Fratarcangeli, Marco; Bradley, Derek; Gruber, Aurel; Zoss, Gaspard; Beeler, Thabo
Editors: Panozzo, Daniele and Assarsson, Ulf
Abstract: Many problems in computer graphics and vision can be formulated as a nonlinear least squares optimization problem, for which numerous off-the-shelf solvers are readily available. Depending on the structure of the problem, however, existing solvers may be more or less suitable, and in some cases the solution comes at the cost of lengthy convergence times. One such case is semi-sparse optimization problems, emerging for example in localized facial performance reconstruction, where the nonlinear least squares problem can be composed of hundreds of thousands of cost functions, each one involving many of the optimization parameters. While such problems can be solved with existing solvers, the computation time can severely hinder the applicability of these methods. We introduce a novel iterative solver for nonlinear least squares optimization of large-scale semi-sparse problems. We use the nonlinear Levenberg-Marquardt method to locally linearize the problem in parallel, based on its first-order approximation. Then, we decompose the linear problem into small blocks, using the local Schur complement, leading to a more compact linear system without loss of information. The resulting system is dense, but its size is small enough to be solved using a parallel direct method in a short amount of time. The main benefit of this approach is that the overall optimization process is entirely parallel and scalable, making it suitable to be mapped onto graphics hardware (GPU). Using our minimizer, results are obtained up to one order of magnitude faster than with other existing solvers, without sacrificing the generality and the accuracy of the model. We provide a detailed analysis of our approach and validate our results with the application of performance-based facial capture using a recently proposed anatomical local face deformation model.
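The block condensation described in this abstract is easy to illustrate. The following is a minimal sketch, not the authors' implementation: it assumes the unknowns split into one set of shared parameters plus independent local blocks, with dense per-block Jacobians, eliminates each local block through its local Schur complement, and solves the resulting small dense system for the shared update before back-substituting the local ones (a step that is trivially parallel).

```python
import numpy as np

def lm_step_with_schur(J_g, J_loc, r, lam):
    """One Levenberg-Marquardt step. J_g: (m, n_g) Jacobian w.r.t. shared
    parameters; J_loc: list of (m, n_i) Jacobians w.r.t. each local block;
    r: (m,) residuals; lam: LM damping factor."""
    n_g = J_g.shape[1]
    A = J_g.T @ J_g + lam * np.eye(n_g)                  # global-global block
    b = -J_g.T @ r                                       # global right-hand side
    blocks = []
    for J_i in J_loc:
        D_i = J_i.T @ J_i + lam * np.eye(J_i.shape[1])   # local-local block
        B_i = J_g.T @ J_i                                # global-local coupling
        D_inv = np.linalg.inv(D_i)
        A -= B_i @ D_inv @ B_i.T                         # fold in local Schur complement
        b -= B_i @ D_inv @ (-J_i.T @ r)
        blocks.append((B_i, D_inv, -J_i.T @ r))
    dg = np.linalg.solve(A, b)                           # small dense system for globals
    # Back-substitute each local block independently (parallelizable).
    d_loc = [D_inv @ (b_i - B_i.T @ dg) for B_i, D_inv, b_i in blocks]
    return dg, d_loc
```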
Item: Fracture Patterns Design for Anisotropic Models with the Material Point Method
(The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Cao, Wei; Lyu, Luan; Ren, Xiaohua; Zhang, Bob; Yang, Zhixin; Wu, Enhua
Editors: Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
Abstract: Physically plausible fracture animation is a challenging topic in computer graphics. Most existing approaches focus on the fracture of isotropic materials. We propose a frame-field method for the design of anisotropic brittle fracture patterns. In this case, the material anisotropy is determined by two parts: anisotropic elastic deformation and anisotropic damage mechanics. For the elastic deformation, we reformulate the constitutive model of hyperelastic materials to achieve anisotropy by adding additional energy density functions in particular directions. For the damage evolution, we propose an improved phase-field fracture method to simulate the anisotropy by designing a deformation-aware second-order structural tensor. These two parts can express elastic anisotropy and fracture anisotropy independently, or they can be coupled together to exhibit rich crack effects. To ensure the flexibility of the simulation, we further introduce a frame-field concept to assist in setting the local anisotropy, similar to the fiber orientation of textiles. For the discretization of the deformable object, we adopt a novel Material Point Method (MPM) scheme, owing to its fracture-friendly nature. We also give some design criteria for anisotropic models through comparative analysis. Experiments show that our anisotropic method integrates well with the MPM scheme for simulating the dynamic fracture behavior of anisotropic materials.
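The "additional energy density functions in particular directions" mentioned above can be made concrete with a standard transversely isotropic penalty term; the form below is an assumed textbook choice, not the paper's exact energy, and the phase-field damage coupling is not reproduced. The frame field would supply the direction a at each material point.

```python
import numpy as np

def anisotropic_energy_term(F, a, kappa):
    """F: 3x3 deformation gradient; a: unit frame direction; kappa: stiffness.
    Returns a penalty on stretch along a, added to an isotropic base energy."""
    C = F.T @ F                   # right Cauchy-Green deformation tensor
    stretch = np.sqrt(a @ C @ a)  # stretch of the material along direction a
    return kappa * (stretch - 1.0) ** 2
```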
Item: Fully Convolutional Graph Neural Networks for Parametric Virtual Try-On
(The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Vidaurre, Raquel; Santesteban, Igor; Garces, Elena; Casas, Dan
Editors: Bender, Jan and Popa, Tiberiu
Abstract: We present a learning-based approach for virtual try-on applications based on a fully convolutional graph neural network. In contrast to existing data-driven models, which are trained for a specific garment or mesh topology, our fully convolutional model can cope with a large family of garments, represented as parametric predefined 2D panels with arbitrary mesh topology, including long dresses, shirts, and tight tops. Under the hood, our novel geometric deep learning approach learns to drape 3D garments by decoupling the three sources of deformation that condition the fit of clothing: garment type, target body shape, and material. Specifically, we first learn a regressor that predicts the 3D drape of the input parametric garment when worn by a mean body shape. Then, after a mesh topology optimization step in which we generate a sufficient level of detail for the input garment type, we further deform the mesh to reproduce deformations caused by the target body shape. Finally, we predict fine-scale details, such as wrinkles, that depend mostly on the garment material. We qualitatively and quantitatively demonstrate that our fully convolutional approach outperforms existing methods in terms of generalization capabilities and memory requirements, and therefore it opens the door to more general learning-based models for virtual try-on applications.

Item: Linear Time Stable PD Controllers for Physics-based Character Animation
(The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Yin, Zhiqi; Yin, KangKang
Editors: Bender, Jan and Popa, Tiberiu
Abstract: In physics-based character animation, Proportional-Derivative (PD) controllers are commonly used for tracking reference motions in motor control tasks. Stable PD (SPD) controllers significantly improve the numerical stability of traditional PD controllers and support large gains and large integration time steps during simulation [TLT11]. For an articulated rigid body system with n degrees of freedom, however, all SPD implementations to date use an O(n³) method based on dense matrix factorization. In this paper, we propose a linear-time algorithm for SPD computation, based on Featherstone's forward dynamics formulation for articulated rigid body systems in generalized coordinates [Fea14]. We demonstrate the performance advantage of our algorithm by comparing it with both the conventional dense-factorization method and an alternative sparse-factorization method. We show that the proposed algorithm provides superior stability when controlling complex models at large time steps. We further demonstrate that our algorithm can improve the learning speed and quality of a Deep Reinforcement Learning (DRL) system for physics-based character animation.
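For reference, here is a dense sketch of the Stable PD computation in the sense of [TLT11], assuming generalized coordinates, diagonal gain matrices and a given joint-space mass matrix M. The np.linalg.solve call is exactly the O(n³) dense step that the paper's linear-time, Featherstone-based algorithm replaces.

```python
import numpy as np

def stable_pd_torque(q, dq, q_bar, Kp, Kd, M, f_other, dt):
    """q, dq: current pose and velocity; q_bar: reference pose at the next
    time step; Kp, Kd: diagonal gain matrices; M: joint-space mass matrix;
    f_other: remaining generalized forces (gravity, Coriolis, contacts)."""
    # SPD evaluates the PD force at the *next* time step (linearized):
    #   tau = -Kp (q + dt*dq - q_bar) - Kd (dq + dt*ddq)
    # Substituting into M ddq = tau + f_other and solving for ddq first:
    rhs = -Kp @ (q + dt * dq - q_bar) - Kd @ dq + f_other
    ddq = np.linalg.solve(M + dt * Kd, rhs)   # the dense O(n^3) factorization
    return -Kp @ (q + dt * dq - q_bar) - Kd @ (dq + dt * ddq)
```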
Item: A Pixel-Based Framework for Data-Driven Clothing
(The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Jin, Ning; Zhu, Yilin; Geng, Zhenglin; Fedkiw, Ron
Editors: Bender, Jan and Popa, Tiberiu
Abstract: We propose a novel approach to learning cloth deformation as a function of body pose, recasting the graph-like triangle mesh data structure into image-based data in order to leverage popular and well-developed convolutional neural networks (CNNs) in a two-dimensional Euclidean domain. A three-dimensional animation of clothing is then equivalent to a sequence of two-dimensional RGB images driven/choreographed by time-dependent joint angles. In order to reduce the nonlinearity demands on the neural network, we utilize procedural skinning of the body surface to capture much of the rotation/deformation, so that the RGB images only contain textures of displacement offsets from skin to clothing. Notably, we illustrate that our approach does not require accurate unclothed body shapes or robust skinning techniques. Additionally, we discuss how standard image-based techniques, such as image partitioning for higher resolution, can readily be incorporated into our framework.

Item: Probabilistic Character Motion Synthesis using a Hierarchical Deep Latent Variable Model
(The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Ghorbani, Saeed; Wloka, Calden; Etemad, Ali; Brubaker, Marcus A.; Troje, Nikolaus F.
Editors: Bender, Jan and Popa, Tiberiu
Abstract: We present a probabilistic framework to generate character animations based on weak control signals, such that the synthesized motions are realistic while retaining the stochastic nature of human movement. The proposed architecture, designed as a hierarchical recurrent model, maps each sub-sequence of motions into a stochastic latent code using a variational autoencoder extended over the temporal domain. We also propose an objective function that respects the impact of each joint on the pose and compares joint angles based on angular distance. We use two novel quantitative protocols and human qualitative assessment to demonstrate the ability of our model to generate convincing and diverse periodic and non-periodic motion sequences without the need for strong control signals.

Item: Realistic Buoyancy Model for Real-Time Applications
(© 2020 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Authors: Bajo, J. M.; Patow, G.; Delrieux, C. A.
Editors: Benes, Bedrich and Hauser, Helwig
Abstract: Following Archimedes' principle, any object immersed in a fluid is subject to an upward buoyancy force equal to the weight of the fluid displaced by the object. This simple description is the origin of a set of effects that are ubiquitous in nature and are becoming commonplace in games, simulators and interactive animations. Although there are solutions to the fluid-to-solid coupling problem in some particular cases, to the best of our knowledge, comprehensive and accurate computational buoyancy models adequate for general contexts are still lacking. We propose a real-time Graphics Processing Unit (GPU)-based algorithm for realistic computation of the fluid-to-solid coupling problem that handles a wide variety of cases (solid or hollow objects, with permeable or leak-proof surfaces, and with variable masses). The method incorporates the behaviour of the fluid into which the object is immersed, and decouples the computation of the physical parameters involved in the buoyancy force of the empty object from the mass of contained liquid. The dynamics of this mass of liquid are also computed, such that the relation between the centre of mass of the object and the buoyancy force may vary, leading to complex, realistic behaviours such as those arising, for instance, with a sinking boat.
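The principle this abstract starts from reduces to a one-liner, stated in code below as a back-of-the-envelope sketch. The paper's contribution, computing the displaced volume, contained liquid and moving centre of mass on the GPU in real time, is not shown; here the submerged volume is simply given.

```python
RHO_WATER = 1000.0  # kg/m^3
G = 9.81            # m/s^2

def buoyancy_force(submerged_volume, rho_fluid=RHO_WATER):
    """Upward force (N) on a body displacing `submerged_volume` m^3 of fluid."""
    return rho_fluid * submerged_volume * G

# A hull displacing 0.2 m^3 of water experiences ~1962 N of lift:
print(buoyancy_force(0.2))
```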
Item: SoftSMPL: Data-driven Modeling of Nonlinear Soft-tissue Dynamics for Parametric Humans
(The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Santesteban, Igor; Garces, Elena; Otaduy, Miguel A.; Casas, Dan
Editors: Panozzo, Daniele and Assarsson, Ulf
Abstract: We present SoftSMPL, a learning-based method to model realistic soft-tissue dynamics as a function of body shape and motion. Datasets for learning such a task are scarce and expensive to generate, which makes trained models prone to overfitting. At the core of our method are three key contributions that enable us to model highly realistic dynamics, with better generalization capabilities than state-of-the-art methods, while training on the same data: first, a novel motion descriptor that disentangles the standard pose representation by removing subject-specific features; second, a neural-network-based recurrent regressor that generalizes to unseen shapes and motions; and third, a highly efficient nonlinear deformation subspace capable of representing soft-tissue deformations of arbitrary shapes. We demonstrate qualitative and quantitative improvements over existing methods and, additionally, we show the robustness of our method on a variety of motion capture databases.

Item: Style-Controllable Speech-Driven Gesture Synthesis Using Normalising Flows
(The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Alexanderson, Simon; Henter, Gustav Eje; Kucherenko, Taras; Beskow, Jonas
Editors: Panozzo, Daniele and Assarsson, Ulf
Abstract: Automatic synthesis of realistic gestures promises to transform the fields of animation, avatars and communicative agents. In off-line applications, novel tools can alter the role of an animator to that of a director, who provides only high-level input for the desired animation; a learned network then translates these instructions into an appropriate sequence of body poses. In interactive scenarios, systems for generating natural animations on the fly are key to achieving believable and relatable characters. In this paper we address some of the core issues towards these ends. By adapting a deep learning-based motion synthesis method called MoGlow, we propose a new generative model for generating state-of-the-art realistic speech-driven gesticulation. Owing to the probabilistic nature of the approach, our model can produce a battery of different, yet plausible, gestures given the same input speech signal. Just as with humans, this gives a rich natural variation of motion. We additionally demonstrate the ability to exert directorial control over the output style, such as gesture level, speed, symmetry and spatial extent. Such control can be leveraged to convey a desired character personality or mood. We achieve all this without any manual annotation of the data. User studies evaluating upper-body gesticulation confirm that the generated motions are natural and match the input speech well. Our method scores above all prior systems and baselines on these measures, and comes close to the ratings of the original recorded motions. We furthermore find that we can accurately control gesticulation styles without unnecessarily compromising perceived naturalness. Finally, we also demonstrate an application of the same method to full-body gesticulation, including the synthesis of stepping motion and stance.
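The invertible building block behind normalising flows such as MoGlow is the coupling layer. Below is a minimal, unconditional sketch: the real model conditions the subnetwork on speech features, control inputs and motion history and stacks many such layers, whereas `net` here is a placeholder for any function returning a scale and shift.

```python
import numpy as np

def coupling_forward(x, net):
    """One affine coupling layer: transform half of x conditioned on the
    other half; invertible by construction."""
    x1, x2 = np.split(x, 2)
    log_s, t = net(x1)                    # scale/shift predicted from x1
    y2 = x2 * np.exp(log_s) + t
    return np.concatenate([x1, y2]), np.sum(log_s)  # output, log|det J|

def coupling_inverse(y, net):
    """Exact inverse, used when sampling poses from Gaussian noise."""
    y1, y2 = np.split(y, 2)
    log_s, t = net(y1)
    return np.concatenate([y1, (y2 - t) * np.exp(-log_s)])
```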
Item: Synthesizing Character Animation with Smoothly Decomposed Motion Layers
(© 2020 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Authors: Eom, Haegwang; Choi, Byungkuk; Cho, Kyungmin; Jung, Sunjin; Hong, Seokpyo; Noh, Junyong
Editors: Benes, Bedrich and Hauser, Helwig
Abstract: The processing of captured motion is an essential task in the synthesis of high-quality character animation. The motion decomposition techniques investigated in prior work extract meaningful motion primitives that help to facilitate this process. Carefully selected motion primitives can play a major role in various motion-synthesis tasks, such as interpolation, blending, warping, editing or the generation of new motions. Unfortunately, for a complex character motion, finding generic motion primitives by decomposition is an intractable problem due to the compound nature of the behaviours of such characters. Additionally, decomposed motion primitives tend to be too limited for the chosen model to cover a broad range of motion-synthesis tasks. To address these challenges, we propose a generative motion decomposition framework in which the decomposed motion primitives are applicable to a wide range of motion-synthesis tasks. Technically, the input motion is smoothly decomposed into three motion layers: a base-level motion, a layer of controllable motion displacements, and a layer of high-frequency residuals. The final motion can then be synthesized simply by changing a single user parameter linked to the layer of controllable motion displacements, or by imposing suitable temporal correspondences on the decomposition framework. Our experiments show that this decomposition provides a great deal of flexibility in several motion-synthesis scenarios: denoising, style modulation, upsampling and time warping.
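As a rough intuition for the three-layer split, the sketch below substitutes fixed Gaussian smoothing with two arbitrary widths for the paper's learned, generative decomposition. The layers sum back exactly to the input, and scaling the middle layer gives the single-parameter resynthesis described above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def decompose(motion, wide=15.0, narrow=3.0):
    """motion: (frames, dofs) joint curves -> (base, displacement, residual)."""
    base = gaussian_filter1d(motion, sigma=wide, axis=0)   # base-level motion
    mid = gaussian_filter1d(motion, sigma=narrow, axis=0)
    return base, mid - base, motion - mid                  # layers sum to input

def resynthesize(base, displacement, residual, gain=1.0):
    """A single user parameter scales the controllable displacement layer."""
    return base + gain * displacement + residual
```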