39-Issue 1
Browsing 39-Issue 1 by Subject "animation"
Now showing 1 - 7 of 7
Item: Accelerating Distributed Graphical Fluid Simulations with Micro-partitioning
© 2020 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd
Authors: Qu, Hang; Mashayekhi, Omid; Shah, Chinmayee; Levis, Philip. Editors: Benes, Bedrich; Hauser, Helwig.
Graphical fluid simulations are CPU-bound. Parallelizing simulations across hundreds of cores in the computing cloud would make them faster, but requires evenly balancing the load across nodes. Good load balancing depends either on manual decisions from experts, which are time-consuming and error-prone, or on dynamic approaches that estimate and react to future load, which are non-deterministic and hard to debug. This paper proposes Birdshot scheduling, an automatic and purely static load-balancing algorithm whose performance is close to that of expert decisions and reactive algorithms, without their difficulty or complexity. Birdshot scheduling's key insight is to leverage the high-latency, high-throughput, full-bisection-bandwidth network connecting cloud computing nodes. It splits the simulation domain into many micro-partitions and statically assigns them to nodes at random. Analytical results show that randomly assigned micro-partitions balance load with high probability. The high-throughput network easily handles the increased data transfers caused by micro-partitioning, full bisection bandwidth allows random placement with no performance penalty, and overlapping the communication and computation of different micro-partitions masks latency. Experiments with particle level set, SPH, FLIP and explicit Eulerian methods show that Birdshot scheduling speeds up simulations by a factor of 2-3 and can outperform reactive scheduling algorithms. Birdshot scheduling performs within 21% of state-of-the-art dynamic methods that require running a second, parallel simulation. Unlike speculative algorithms, Birdshot scheduling is purely static: it requires no controller, no runtime data collection, no partition migration, and no support for these operations from the programmer.
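As a rough illustration of the scheduling idea described in this abstract, the Python sketch below assigns micro-partitions to nodes uniformly at random and measures the resulting load spread; the function names and the per-partition cost model are illustrative, not taken from the paper.

```python
import random
from collections import defaultdict

def birdshot_assign(num_partitions, num_nodes, cost, seed=0):
    """Statically assign micro-partitions to nodes uniformly at random.

    With many small partitions per node, the per-node totals concentrate
    around the mean load with high probability, which is the property
    the paper's analysis relies on.
    """
    rng = random.Random(seed)
    assignment = {p: rng.randrange(num_nodes) for p in range(num_partitions)}
    load = defaultdict(float)
    for p, node in assignment.items():
        load[node] += cost(p)  # accumulate estimated work per node
    return assignment, dict(load)

# Illustrative check: 4096 micro-partitions over 128 nodes stay close to
# the mean load even with heterogeneous per-partition costs.
_, load = birdshot_assign(4096, 128, cost=lambda p: 1.0 + 0.1 * (p % 7))
mean = sum(load.values()) / len(load)
print(max(load.values()) / mean)  # imbalance factor, typically near 1
```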
Item: Detection and Synthesis of Full-Body Environment Interactions for Virtual Humans
© 2020 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd
Authors: Juarez-Perez, A.; Kallmann, M. Editors: Benes, Bedrich; Hauser, Helwig.
We present a new methodology for enabling virtual humans to autonomously detect and perform complex full-body interactions with their environments. Given a parameterized walking controller and a set of motion-captured example interactions, our method is able to detect when interactions can occur and to coordinate the detected upper-body interaction with the walking controller in order to achieve full-body mobile interactions in similar situations. Our approach is based on learning spatial coordination features from the example motions and on associating body-environment proximity information with the body configurations of each performed action. Body configurations become the input to a regression system, which in turn generates new interactions for different situations in similar environments. The regression model is capable of selecting, encoding and replicating key spatial strategies with respect to body coordination and the management of environment constraints, as well as determining the correct moment in time and space for starting an interaction. As a result, we obtain an interactive controller able to detect and synthesize coordinated full-body motions for a variety of complex interactions requiring body mobility. Our results achieve complex interactions, such as opening doors and drawing on a wide whiteboard. The presented approach introduces the concept of learning interaction coordination models that can be applied on top of any given walking controller. The resulting method is simple and flexible, handles the detection of possible interactions, and is suitable for real-time applications.

Item: The Matchstick Model for Anisotropic Friction Cones
© 2020 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd
Authors: Erleben, K.; Macklin, M.; Andrews, S.; Kry, P. G. Editors: Benes, Bedrich; Hauser, Helwig.
Inspired by the frictional behaviour observed when sliding matchsticks against one another at different angles, we propose a phenomenological anisotropic friction model for structured surfaces. Our model interpolates isotropic and anisotropic elliptical Coulomb friction parameters for a pair of surfaces with perpendicular and parallel structure directions (e.g. the wood grain direction). We view our model as a special case of an abstract friction model that produces a cone based on state information, specifically the relationship between structure directions. We show how our model can be integrated into LCP- and NCP-based simulators using different solvers with both explicit and fully implicit time integration. The focus of our work is on symmetric friction cones, and we therefore demonstrate a variety of simulation scenarios where the friction structure directions play an important part in the resulting motions. Authoring friction with our model is intuitive, and we demonstrate that it is compatible with standard authoring practices, such as texture mapping.
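A minimal sketch of how such a state-dependent friction cone could be parameterized, assuming the ellipse semi-axes are blended by the angle between the two surfaces' structure directions; the linear blend in |cos(theta)| is an illustrative choice, not the paper's exact interpolation rule.

```python
import numpy as np

def matchstick_friction(mu_parallel, mu_perpendicular, u, v):
    """Interpolate elliptical Coulomb friction coefficients from the
    angle between the two surfaces' structure directions u and v
    (assumed to lie in the contact tangent plane).

    mu_parallel, mu_perpendicular: (mu_x, mu_y) ellipse semi-axes for
    aligned and perpendicular grain directions.
    """
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    t = abs(np.dot(u, v))  # 1 when parallel, 0 when perpendicular
    mu = t * np.asarray(mu_parallel) + (1.0 - t) * np.asarray(mu_perpendicular)
    return tuple(mu)       # semi-axes of the friction cone's ellipse

# Example: wood-on-wood contact with grain directions at 45 degrees.
print(matchstick_friction((0.3, 0.5), (0.45, 0.45),
                          np.array([1.0, 0.0]), np.array([1.0, 1.0])))
```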
Item: Muscle and Fascia Simulation with Extended Position Based Dynamics
© 2020 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd
Authors: Romeo, M.; Monteagudo, C.; Sánchez-Quirós, D. Editors: Benes, Bedrich; Hauser, Helwig.
Recent research on muscle and fascia simulation for visual effects relies on numerical methods such as the finite element method or the finite volume method. These approaches produce realistic results, but require high computational time and are complex to set up. Position-based dynamics, on the other hand, offers a fast and controllable way to simulate surfaces and volumes, but there is no literature on how to implement constraints that realistically simulate muscles and fascia for digital creatures with this method. In this paper, we extend the current state of the art in position-based dynamics to efficiently compute realistic skeletal muscle and superficial fascia simulation. In particular, we embed muscle fibres in the solver by adding an anisotropic component to the distance constraints between mesh points, and we apply overpressure to realistically model muscle volume changes under contraction. In addition, we define a modified distance constraint for the fascia that allows compression and enables the user to scale the constraint's original distance to gain elastic potential at rest. Finally, we propose a modification of the extended position-based dynamics algorithm to properly compute different sets of constraints, and we describe other details needed for the proper simulation of a character's muscle and fascia dynamics.
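The sketch below shows one solve of a standard XPBD (extended position-based dynamics) distance constraint with an added anisotropic term, in which compliance is reduced for edges aligned with a muscle fibre direction. The fibre-scaling rule and parameter names are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def xpbd_distance_step(x0, x1, w0, w1, rest, compliance, lam, dt,
                       fiber=None, anisotropy=1.0):
    """One XPBD solve of a distance constraint C = |x0 - x1| - rest.

    w0, w1 are inverse masses and lam is the accumulated Lagrange
    multiplier. If a fibre direction is given, compliance is scaled
    down for edges aligned with it (stiffer along the fibre), an
    illustrative way to add an anisotropic component.
    """
    d = x0 - x1
    length = np.linalg.norm(d)
    n = d / length  # constraint gradient direction
    if fiber is not None:
        align = abs(np.dot(n, fiber / np.linalg.norm(fiber)))
        compliance = compliance / (1.0 + anisotropy * align)
    C = length - rest
    alpha = compliance / dt**2                    # time-scaled compliance
    dlam = (-C - alpha * lam) / (w0 + w1 + alpha)  # XPBD multiplier update
    return x0 + w0 * dlam * n, x1 - w1 * dlam * n, lam + dlam
```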
Item: RAS: A Data-Driven Rigidity-Aware Skinning Model For 3D Facial Animation
© 2020 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd
Authors: Liu, S-L.; Liu, Y.; Dong, L-F.; Tong, X. Editors: Benes, Bedrich; Hauser, Helwig.
We present a novel data-driven skinning model, the rigidity-aware skinning (RAS) model, for simulating both active and passive 3D facial animation of different identities in real time. Our model builds upon a linear blend skinning (LBS) scheme, where the bone set and skinning weights are shared across diverse identities and learned from data via a sparse and localized skinning decomposition algorithm. Our model separates the animated face into the active expression and the passive deformation: the former is represented by an LBS-based multi-linear model learned from the FaceWareHouse data set, and the latter by a spatially varying as-rigid-as-possible deformation applied to the LBS-based multi-linear model, whose rigidity parameters are learned from the data by a novel rigidity estimation algorithm. Our RAS model is not only generic and expressive enough to faithfully model medium-scale facial deformation, but also compact and lightweight enough to generate vivid facial animation in real time. We validate the efficiency and effectiveness of our RAS model for real-time 3D facial animation and expression editing.

Item: Simulating the Evolution of Ancient Fortified Cities
© 2020 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd
Authors: Mas, Albert; Martin, Ignacio; Patow, Gustavo. Editors: Benes, Bedrich; Hauser, Helwig.
Ancient cities and castles are ubiquitous cultural heritage structures all over Europe, and countless digital creations (e.g. movies and games) use them for storytelling. However, they have received little or no attention in the computer graphics literature. This paper aims to close the gap between historical and geometrical modelling by presenting a framework that allows the forward and inverse design of the evolution of ancient cities (e.g. castles and walled cities) throughout history. The main component is an interactive loop that cycles over a number of years, simulating the evolution of a city. The user can define events such as battles, city growth, wall creation or expansion, or any other historical event. First, cities (or castles) and their walls are created; later, they are expanded to encompass civil or strategic facilities to protect. In our framework, battle simulations are used to detect weaknesses and strengthen them, so that the city evolves to accommodate developments in offensive weaponry. We conducted both forward and inverse design tests on three different scenarios: the city of Carcassonne (France), the city of Gerunda (Spain) and the Ciutadella in ancient Barcelona. All the results have been validated by historians, who helped fine-tune the different parameters involved in the simulations.

Item: Synthesizing Character Animation with Smoothly Decomposed Motion Layers
© 2020 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd
Authors: Eom, Haegwang; Choi, Byungkuk; Cho, Kyungmin; Jung, Sunjin; Hong, Seokpyo; Noh, Junyong. Editors: Benes, Bedrich; Hauser, Helwig.
The processing of captured motion is an essential task in the synthesis of high-quality character animation. The motion decomposition techniques investigated in prior work extract meaningful motion primitives that help to facilitate this process. Carefully selected motion primitives can play a major role in various motion-synthesis tasks, such as interpolation, blending, warping, editing or the generation of new motions. Unfortunately, for a complex character motion, finding generic motion primitives by decomposition is an intractable problem due to the compound nature of such characters' behaviours. Additionally, decomposed motion primitives tend to be too limited for the chosen model to cover a broad range of motion-synthesis tasks. To address these challenges, we propose a generative motion decomposition framework in which the decomposed motion primitives are applicable to a wide range of motion-synthesis tasks. Technically, the input motion is smoothly decomposed into three motion layers: a base-level motion, a layer of controllable motion displacements and a layer of high-frequency residuals. The final motion can be synthesized simply by changing a single user parameter linked to the layer of controllable motion displacements, or by imposing suitable temporal correspondences on the decomposition framework. Our experiments show that this decomposition provides a great deal of flexibility in several motion-synthesis scenarios: denoising, style modulation, upsampling and time warping.
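A minimal sketch of a three-layer split in the spirit of this last abstract, using two Gaussian low-pass filters as a stand-in for the paper's smooth decomposition; the filter widths and the resynthesis gain are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def decompose_motion(signal, sigma_base=30.0, sigma_detail=4.0):
    """Split a per-joint motion curve (time along axis 0) into a base
    layer, a mid-frequency displacement layer and a high-frequency
    residual, so that base + displacement + residual == signal."""
    base = gaussian_filter1d(signal, sigma_base, axis=0)
    smooth = gaussian_filter1d(signal, sigma_detail, axis=0)
    return base, smooth - base, signal - smooth

def synthesize(base, displacement, residual, gain=1.0, keep_residual=True):
    """Resynthesize the motion; gain scales the controllable
    displacement layer (gain > 1 exaggerates style), while dropping
    the residual acts as denoising."""
    return base + gain * displacement + (residual if keep_residual else 0.0)

# Usage: exaggerate the mid-frequency style of a noisy test curve.
t = np.linspace(0.0, 10.0, 500)
clip = np.sin(t) + 0.3 * np.sin(8.0 * t) + 0.02 * np.random.randn(500)
base, disp, res = decompose_motion(clip)
stylized = synthesize(base, disp, res, gain=1.5, keep_residual=False)
```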