Browsing by Author "Herrmann, Erik"
Item: Motion Data and Model Management for Applied Statistical Motion Synthesis (The Eurographics Association, 2019)
Authors: Herrmann, Erik; Du, Han; Antakli, André; Rubinstein, Dmitri; Schubotz, René; Sprenger, Janis; Hosseini, Somayeh; Cheema, Noshaba; Zinnikus, Ingo; Manns, Martin; Fischer, Klaus; Slusallek, Philipp
Editors: Agus, Marco; Corsini, Massimiliano; Pintus, Ruggero
Abstract: Machine learning based motion modelling methods such as statistical modelling require a large amount of input data. In practice, managing this data can become a problem in itself for artists who want to control the quality of the motion models. As a solution to this problem, we present a motion data and model management system and integrate it with a statistical motion modelling pipeline. The system is based on a data storage server with a REST interface that enables the efficient storage of different versions of motion data and models. The database system is combined with a motion preprocessing tool that provides functions for batch editing, retargeting and annotation of the data. For the application of the motion models in a game engine, the framework provides a stateful motion synthesis server that can load the models directly from the data storage server. Additionally, the framework makes use of a Kubernetes compute cluster to execute time-consuming processes such as the preprocessing and modelling of the data. The system is evaluated in a use case for the simulation of manual assembly workers.

Item: Stylistic Locomotion Modeling with Conditional Variational Autoencoder (The Eurographics Association, 2019)
Authors: Du, Han; Herrmann, Erik; Sprenger, Janis; Cheema, Noshaba; Hosseini, Somayeh; Fischer, Klaus; Slusallek, Philipp
Editors: Cignoni, Paolo; Miguel, Eder
Abstract: We propose a novel approach to create generative models for distinctive stylistic locomotion synthesis. The approach is inspired by the observation that human styles can be easily distinguished from a few examples. However, learning a generative model for natural human motions, which exhibit a large amount of variation and randomness, would require a lot of training data, and creating such a large motion database for each style would take considerable effort. We propose a generative model that combines the large variation of a neutral motion database with style information from a limited number of examples. We formulate the stylistic motion modeling task as a conditional distribution learning problem. Style transfer is implicitly applied during the model learning process. A conditional variational autoencoder (CVAE) is applied to learn the distribution, with the stylistic examples used as constraints. We demonstrate that our approach can generate any number of natural-looking human motions in a style similar to the target, given a few style examples and a neutral motion database.
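The framework in the first item is organized around a data storage server with a REST interface, plus a separate motion synthesis server. The following Python sketch illustrates what a client for such a storage server could look like; the server URL, endpoint paths, request parameters, and response schema are assumptions made for illustration and are not taken from the paper.

# Hypothetical client sketch for a REST-based motion data storage server.
# The endpoint paths, parameters, and response fields below are illustrative
# assumptions, not the interface described in the paper.
import requests

SERVER_URL = "http://localhost:8888/api"  # assumed address of the data storage server


def upload_motion_clip(collection_id, bvh_path, tags=None):
    """Upload one motion clip into a collection, with optional annotation tags."""
    with open(bvh_path, "rb") as f:
        response = requests.post(
            f"{SERVER_URL}/collections/{collection_id}/clips",
            files={"file": f},
            data={"tags": ",".join(tags or [])},
        )
    response.raise_for_status()
    return response.json()["clip_id"]  # assumed response schema


def download_model(model_id, out_path):
    """Download a trained statistical motion model by id, e.g. for the synthesis server."""
    response = requests.get(f"{SERVER_URL}/models/{model_id}")
    response.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(response.content)


if __name__ == "__main__":
    # Usage sketch; assumes a storage server is running at SERVER_URL.
    clip_id = upload_motion_clip("assembly_walk", "walk_take_01.bvh", tags=["walk", "neutral"])
    download_model("walk_model_v2", "walk_model_v2.bin")

Versioning clips and models behind one REST interface is what lets a game-engine-side synthesis server fetch models directly by id, as the abstract describes.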
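The second item learns a conditional distribution over motions with a conditional variational autoencoder. The following PyTorch sketch shows a minimal CVAE conditioned on a style label; the layer sizes, the one-hot style encoding, the per-frame pose input, and the loss weighting are illustrative assumptions rather than the architecture described in the paper.

# Minimal conditional VAE (CVAE) sketch: pose frames conditioned on a style label.
# All dimensions and design choices here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CVAE(nn.Module):
    def __init__(self, pose_dim, style_dim, latent_dim=16, hidden=128):
        super().__init__()
        # Encoder q(z | x, style): maps a pose frame plus its style label to a Gaussian.
        self.enc = nn.Sequential(nn.Linear(pose_dim + style_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        # Decoder p(x | z, style): reconstructs the pose from the latent code and style.
        self.dec = nn.Sequential(
            nn.Linear(latent_dim + style_dim, hidden), nn.ReLU(), nn.Linear(hidden, pose_dim)
        )

    def forward(self, x, style):
        h = self.enc(torch.cat([x, style], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        recon = self.dec(torch.cat([z, style], dim=-1))
        return recon, mu, logvar


def cvae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    rec = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl


# Usage sketch: sample a stylized pose by feeding a random latent code and a
# target style label to the decoder.
model = CVAE(pose_dim=63, style_dim=4)
z = torch.randn(1, 16)
style = F.one_hot(torch.tensor([2]), num_classes=4).float()
stylized_pose = model.dec(torch.cat([z, style], dim=-1))

Because the decoder is conditioned on the style label, a model trained mostly on neutral data plus a few stylistic examples can, in principle, generate arbitrarily many samples for a chosen style, which is the idea the abstract summarizes.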