Browsing by Author "Sprenger, Janis"
Now showing 1 - 3 of 3
Item: Fine-Grained Semantic Segmentation of Motion Capture Data using Dilated Temporal Fully-Convolutional Networks (The Eurographics Association, 2019)
Authors: Cheema, Noshaba; Hosseini, Somayeh; Sprenger, Janis; Herrmann, Erik; Du, Han; Fischer, Klaus; Slusallek, Philipp
Editors: Cignoni, Paolo; Miguel, Eder
Abstract: Human motion capture data has been widely used in data-driven character animation. To generate realistic, natural-looking motions, most data-driven approaches require considerable pre-processing effort, including motion segmentation and annotation. Existing (semi-)automatic solutions either require hand-crafted features for motion segmentation or do not produce the semantic annotations required for motion synthesis and for building large-scale motion databases. In addition, human-labeled annotation data suffers from inter- and intra-labeler inconsistencies by design. We propose a semi-automatic framework for semantic segmentation of motion capture data based on supervised machine learning techniques. It first transforms a motion capture sequence into a "motion image" and then applies a convolutional neural network for image segmentation. Dilated temporal convolutions enable the extraction of temporal information from a large receptive field. Our model outperforms two state-of-the-art models for action segmentation, as well as a popular network for sequence modeling. Most of all, our method is very robust under noisy and inaccurate training labels and can therefore handle human errors during the labeling process.

Item: Motion Data and Model Management for Applied Statistical Motion Synthesis (The Eurographics Association, 2019)
Authors: Herrmann, Erik; Du, Han; Antakli, André; Rubinstein, Dmitri; Schubotz, René; Sprenger, Janis; Hosseini, Somayeh; Cheema, Noshaba; Zinnikus, Ingo; Manns, Martin; Fischer, Klaus; Slusallek, Philipp
Editors: Agus, Marco; Corsini, Massimiliano; Pintus, Ruggero
Abstract: Machine-learning-based motion modelling methods such as statistical modelling require a large amount of input data. In practice, managing this data can become a problem in itself for artists who want to control the quality of the motion models. As a solution, we present a motion data and model management system and integrate it with a statistical motion modelling pipeline. The system is based on a data storage server with a REST interface that enables the efficient storage of different versions of motion data and models. The database system is combined with a motion preprocessing tool that provides functions for batch editing, retargeting, and annotation of the data. For the application of the motion models in a game engine, the framework provides a stateful motion synthesis server that can load the models directly from the data storage server. Additionally, the framework makes use of a Kubernetes compute cluster to execute time-consuming processes such as the preprocessing and modelling of the data. The system is evaluated in a use case for the simulation of manual assembly workers.

Item: Stylistic Locomotion Modeling with Conditional Variational Autoencoder (The Eurographics Association, 2019)
Authors: Du, Han; Herrmann, Erik; Sprenger, Janis; Cheema, Noshaba; Hosseini, Somayeh; Fischer, Klaus; Slusallek, Philipp
Editors: Cignoni, Paolo; Miguel, Eder
Abstract: We propose a novel approach to creating generative models for distinctive stylistic locomotion synthesis. The approach is inspired by the observation that human styles can be easily distinguished from a few examples. However, learning a generative model for natural human motions, which exhibit large amounts of variation and randomness, would require a lot of training data, and creating such a large motion database for each style would take considerable effort. We propose a generative model that combines the large variation in a neutral motion database with style information from a limited number of examples. We formulate the stylistic motion modeling task as a conditional distribution learning problem. Style transfer is implicitly applied during the model learning process. A conditional variational autoencoder (CVAE) is applied to learn the distribution, with stylistic examples used as constraints. We demonstrate that, given a few style examples and a neutral motion database, our approach can generate any number of natural-looking human motions in a style similar to the target.
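The first abstract's key mechanism, dilated temporal convolution, widens the receptive field exponentially as layers are stacked. The sketch below is illustrative only, not the authors' implementation; the kernel size and dilation schedule are assumptions chosen to show the arithmetic:

```python
# Illustrative sketch: causal 1-D dilated convolution over one motion-feature
# channel. Not the paper's code; kernel size and dilations are assumptions.

def dilated_conv1d(signal, kernel, dilation):
    """Causal 1-D convolution with the given dilation factor."""
    out = []
    for t in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kernel):
            idx = t - k * dilation  # reach back k * dilation frames
            if idx >= 0:
                acc += w * signal[idx]
        out.append(acc)
    return out

def receptive_field(kernel_size, dilations):
    """Frames visible to one output after stacking dilated layers."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Doubling dilations (1, 2, 4, 8) with kernel size 3 already cover 31 frames:
print(receptive_field(3, [1, 2, 4, 8]))  # 31
```

With a stride-1, dilation-1 stack the same depth would cover only 9 frames, which is why dilation suits long motion sequences.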
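The second abstract describes a storage server that keeps multiple versions of motion data behind a REST interface. The abstract does not specify the endpoints, so the sketch below only illustrates the underlying idea of addressing clips by name and version with an in-memory stand-in; the class and method names are hypothetical:

```python
# Hypothetical sketch of a versioned motion-clip store, illustrating the
# data-management idea from the abstract. The real system serves this data
# over a REST interface whose endpoints are not described in the abstract.

class MotionStore:
    def __init__(self):
        self._clips = {}  # clip name -> list of versions (latest last)

    def put(self, name, data):
        """Store a new version of a clip; returns its version number."""
        versions = self._clips.setdefault(name, [])
        versions.append(data)
        return len(versions) - 1

    def get(self, name, version=None):
        """Fetch a specific version, or the latest if none is given."""
        versions = self._clips[name]
        return versions[-1 if version is None else version]

store = MotionStore()
store.put("walk_cycle", {"frames": 120})
store.put("walk_cycle", {"frames": 118, "annotated": True})
print(store.get("walk_cycle"))             # latest (annotated) version
print(store.get("walk_cycle", version=0))  # original version
```

Keeping every version addressable is what lets the preprocessing tool re-edit or re-annotate data without losing the inputs that earlier motion models were trained on.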
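In the third abstract, the CVAE learns a conditional distribution, so synthesis amounts to drawing a latent vector and decoding it together with the condition (the style). A schematic sketch of that sampling step follows; the decoder is a trivial stand-in (in the paper it is a learned network), and all shapes are assumptions:

```python
import math
import random

# Schematic CVAE sampling: z ~ N(0, I), then decode(z, condition).
# The decoder below is a stand-in; a real CVAE maps (z, condition)
# through a trained network to a motion frame.

def sample_latent(dim, rng):
    """Draw z from the standard normal prior used at synthesis time."""
    return [rng.gauss(0.0, 1.0) for _ in range(dim)]

def reparameterize(mu, log_var, rng):
    """z = mu + sigma * eps: the reparameterization trick used in training."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def decode(z, style_onehot):
    """Stand-in decoder: just concatenates latent and condition."""
    return z + style_onehot

rng = random.Random(0)
z = sample_latent(4, rng)
frame = decode(z, [0, 1, 0])  # condition on the second of three styles
print(len(frame))  # 7: 4 latent dims + 3-way style code
```

Because the style enters only as the decoder's condition, the same neutral-motion latent space can be reused for every style, which is how a few stylistic examples suffice.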