39-Issue 8
Browsing 39-Issue 8 by Subject "Motion processing"
Now showing 1 - 2 of 2
Item: Intuitive Facial Animation Editing Based On A Generative RNN Framework (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Berson, Eloïse; Soladié, Catherine; Stoiber, Nicolas; Bender, Jan and Popa, Tiberiu
For decades, producing convincing facial animation has garnered great interest, which has only accelerated with the recent explosion of 3D content in both entertainment and professional activities. The use of motion capture and retargeting has arguably become the dominant solution to address this demand. Yet, despite a high level of quality and automation, performance-based animation pipelines still require manual cleaning and editing to refine raw results, which is a time- and skill-demanding process. In this paper, we look to leverage machine learning to make facial animation editing faster and more accessible to non-experts. Inspired by recent image inpainting methods, we design a generative recurrent neural network that generates realistic motion into designated segments of an existing facial animation, optionally following user-provided guiding constraints. Our system handles different supervised or unsupervised editing scenarios such as motion filling during occlusions, expression corrections, semantic content modifications, and noise filtering. We demonstrate the usability of our system on several animation editing use cases.

Item: Statistics-based Motion Synthesis for Social Conversations (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Yang, Yanzhe; Yang, Jimei; Hodgins, Jessica; Bender, Jan and Popa, Tiberiu
Plausible conversations among characters are required to generate the ambiance of social settings such as a restaurant, hotel lobby, or cocktail party. In this paper, we propose a motion synthesis technique that can rapidly generate animated motion for characters engaged in two-party conversations.
Our system synthesizes gestures and other body motions for dyadic conversations that synchronize with novel input audio clips. Human conversations feature many different forms of coordination and synchronization. For example, speakers use hand gestures to emphasize important points, and listeners often nod in agreement or acknowledgment. To achieve the desired degree of realism, our method first constructs a motion graph that preserves the statistics of a database of recorded conversations performed by a pair of actors. This graph is then used to search for a motion sequence that respects three forms of audio-motion coordination in human conversations: coordination to phonemic clause, listener response, and partner's hesitation pause. We assess the quality of the generated animations through a user study that compares them to the originally recorded motion and evaluate the effects of each type of audio-motion coordination via ablation studies.