Browsing 2021 by Subject "Facial Performance Capture"
Data-Driven Face Analysis for Performance Retargeting (ETH Zurich, 2022-05-25) Zoss, Gaspard

The democratization of digital humans in entertainment was made possible by recent advances in performance capture, rendering, and animation techniques. The human face, which is key to realism, is very complex to animate by hand, so facial performance capture is nowadays often used to acquire a starting point for the animation. Most of the time, however, captured actors are not re-rendered directly on screen; instead, their performance is retargeted to other characters or fantasy creatures. The task of retargeting facial performances raises multiple challenging questions: how does one map the performance of one actor onto another, how should the data be represented to do so optimally, and how does one maintain artistic control throughout, to cite only a few. These challenges make facial performance retargeting an active and exciting area of research.

In this dissertation, we present several contributions towards solving the retargeting problem. We first introduce a novel jaw rig, designed using ground-truth jaw motion data acquired with a novel capture method developed specifically for this task. Our jaw rig allows for direct and indirect controls while restricting the motion of the mandible to only physiologically possible poses. We use a well-known concept from dentistry, the Posselt envelope of motion, to parameterize its controls. Finally, we show how this jaw rig can be retargeted to unseen actors or creatures.

Our second contribution is a novel markerless method to accurately track the underlying jaw bone. We use our jaw motion capture method to acquire a dataset of ground-truth jaw motion and geometry, and we learn a non-linear mapping between the facial skin deformation and the motion of the underlying bone.
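Purely as an illustration of the kind of regression involved (not the method developed in the thesis), a non-linear mapping from skin-deformation features to jaw pose parameters can be sketched as a ridge regression in a lifted feature space. All names, array shapes, and the synthetic data below are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame data: X holds skin-deformation features,
# Y holds jaw pose parameters (e.g. a 6-DoF rigid transform encoding).
n_frames, n_feat, n_pose = 200, 8, 6
X = rng.normal(size=(n_frames, n_feat))
W_lin = rng.normal(size=(n_feat, n_pose))
W_quad = rng.normal(size=(n_feat, n_pose))
# Synthetic "ground truth" mapping with a mild non-linearity.
Y = X @ W_lin + 0.1 * (X**2) @ W_quad

def lift(X):
    """Polynomial feature lift: one simple way to make a linear solver
    capture a non-linear skin-to-bone relationship."""
    return np.concatenate([X, X**2, np.ones((len(X), 1))], axis=1)

# Closed-form ridge regression in the lifted feature space.
Phi = lift(X)
lam = 1e-3
W = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ Y)

pred = lift(X) @ W
err = np.abs(pred - Y).mean()
```

In practice the thesis learns the mapping from captured ground-truth jaw motion rather than synthetic data; the sketch only shows the general shape of the problem: paired observations of skin deformation and bone motion, and a regularized non-linear regressor fit between them.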
We also demonstrate how this method can be used on actors for whom no ground-truth jaw motion is acquired, outperforming currently used techniques.

In most modern performance capture methods, the captured facial geometry inevitably contains parasitic dynamic motion, which is most of the time undesired. This is especially true in the context of performance retargeting. Our third contribution aims to compute and characterize the difference between the captured dynamic facial performance and a speculative quasistatic variant of the same motion, had inertial effects been absent. We show how our method can be used to remove secondary dynamics from a captured performance and to synthesize novel dynamics given novel head motion.

Our last contribution tackles a different kind of retargeting problem: the re-aging of facial performances in image space. In contrast to existing methods, we specifically tackle the problem of high-resolution, temporally stable re-aging. We show how a synthetic dataset can be generated using a state-of-the-art generative adversarial network and used to train our re-aging network. Our method allows fine-grained continuous age control and intuitive artistic effects such as localized control. We believe the methods presented in this thesis will solve or alleviate some of the problems in modern performance retargeting and will inspire exciting future work.
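To make the quasistatic/dynamic decomposition concrete, here is a minimal illustrative stand-in that separates per-vertex trajectories into a smooth component and a residual using a moving-average filter. The thesis characterizes the quasistatic variant physically (what the motion would have been absent inertial effects), not by simple temporal filtering, so this is only a hedged sketch of the decomposition idea:

```python
import numpy as np

def split_dynamics(traj, win=9):
    """Split a vertex-trajectory array into a smooth quasistatic-like part
    and a dynamic residual via a moving average (illustrative only).

    traj: array of shape (n_frames, n_vertices, 3).
    Returns (smooth, residual) with the same shape, smooth + residual == traj.
    """
    kernel = np.ones(win) / win
    pad = win // 2
    # Edge-pad in time so the filtered signal keeps the same frame count.
    padded = np.pad(traj, ((pad, pad), (0, 0), (0, 0)), mode="edge")
    smooth = np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode="valid"), 0, padded)
    return smooth, traj - smooth

# A motionless face produces no dynamic residual.
static = np.ones((20, 5, 3))
smooth, residual = split_dynamics(static)
```

Removing secondary dynamics would correspond to keeping only the smooth part, and synthesizing novel dynamics to adding back a residual driven by new head motion; the actual computation in the thesis is model-based rather than this filter.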