Monocular Facial Performance Capture Via Deep Expression Matching
dc.contributor.author | Bailey, Stephen W. | en_US |
dc.contributor.author | Riviere, Jérémy | en_US |
dc.contributor.author | Mikkelsen, Morten | en_US |
dc.contributor.author | O'Brien, James F. | en_US |
dc.contributor.editor | Dominik L. Michels | en_US |
dc.contributor.editor | Soeren Pirk | en_US |
dc.date.accessioned | 2022-08-10T15:19:53Z | |
dc.date.available | 2022-08-10T15:19:53Z | |
dc.date.issued | 2022 | |
dc.description.abstract | Facial performance capture is the process of automatically animating a digital face according to a captured performance of an actor. Recent developments in this area have focused on high-quality results using expensive head-scanning equipment and camera rigs. These methods produce impressive animations that accurately capture subtle details in an actor's performance. However, these methods are accessible only to content creators with relatively large budgets. Current methods using inexpensive recording equipment generally produce lower-quality output that is unsuitable for many applications. In this paper, we present a facial performance capture method that does not require facial scans and instead animates an artist-created model using standard blendshapes. Furthermore, our method gives artists high-level control over animations through a workflow similar to existing commercial solutions. Given a recording, our approach matches keyframes of the video with corresponding expressions from a library of animated poses. A Gaussian process model then computes the full animation by interpolating from the set of matched keyframes. Our expression-matching method computes a low-dimensional latent code from an image that represents a facial expression while factoring out the facial identity. Images depicting similar facial expressions are identified by their proximity in the latent space. In our results, we demonstrate the fidelity of our expression-matching method. We also compare animations generated with our approach to animations generated with commercially available software. | en_US |
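The abstract outlines a three-step pipeline: encode video frames to identity-factored latent expression codes, match keyframes to the nearest pose in an artist-animated library, and interpolate blendshape weights over time with a Gaussian process. The sketch below illustrates that pipeline under stated assumptions; every name in it (encode_expression, match_keyframe, gp_interpolate, the RBF kernel choice, and the length-scale/noise parameters) is a hypothetical stand-in, not the authors' published implementation, whose encoder is a trained deep network and whose GP model may differ.

```python
# Minimal, hypothetical sketch of the abstract's pipeline; all names and
# kernel choices are illustrative assumptions, not the paper's actual API.
import numpy as np


def encode_expression(image: np.ndarray) -> np.ndarray:
    """Map a face image to a low-dimensional latent code representing the
    expression while factoring out identity. Stand-in for the paper's
    trained deep encoder."""
    raise NotImplementedError("replace with a trained expression encoder")


def match_keyframe(code: np.ndarray,
                   library_codes: np.ndarray,
                   library_weights: np.ndarray) -> np.ndarray:
    """Return the blendshape weights of the library pose whose latent code
    is closest (Euclidean distance) to the query code, i.e. expression
    similarity measured by proximity in the latent space."""
    dists = np.linalg.norm(library_codes - code, axis=1)
    return library_weights[np.argmin(dists)]


def gp_interpolate(key_times: np.ndarray,
                   key_weights: np.ndarray,
                   query_times: np.ndarray,
                   length_scale: float = 5.0,
                   noise: float = 1e-4) -> np.ndarray:
    """Gaussian process regression (assumed RBF kernel) from the matched
    keyframes to every frame. key_weights has shape
    (n_keys, n_blendshapes); the GP posterior mean is returned per frame."""
    def rbf(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / length_scale) ** 2)

    K = rbf(key_times, key_times) + noise * np.eye(len(key_times))
    K_star = rbf(query_times, key_times)
    return K_star @ np.linalg.solve(K, key_weights)


# Example: interpolate 3 matched keyframes (52 blendshapes) across 10 frames.
# weights = gp_interpolate(np.array([0.0, 4.0, 9.0]),
#                          np.random.rand(3, 52),
#                          np.arange(10.0))
```

One vectorized GP pass yields a dense animation curve for every blendshape channel from only the sparse matched keyframes, which is what lets artists edit at the keyframe level while the model fills in the rest.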
dc.description.number | 8 | |
dc.description.sectionheaders | Capture, Tracking, and Facial Animation | |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.volume | 41 | |
dc.identifier.doi | 10.1111/cgf.14639 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.pages | 243-254 | |
dc.identifier.pages | 12 pages | |
dc.identifier.uri | https://doi.org/10.1111/cgf.14639 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.1111/cgf14639 | |
dc.publisher | The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.subject | CCS Concepts: Computing methodologies --> Animation; Neural networks | |
dc.subject | Computing methodologies | |
dc.subject | Animation | |
dc.subject | Neural networks | |
dc.title | Monocular Facial Performance Capture Via Deep Expression Matching | en_US |