EG 2025 - Short Papers
Browsing EG 2025 - Short Papers by Subject "Animation"
Now showing 1 - 2 of 2
Item Neural Facial Deformation Transfer (The Eurographics Association, 2025)
Chandran, Prashanth; Ciccone, Loïc; Zoss, Gaspard; Bradley, Derek; Ceylan, Duygu; Li, Tzu-Mao
We address the practical problem of generating facial blendshapes and reference animations for a new 3D character in production environments where blendshape expressions and reference animations are readily available on a pre-defined template character. We propose Neural Facial Deformation Transfer (NFDT), a data-driven approach that transfers facial expressions from such a template character to new target characters given only the target's neutral shape. To accomplish this, we first present a simple data-generation strategy that automatically creates a large training dataset of paired template and target character shapes in the same expression. We then leverage this dataset through a decoder-only transformer that transfers facial expressions from the template character to a target character with high fidelity. Through quantitative evaluations and a user study, we demonstrate that NFDT surpasses the previous state of the art in facial expression transfer. NFDT produces good results across varying mesh topologies, generalizes to humanoid creatures, and can save time and cost in facial animation workflows.

Item Personalized Visual Dubbing through Virtual Dubber and Full Head Reenactment (The Eurographics Association, 2025)
Jeon, Bobae; Paquette, Eric; Mudur, Sudhir; Popa, Tiberiu; Ceylan, Duygu; Li, Tzu-Mao
Visual dubbing aims to modify facial expressions to "lip-sync" a new audio track. While person-generic talking-head generation methods achieve expressive lip synchronization across arbitrary identities, they usually lack person-specific details and fail to generate high-quality results. Conversely, person-specific methods require extensive training. Our method combines the strengths of both by incorporating a virtual dubber, a person-generic talking head, as an intermediate representation. We then employ an autoencoder-based person-specific identity-swapping network to transfer the actor's identity, enabling full-head reenactment that includes hair, face, ears, and neck. This eliminates artifacts while ensuring temporal consistency. Our quantitative and qualitative evaluations demonstrate that our method achieves a superior balance between lip-sync accuracy and realistic facial reenactment.
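
The NFDT entry above describes a decoder-only transformer that deforms a new character from its neutral shape given a template expression. The PyTorch sketch below shows one plausible arrangement of that idea; the per-vertex tokenization, the attention between target and template tokens, and all layer sizes are illustrative assumptions, not the architecture used in the paper.

# Minimal sketch of transformer-based expression transfer (illustrative
# assumptions only; not the NFDT architecture from the paper).
import torch
import torch.nn as nn

class ExpressionTransferSketch(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_layers=4):
        super().__init__()
        self.embed_target = nn.Linear(3, d_model)    # target neutral vertex positions
        self.embed_template = nn.Linear(3, d_model)  # template expression offsets
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        self.head = nn.Linear(d_model, 3)            # per-vertex displacement

    def forward(self, target_neutral, template_offsets):
        # target_neutral:   (B, V_target, 3)   neutral vertices of the new character
        # template_offsets: (B, V_template, 3) expression deltas on the template
        tgt = self.embed_target(target_neutral)
        mem = self.embed_template(template_offsets)
        out = self.decoder(tgt, mem)                 # target tokens attend to the template expression
        return target_neutral + self.head(out)       # deformed target vertices

model = ExpressionTransferSketch()
deformed = model(torch.randn(2, 500, 3), torch.randn(2, 400, 3))
print(deformed.shape)  # torch.Size([2, 500, 3])

Because the network only needs the target's neutral vertices at inference time, such a setup matches the abstract's premise that blendshapes and reference animations exist only for the template character.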
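
The visual-dubbing entry above mentions an autoencoder-based person-specific identity-swapping network applied to a virtual-dubber intermediate. The sketch below shows the classic shared-encoder, per-identity-decoder face-swap autoencoder as one way such a swap can be set up; the layer sizes, 64x64 resolution, and two-decoder design are illustrative assumptions, not the paper's actual network.

# Minimal sketch of a shared-encoder / per-identity-decoder swap autoencoder
# (illustrative assumptions only; not the paper's network).
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 4, stride=2, padding=1), nn.LeakyReLU(0.2))

def deconv_block(c_in, c_out):
    return nn.Sequential(nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1), nn.ReLU())

class SwapAutoencoderSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # One encoder shared by both identities captures pose and expression.
        self.encoder = nn.Sequential(conv_block(3, 64), conv_block(64, 128), conv_block(128, 256))
        # One decoder per identity reconstructs that person's appearance.
        self.decoder_dubber = nn.Sequential(
            deconv_block(256, 128), deconv_block(128, 64),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid())
        self.decoder_actor = nn.Sequential(
            deconv_block(256, 128), deconv_block(128, 64),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, frame, identity):
        # frame: (B, 3, 64, 64) image of either identity
        code = self.encoder(frame)
        return self.decoder_dubber(code) if identity == "dubber" else self.decoder_actor(code)

model = SwapAutoencoderSketch()
# Decoding a virtual-dubber frame with the actor's decoder is what would
# transfer the actor's identity onto the dubbed motion in this setup.
swapped = model(torch.rand(1, 3, 64, 64), identity="actor")
print(swapped.shape)  # torch.Size([1, 3, 64, 64])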