Browsing by Author "Chandran, Prashanth"

Now showing 1 - 10 of 10
    Facial Animation with Disentangled Identity and Motion using Transformers
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Chandran, Prashanth; Zoss, Gaspard; Gross, Markus; Gotardo, Paulo; Bradley, Derek; Dominik L. Michels; Soeren Pirk
    We propose a 3D+time framework for modeling dynamic sequences of 3D facial shapes, representing realistic non-rigid motion during a performance. Our work extends neural 3D morphable models by learning a motion manifold using a transformer architecture. More specifically, we derive a novel transformer-based autoencoder that can model and synthesize 3D geometry sequences of arbitrary length. This transformer naturally determines frame-to-frame correlations required to represent the motion manifold, via the internal self-attention mechanism. Furthermore, our method disentangles the constant facial identity from the time-varying facial expressions in a performance, using two separate codes to represent neutral identity and the performance itself within separate latent subspaces. Thus, the model represents identity-agnostic performances that can be paired with an arbitrary new identity code and fed through our new identity-modulated performance decoder; the result is a sequence of 3D meshes for the performance with the desired identity and temporal length. We demonstrate how our disentangled motion model has natural applications in performance synthesis, performance retargeting, key-frame interpolation and completion of missing data, performance denoising and retiming, and other potential applications that include full 3D body modeling.
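To make the disentanglement concrete, here is a minimal sketch, assuming a PyTorch setup with flattened per-frame meshes; the module names and sizes are hypothetical, not the authors' implementation. A transformer encoder pools a mesh sequence into a performance code, the neutral mesh supplies a separate identity code, and the decoder is conditioned on both, so swapping the identity code retargets the performance.

```python
# Minimal sketch (hypothetical, not the authors' code) of a transformer
# autoencoder that disentangles identity from performance.
import torch
import torch.nn as nn

class DisentangledFaceAutoencoder(nn.Module):
    def __init__(self, n_verts, d_model=256, code_dim=64):
        super().__init__()
        self.embed = nn.Linear(n_verts * 3, d_model)          # per-frame mesh -> token
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.to_perf = nn.Linear(d_model, code_dim)           # performance code (pooled over time)
        self.to_id = nn.Linear(n_verts * 3, code_dim)         # identity code from the neutral mesh
        self.decode = nn.Sequential(                          # identity-modulated frame decoder
            nn.Linear(d_model + 2 * code_dim, d_model), nn.ReLU(),
            nn.Linear(d_model, n_verts * 3))

    def forward(self, seq, neutral):
        # seq: (B, T, n_verts*3) flattened mesh sequence; neutral: (B, n_verts*3)
        tokens = self.encoder(self.embed(seq))                # self-attention over time
        perf = self.to_perf(tokens.mean(dim=1))               # identity-agnostic performance code
        ident = self.to_id(neutral)                           # swap this code to retarget the performance
        cond = torch.cat([perf, ident], dim=-1)
        cond = cond.unsqueeze(1).expand(-1, tokens.shape[1], -1)
        return self.decode(torch.cat([tokens, cond], dim=-1)) # (B, T, n_verts*3) mesh sequence

model = DisentangledFaceAutoencoder(n_verts=100)
out = model(torch.randn(2, 8, 300), torch.randn(2, 300))      # toy 8-frame sequence
```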
    Fast Dynamic Facial Wrinkles
    (The Eurographics Association, 2024) Weiss, Sebastian; Chandran, Prashanth; Zoss, Gaspard; Bradley, Derek; Hu, Ruizhen; Charalambous, Panayiotis
    We present a new method to animate the dynamic motion of skin micro wrinkles under facial expression deformation. Since wrinkles are formed as a reservoir of skin for stretching, our model only deforms wrinkles that are perpendicular to the stress axis. Specifically, those wrinkles become wider and shallower when stretched, and deeper and narrower when compressed. In contrast to previous methods that attempted to modify the neutral wrinkle displacement map, our approach is to modify the way wrinkles are constructed in the displacement map. To this end, we build upon a previous synthetic wrinkle generator that allows us to control the width and depth of individual wrinkles when generated on a per-frame basis. Furthermore, since constructing a displacement map per frame of animation is costly, we present a fast approximation approach using pre-computed displacement maps of wrinkles binned by stretch direction, which can be blended interactively in a shader. We compare both our high quality and fast methods with previous techniques for wrinkle animation and demonstrate that our work retains more realistic details.
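The fast path lends itself to a small sketch. The following NumPy toy, with hypothetical names and a made-up hat-kernel weighting, blends displacement maps pre-computed per stretch-direction bin according to the current stress direction; a production version would do the same blend interactively in a shader.

```python
# Toy sketch (hypothetical) of blending direction-binned wrinkle displacement
# maps, in the spirit of the fast approximation described above.
import numpy as np

def blend_wrinkle_maps(binned_maps, bin_angles, stress_angle):
    """binned_maps: (K, H, W) maps pre-computed per stretch bin;
    bin_angles: (K,) bin directions in radians; stress_angle: current direction."""
    # Wrinkle direction is unsigned, so compare angles modulo pi.
    diff = np.abs((bin_angles - stress_angle + np.pi / 2) % np.pi - np.pi / 2)
    weights = np.maximum(0.0, 1.0 - diff / (np.pi / len(bin_angles)))  # hat kernel
    weights /= weights.sum() + 1e-8
    return np.tensordot(weights, binned_maps, axes=1)  # (H, W) blended map

K, H, W = 8, 64, 64
maps = np.random.rand(K, H, W).astype(np.float32)
angles = np.linspace(0, np.pi, K, endpoint=False)
blended = blend_wrinkle_maps(maps, angles, stress_angle=0.3)
```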
    Graph-Based Synthesis for Skin Micro Wrinkles
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Weiss, Sebastian; Moulin, Jonathan; Chandran, Prashanth; Zoss, Gaspard; Gotardo, Paulo; Bradley, Derek; Memari, Pooran; Solomon, Justin
We present a novel graph-based simulation approach for generating micro wrinkle geometry on human skin, which can easily scale up to the micrometer range and millions of wrinkles. The simulation first samples pores on the skin and treats them as nodes in a graph. These nodes are then connected and the resulting edges become candidate wrinkles. An iterative optimization inspired by pedestrian trail formation is then used to assign weights to those edges, i.e., to carve out the wrinkles. Finally, we convert the graph to a detailed skin displacement map using novel shape functions implemented in graphics shaders. Our simulation and displacement map creation steps expose fine controls over the appearance at real-time framerates suitable for interactive exploration and design. We demonstrate the effectiveness of the generated wrinkles by enhancing state-of-the-art 3D reconstructions of real human subjects with simulated micro wrinkles, and furthermore propose an artist-driven design flow for adding micro wrinkles to fictional characters.
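A schematic sketch of this pipeline, assuming NumPy and SciPy, with a deliberately simplified reinforcement rule standing in for the actual trail-formation optimization:

```python
# Schematic sketch (hypothetical) of the graph pipeline described above:
# pores as nodes, nearby-pore edges as candidate wrinkles, and an iterative
# reinforcement loop loosely inspired by pedestrian trail formation.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
pores = rng.random((500, 2))                        # pore positions on a skin patch
tree = cKDTree(pores)
pairs = np.array(sorted(tree.query_pairs(r=0.08)))  # candidate wrinkle edges

weights = np.full(len(pairs), 0.1)                  # initial edge "trail" strength
for _ in range(50):
    usage = rng.random(len(pairs)) < weights        # stochastic "walkers" favor strong edges
    weights = np.clip(weights + 0.05 * usage - 0.01, 0.0, 1.0)  # reinforce used, decay all

carved = pairs[weights > 0.5]                       # surviving edges become wrinkles
# A production system would then rasterize these edges into a displacement
# map, with shape functions controlling per-edge wrinkle width and depth.
```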
    Improved Lighting Models for Facial Appearance Capture
    (The Eurographics Association, 2022) Xu, Yingyan; Riviere, Jérémy; Zoss, Gaspard; Chandran, Prashanth; Bradley, Derek; Gotardo, Paulo; Pelechano, Nuria; Vanderhaeghe, David
    Facial appearance capture techniques estimate geometry and reflectance properties of facial skin by performing a computationally intensive inverse rendering optimization in which one or more images are re-rendered a large number of times and compared to real images coming from multiple cameras. Due to the high computational burden, these techniques often make several simplifying assumptions to tame complexity and make the problem more tractable. For example, it is common to assume that the scene consists of only distant light sources, and ignore indirect bounces of light (on the surface and within the surface). Also, methods based on polarized lighting often simplify the light interaction with the surface and assume perfect separation of diffuse and specular reflectance. In this paper, we move in the opposite direction and demonstrate the impact on facial appearance capture quality when departing from these idealized conditions towards models that seek to more accurately represent the lighting, while at the same time minimally increasing computational burden. We compare the results obtained with a state-of-the-art appearance capture method [RGB*20], with and without our proposed improvements to the lighting model.
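The inverse-rendering loop at the heart of such methods can be sketched in a few lines. The following is a hypothetical Lambertian toy in PyTorch, not the compared method; a richer lighting model (area lights, indirect bounces) would change only the render function, which is the point the abstract makes about computational cost.

```python
# Minimal sketch (hypothetical) of the inverse-rendering loop referenced
# above: re-render with current parameters, compare to captured images,
# and update. A richer lighting model changes only `render`.
import torch

def render(albedo, normals, light_dirs, light_colors):
    # Lambertian shading under a set of distant lights; swapping in
    # area-light samples or indirect bounces refines this model.
    shading = torch.clamp(normals @ light_dirs.T, min=0.0) @ light_colors
    return albedo * shading

albedo = torch.rand(1000, 3, requires_grad=True)     # per-pixel parameters
normals = torch.nn.functional.normalize(torch.randn(1000, 3), dim=-1)
light_dirs = torch.nn.functional.normalize(torch.randn(4, 3), dim=-1)
light_colors = torch.rand(4, 3)
target = torch.rand(1000, 3)                         # captured image pixels

opt = torch.optim.Adam([albedo], lr=0.01)
for _ in range(100):
    opt.zero_grad()
    loss = ((render(albedo, normals, light_dirs, light_colors) - target) ** 2).mean()
    loss.backward()
    opt.step()
```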
    Learning Dynamic 3D Geometry and Texture for Video Face Swapping
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Otto, Christopher; Naruniec, Jacek; Helminger, Leonhard; Etterlin, Thomas; Mignone, Graziana; Chandran, Prashanth; Zoss, Gaspard; Schroers, Christopher; Gross, Markus; Gotardo, Paulo; Bradley, Derek; Weber, Romann; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
Face swapping is the process of applying a source actor's appearance to a target actor's performance in a video. This is a challenging visual effect that has seen increasing demand in film and television production. Recent work has shown that data-driven methods based on deep learning can produce compelling effects at production quality in a fraction of the time required for a traditional 3D pipeline. However, the dominant approach operates only on 2D imagery without reference to the underlying facial geometry or texture, resulting in poor generalization under novel viewpoints and little artistic control. Methods that do incorporate geometry rely on pre-learned facial priors that do not adapt well to particular geometric features of the source and target faces. We approach the problem of face swapping from the perspective of learning simultaneous convolutional facial autoencoders for the source and target identities, using a shared encoder network with identity-specific decoders. The key novelty in our approach is that each decoder first lifts the latent code into a 3D representation, comprising a dynamic face texture and a deformable 3D face shape, before projecting this 3D face back onto the input image using a differentiable renderer. The coupled autoencoders are trained only on videos of the source and target identities, without requiring 3D supervision. By leveraging the learned 3D geometry and texture, our method achieves face swapping with higher quality than when using off-the-shelf monocular 3D face reconstruction, and overall lower FID score than state-of-the-art 2D methods. Furthermore, our 3D representation allows for efficient artistic control over the result, which can be hard to achieve with existing 2D approaches.
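A hypothetical PyTorch skeleton of this architecture, with made-up layer sizes and the differentiable renderer omitted, illustrates the shared-encoder / identity-specific-decoder idea:

```python
# Architecture skeleton (hypothetical) mirroring the idea above: a shared
# encoder with identity-specific decoders that lift the latent code to a
# 3D shape and texture. The differentiable renderer is stubbed out.
import torch
import torch.nn as nn

class IdentityDecoder(nn.Module):
    def __init__(self, latent=128, n_verts=500, tex=32):
        super().__init__()
        self.n_verts, self.tex = n_verts, tex
        self.shape_head = nn.Linear(latent, n_verts * 3)       # deformable 3D face shape
        self.tex_head = nn.Linear(latent, tex * tex * 3)       # dynamic face texture

    def forward(self, z):
        verts = self.shape_head(z).view(-1, self.n_verts, 3)
        texture = torch.sigmoid(self.tex_head(z)).view(-1, 3, self.tex, self.tex)
        return verts, texture

class FaceSwapper(nn.Module):
    def __init__(self, latent=128):
        super().__init__()
        self.encoder = nn.Sequential(                          # shared across identities
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(32 * 16 * 16, latent))
        self.decoders = nn.ModuleDict({"source": IdentityDecoder(latent),
                                       "target": IdentityDecoder(latent)})

    def forward(self, image, identity):
        z = self.encoder(image)                                # shared latent code
        return self.decoders[identity](z)                      # swap key to swap faces

model = FaceSwapper()
verts, tex = model(torch.rand(1, 3, 64, 64), identity="source")
# Swapping: encode a target-identity frame, decode with the source decoder,
# then re-render the lifted 3D face onto the frame (renderer not shown).
```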
    Neural Facial Deformation Transfer
    (The Eurographics Association, 2025) Chandran, Prashanth; Ciccone, Loïc; Zoss, Gaspard; Bradley, Derek; Ceylan, Duygu; Li, Tzu-Mao
We address the practical problem of generating facial blendshapes and reference animations for a new 3D character in production environments where blendshape expressions and reference animations are readily available on a pre-defined template character. We propose Neural Facial Deformation Transfer (NFDT), a data-driven approach to transfer facial expressions from such a template character to new target characters given only the target's neutral shape. To accomplish this, we first present a simple data generation strategy to automatically create a large training dataset consisting of pairs of template and target character shapes in the same expression. We then leverage this dataset through a decoder-only transformer that transfers facial expressions from the template character to a target character in high fidelity. Through quantitative evaluations and a user study, we demonstrate that NFDT surpasses the previous state-of-the-art in facial expression transfer. NFDT provides good results across varying mesh topologies, generalizes to humanoid creatures, and can save time and cost in facial animation workflows.
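A hedged sketch of the transfer step, assuming per-vertex tokens and using a plain self-attention encoder stack in place of the paper's decoder-only design; all names and sizes are hypothetical:

```python
# Hedged sketch (hypothetical) of expression transfer in the spirit of NFDT:
# a transformer maps the template's expression delta, conditioned on the
# target's neutral geometry, to per-vertex offsets for the target.
import torch
import torch.nn as nn

class DeformationTransfer(nn.Module):
    def __init__(self, d_model=128):
        super().__init__()
        # Per-vertex tokens: template delta (3) + target neutral position (3).
        self.embed = nn.Linear(6, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=3)
        self.head = nn.Linear(d_model, 3)                    # target per-vertex offset

    def forward(self, template_delta, target_neutral):
        # Both inputs: (B, V, 3). Tokenizing per vertex keeps the model
        # tolerant of varying vertex counts across characters.
        tokens = self.embed(torch.cat([template_delta, target_neutral], dim=-1))
        offsets = self.head(self.backbone(tokens))
        return target_neutral + offsets                      # target in the expression

net = DeformationTransfer()
expr = net(torch.randn(1, 200, 3), torch.randn(1, 200, 3))   # toy 200-vertex faces
```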
    Next Generation 3D Face Models
    (The Eurographics Association, 2024) Chandran, Prashanth; Yang, Lingchen; Mania, Katerina; Artusi, Alessandro
Having a compact, expressive, and artist-friendly way to represent and manipulate human faces has been of prime interest to the visual effects community for the past several decades, as face models play a very important role in many face capture workflows. In this short course, we go over the evolution of 3D face models used to model and animate facial identity and expression in the computer graphics community, and discuss how the recent emergence of deep face models is transforming this landscape by enabling new artistic choices. In this first installment, the course will take the audience through the evolution of face models, starting with the simple blendshape models introduced in the 1980s, which continue to be extremely popular today, and progressing to recent deep shape models that utilize neural networks to represent and manipulate face shapes in an artist-friendly fashion. As the course is meant to be beginner-friendly, it will commence with a quick introduction to non-neural parametric shape models, starting with linear blendshape and morphable models. We will then switch focus to deep shape models, particularly those that offer intuitive control to artists. We will discuss multiple variants of such deep face models that i) allow semantic control, ii) are agnostic to the underlying topology of the manipulated shape, iii) provide the ability to explicitly model a sequence of 3D shapes or animations, and iv) allow for the simulation of physical effects. Applications that will be discussed include face shape synthesis, identity and expression interpolation, rig generation, performance retargeting, animation synthesis, and more.
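As a concrete reference point for the linear models the course starts from, a blendshape face is just the neutral mesh plus a weighted sum of expression offsets, as in this minimal NumPy illustration (all data here is random placeholder geometry):

```python
# Minimal illustration of the linear blendshape model discussed above:
# face = neutral + sum_i w_i * delta_i, with delta_i = blendshape_i - neutral.
import numpy as np

neutral = np.random.rand(1000, 3)                 # neutral face vertices
deltas = np.random.randn(10, 1000, 3) * 0.01      # 10 expression offsets
weights = np.array([0.5, 0, 0, 0.2, 0, 0, 0, 0, 0, 0.8])

face = neutral + np.tensordot(weights, deltas, axes=1)   # posed face, (1000, 3)
```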
    Next Generation 3D Face Models
    (The Eurographics Association, 2025) Chandran, Prashanth; Mantiuk, Rafal; Hildebrandt, Klaus
Data-driven 3D face models are an important tool for applications like facial animation, face reconstruction, and tracking, and can serve as a powerful prior for the complex non-rigid deformation of human faces. While linear 3D morphable models, or 3DMMs, have traditionally been employed by artists to cater to these applications, in the last few years several deep face models have been introduced that make use of neural networks to manipulate face shapes and offer greater flexibility while also retaining the intuitive control of traditional face models. This recent class of semantic deep face models has the potential to simplify existing facial animation workflows and enable artists to make a wider range of creative choices. However, as these neural tools are still very recent and fresh out of academic research, there is a need to start a conversation with artists and industry professionals on how such neural networks can be incorporated into existing workflows. This course aims to take a first step in this direction by providing a gentle introduction to several types of deep face models introduced in recent years by academia, and how each of them resolves several problems encountered in conventional facial animation. The primary intention of the course is to provide artists and industry professionals with an understanding of the state of the art in neural 3D face models, and to inspire them to consider how these new tools can be incorporated into existing industry workflows to produce better content faster. The course will also serve the purpose of providing a gentle introduction to face modeling and animation for students looking to get familiar with the field. Experienced participants with a strong background in the field will also be able to identify possible directions for future research. The course will be presented in a lecture format with slides. Concepts from related papers will be explained in enough detail to help the audience make informed decisions on using these tools and understand their current shortcomings.
    A Perceptual Shape Loss for Monocular 3D Face Reconstruction
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Otto, Christopher; Chandran, Prashanth; Zoss, Gaspard; Gross, Markus; Gotardo, Paulo; Bradley, Derek; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
Monocular 3D face reconstruction is a widespread topic, and existing approaches tackle the problem either through fast neural network inference or offline iterative reconstruction of face geometry. In either case, carefully designed energy functions are minimized, commonly including loss terms like a photometric loss, a landmark reprojection loss, and others. In this work we propose a new loss function for monocular face capture, inspired by how humans would perceive the quality of a 3D face reconstruction given a particular image. It is widely known that shading provides a strong indicator for 3D shape in the human visual system. As such, our new 'perceptual' shape loss aims to judge the quality of a 3D face estimate using only shading cues. Our loss is implemented as a discriminator-style neural network that takes an input face image and a shaded render of the geometry estimate, and then predicts a score that perceptually evaluates how well the shaded render matches the given image. This 'critic' network operates on the RGB image and geometry render alone, without requiring an estimate of the albedo or illumination in the scene. Furthermore, our loss operates entirely in image space and is thus agnostic to mesh topology. We show how our new perceptual shape loss can be combined with traditional energy terms for monocular 3D face optimization and deep neural network regression, improving upon current state-of-the-art results.
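A minimal sketch of such a critic, assuming PyTorch and made-up layer sizes; the actual network architecture and training procedure are described in the paper:

```python
# Sketch (hypothetical) of a discriminator-style critic as described above:
# it sees the RGB image and a shaded render of the geometry estimate, and
# outputs a scalar score usable as a loss term for the reconstruction.
import torch
import torch.nn as nn

class ShapeCritic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(                 # 6 input channels: image + render
            nn.Conv2d(6, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1))

    def forward(self, image, shaded_render):
        # No albedo or illumination estimate is needed: only shading cues.
        x = torch.cat([image, shaded_render], dim=1)
        return self.net(x).mean(dim=(1, 2, 3))    # per-sample perceptual score

critic = ShapeCritic()
score = critic(torch.rand(2, 3, 128, 128), torch.rand(2, 3, 128, 128))
# In an optimization loop, -score.mean() could be added alongside photometric
# and landmark terms as the perceptual shape penalty.
```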
    Stylize My Wrinkles: Bridging the Gap from Simulation to Reality
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Weiss, Sebastian; Stanhope, Jackson; Chandran, Prashanth; Zoss, Gaspard; Bradley, Derek; Bermano, Amit H.; Kalogerakis, Evangelos
Modeling realistic human skin with pores and wrinkles down to the milli- and micrometer resolution is a challenging task. Prior work showed that such micro geometry can be efficiently generated through simulation methods, or in specialized cases via 3D scanning of real skin. Simulation methods allow the wrinkles on the face to be highly customized, but can lead to a synthetic look. Scanning methods can lead to a more organic look for the micro details; however, these methods are only applicable to small skin patches due to the required image resolution. In this work we aim to overcome the gap between synthetic simulation and real skin scanning by proposing a method that can be applied to large skin regions (e.g., an entire face) with the controllability of simulation and the organic look of real micro details. Our method is based on style transfer at its core, where we use scanned displacement maps of real skin patches as style images and displacement maps from an artist-friendly simulation method as content images. We build a library of displacement maps as style images by employing a simplified scanning setup that can capture high-resolution patches of real skin. To create the content component for the style transfer and to facilitate parameter tuning for the simulation, we design a library of preset parameter values depicting different skin types, and present a new method to fit the simulation parameters to scanned skin patches. This allows fully automatic parameter generation, interpolation, and stylization across entire faces. We evaluate our method by generating realistic skin micro details for various subjects of different ages and genders, and demonstrate that our approach achieves a more organic and natural look than simulation alone.
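The style-transfer core can be sketched with classic Gram-matrix statistics in the spirit of Gatys-style transfer; the feature extractor below is an untrained stand-in and all sizes are hypothetical, so this shows the optimization shape rather than the paper's pipeline:

```python
# Toy sketch (hypothetical) of the style-transfer core described above:
# scanned skin patches supply the style target, a simulated displacement
# map supplies the content, and the result is optimized directly.
import torch
import torch.nn as nn

features = nn.Sequential(                         # stand-in feature extractor
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
features.requires_grad_(False)                    # only the image is optimized

def gram(feat):
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)    # (B, C, C) style statistics

content = torch.rand(1, 1, 128, 128)              # simulated displacement map
style = torch.rand(1, 1, 128, 128)                # scanned real-skin patch
result = content.clone().requires_grad_(True)

with torch.no_grad():
    f_c, f_s = features(content), features(style)

opt = torch.optim.Adam([result], lr=0.01)
for _ in range(200):
    opt.zero_grad()
    f_r = features(result)
    loss = ((f_r - f_c) ** 2).mean() + 10.0 * ((gram(f_r) - gram(f_s)) ** 2).mean()
    loss.backward()
    opt.step()
```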
