Browsing by Author "Liao, Jing"
Now showing 1 - 2 of 2
Item: Deep Portrait Lighting Enhancement with 3D Guidance (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Authors: Han, Fangzhou; Wang, Can; Du, Hao; Liao, Jing
Editors: Bousseau, Adrien; McGuire, Morgan

Despite recent breakthroughs in deep learning methods for image lighting enhancement, these methods are inferior when applied to portraits because their models ignore 3D facial information. To address this, we present a novel deep learning framework for portrait lighting enhancement based on 3D facial guidance. Our framework consists of two stages. In the first stage, corrected lighting parameters are predicted by a network from the input badly lit image, with the assistance of a 3D morphable model and a differentiable renderer. Given the predicted lighting parameters, the differentiable renderer renders a face image with corrected shading and texture, which serves as the 3D guidance for learning image lighting enhancement in the second stage. To better exploit the long-range correlations between the input and the guidance, in the second stage we design an image-to-image translation network with a novel transformer architecture, which automatically produces a lighting-enhanced result. Experimental results on the FFHQ dataset and on in-the-wild images show that the proposed method outperforms state-of-the-art methods in terms of both quantitative metrics and visual quality.

Item: Deep Video-Based Performance Cloning (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Aberman, Kfir; Shi, Mingyi; Liao, Jing; Lischinski, Dani; Chen, Baoquan; Cohen-Or, Daniel
Editors: Alliez, Pierre; Pellacini, Fabio

We present a new video-based performance cloning technique. After training a deep generative network on a reference video that captures the appearance and dynamics of a target actor, we are able to generate videos in which this actor reenacts other performances. All of the training data and the driving performances are provided as ordinary video segments, without motion capture or depth information. Our generative model is realized as a deep neural network with two branches, both of which train the same space-time conditional generator using shared weights. One branch, responsible for learning to generate the appearance of the target actor in various poses, uses paired training data self-generated from the reference video. The second branch uses unpaired data to improve the generation of temporally coherent video renditions of unseen pose sequences. Through data augmentation, our network is able to synthesize images of the target actor in poses never captured by the reference video. We demonstrate a variety of promising results in which our method generates temporally coherent videos for challenging scenarios where the reference and driving videos consist of very different dance performances.
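The first abstract above outlines a two-stage pipeline: a network regresses corrected lighting parameters, a differentiable renderer turns them into a 3D guidance image, and a transformer-based image-to-image network fuses the input with that guidance. The PyTorch sketch below illustrates only this wiring; the module names, the 9-coefficient spherical-harmonics lighting, and the toy render_guidance stand-in for the 3D morphable model plus differentiable renderer are all assumptions, not the authors' code.

```python
# Hypothetical sketch of the two-stage pipeline described in the abstract.
import torch
import torch.nn as nn

class LightingRegressor(nn.Module):
    """Stage 1: predict corrected lighting from a badly lit portrait
    (assumes 9-coefficient spherical-harmonics lighting)."""
    def __init__(self, n_sh=9):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_sh),
        )
    def forward(self, img):
        return self.backbone(img)  # (B, 9) lighting coefficients

def render_guidance(sh_coeffs, img):
    """Placeholder for the 3DMM + differentiable renderer: modulates the
    input by a gain derived from the first coefficient, keeping the
    interface differentiable end to end."""
    gain = torch.sigmoid(sh_coeffs[:, :1]).view(-1, 1, 1, 1)
    return img * (0.5 + gain)

class EnhanceTransformer(nn.Module):
    """Stage 2: image-to-image translation over (input, guidance); a small
    transformer encoder models long-range correlations between the two."""
    def __init__(self, dim=64, patch=8):
        super().__init__()
        self.embed = nn.Conv2d(6, dim, patch, stride=patch)  # patchify
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.unembed = nn.ConvTranspose2d(dim, 3, patch, stride=patch)
    def forward(self, img, guidance):
        x = self.embed(torch.cat([img, guidance], dim=1))  # (B, C, h, w)
        b, c, h, w = x.shape
        tokens = self.encoder(x.flatten(2).transpose(1, 2))  # (B, h*w, C)
        x = tokens.transpose(1, 2).view(b, c, h, w)
        return torch.tanh(self.unembed(x))  # lighting-enhanced image

# Usage: wire the two stages together on a dummy batch.
img = torch.rand(2, 3, 64, 64)
light = LightingRegressor()(img)
guide = render_guidance(light, img)
out = EnhanceTransformer()(img, guide)
print(out.shape)  # torch.Size([2, 3, 64, 64])
```

Because the placeholder renderer is differentiable, the stage-1 regressor in this sketch could in principle be trained through the stage-2 output, mirroring the role the differentiable renderer plays in the described framework.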
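The second abstract describes a single space-time conditional generator trained through two branches with shared weights: a paired branch that reconstructs reference frames from their own extracted poses, and an unpaired branch that improves temporal coherence on unseen driving poses. The sketch below is a minimal illustration of that training scheme under stated assumptions; the generator architecture, the L1 reconstruction loss, and the flicker penalty standing in for the paper's adversarial and temporal terms are all illustrative choices.

```python
# Hypothetical sketch of the two-branch, shared-weights training scheme.
import torch
import torch.nn as nn

class PoseToFrameGenerator(nn.Module):
    """Space-time conditional generator: maps a short window of stacked
    pose maps to one frame of the target actor (toy convolutional stand-in)."""
    def __init__(self, pose_channels=3, window=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(pose_channels * window, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, pose_window):  # (B, C*window, H, W)
        return self.net(pose_window)

generator = PoseToFrameGenerator()  # one generator ...
paired_branch = generator           # ... shared by both branches
unpaired_branch = generator

# Branch 1: paired data self-generated from the reference video
# (pose extracted from a reference frame -> reconstruct that frame).
ref_pose = torch.rand(2, 9, 64, 64)
ref_frame = torch.rand(2, 3, 64, 64)
recon = paired_branch(ref_pose)
paired_loss = nn.functional.l1_loss(recon, ref_frame)

# Branch 2: unpaired driving poses with no ground-truth frames, so the
# loss here penalizes flicker between consecutive generated frames
# (a stand-in for the paper's adversarial/temporal objectives).
drive_pose_t0 = torch.rand(2, 9, 64, 64)
drive_pose_t1 = drive_pose_t0 + 0.01 * torch.randn_like(drive_pose_t0)
frame_t0 = unpaired_branch(drive_pose_t0)
frame_t1 = unpaired_branch(drive_pose_t1)
temporal_loss = nn.functional.l1_loss(frame_t1, frame_t0)

# Shared weights receive gradients from both branches in one step.
loss = paired_loss + 0.1 * temporal_loss
loss.backward()
print(float(loss))
```

The key design point carried over from the abstract is weight sharing: both branches update the same generator, so appearance learned from paired reference data and temporal coherence learned from unpaired driving data accumulate in a single model.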