Show simple item record

dc.contributor.author: Tajima, Daichi [en_US]
dc.contributor.author: Kanamori, Yoshihiro [en_US]
dc.contributor.author: Endo, Yuki [en_US]
dc.contributor.editor: Zhang, Fang-Lue and Eisemann, Elmar and Singh, Karan [en_US]
dc.description.abstract: Modern supervised approaches for human image relighting rely on training data generated from 3D human models. However, such datasets are often small (e.g., Light Stage data covering only a few individuals) or limited to diffuse materials (e.g., commercial 3D-scanned human models). Consequently, human relighting techniques suffer from poor generalization capability and a synthetic-to-real domain gap. In this paper, we propose a two-stage method for single-image human relighting with domain adaptation. In the first stage, we train a neural network for diffuse-only relighting. In the second stage, we train another network that enhances non-diffuse reflection by learning the residuals between real photos and images reconstructed by the diffuse-only network. Thanks to the second stage, we achieve higher generalization capability across various cloth textures while reducing the domain gap. Furthermore, to handle input videos, we integrate an illumination-aware deep video prior that greatly reduces flickering artifacts even in challenging settings under dynamic illumination. [en_US]
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd. [en_US]
dc.subject: Computing methodologies
dc.subject: Image manipulation
dc.subject: Neural networks
dc.title: Relighting Humans in the Wild: Monocular Full-Body Human Relighting with Domain Adaptation [en_US]
dc.description.seriesinformation: Computer Graphics Forum
dc.description.sectionheaders: Image Synthesis and Enhancement
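The two-stage composition described in the abstract — a diffuse-only relighting network followed by a residual network for non-diffuse reflection — can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the function names, the stand-in computations, and the tiny image shapes are all hypothetical placeholders for the trained networks.

```python
import numpy as np

H, W = 4, 4  # tiny illustrative image size

def diffuse_relight(image, light):
    # Stand-in for the first-stage network: a trivial diffuse-only
    # relighting that scales the input by a scalar light intensity.
    return image * light

def nondiffuse_residual(image, diffuse_relit):
    # Stand-in for the second-stage network, which in the paper learns
    # residuals between real photos and diffuse-only reconstructions.
    # Here we fake a small residual from the reconstruction error.
    return 0.1 * (image - diffuse_relit)

image = np.full((H, W, 3), 0.5)  # dummy input photo
light = 0.8                      # dummy target illumination

diffuse = diffuse_relight(image, light)
final = np.clip(diffuse + nondiffuse_residual(image, diffuse), 0.0, 1.0)
print(final.shape)  # (4, 4, 3)
```

The point of the residual formulation is that the second stage only needs to model what the diffuse-only stage misses (specularities, cloth sheen), which the paper reports improves generalization across cloth textures while narrowing the synthetic-to-real gap.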

This item appears in the following Collection(s)

  • 40-Issue 7
    Pacific Graphics 2021 - Symposium Proceedings
