An Energy-Conserving Hair Shading Model Based on Neural Style Transfer
dc.contributor.author | Qiao, Zhi | en_US |
dc.contributor.author | Kanai, Takashi | en_US |
dc.contributor.editor | Lee, Sung-hee and Zollmann, Stefanie and Okabe, Makoto and Wuensche, Burkhard | en_US |
dc.date.accessioned | 2020-10-29T18:39:32Z | |
dc.date.available | 2020-10-29T18:39:32Z | |
dc.date.issued | 2020 | |
dc.description.abstract | We present a novel approach for shading photorealistic hair animation, an essential visual element for depicting the realistic hair of virtual characters. Our model shades high-quality hair quickly by extending conditional Generative Adversarial Networks. Furthermore, our method is much faster than previous, computationally onerous rendering algorithms and produces fewer artifacts than other neural image-translation methods. In this work, we provide a novel energy-conserving hair shading model that retains most of the hair's semi-transparent appearance and accurately reproduces its interaction with the lights of the scene. Our method is easy to implement, faster, and more computationally efficient than previous algorithms. | en_US |
dc.description.sectionheaders | Rendering | |
dc.description.seriesinformation | Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers | |
dc.identifier.doi | 10.2312/pg.20201222 | |
dc.identifier.isbn | 978-3-03868-120-5 | |
dc.identifier.pages | 1-6 | |
dc.identifier.uri | https://doi.org/10.2312/pg.20201222 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/pg20201222 | |
dc.publisher | The Eurographics Association | en_US |
dc.subject | Computing methodologies | |
dc.subject | Image based rendering | |
dc.subject | Neural networks | |
dc.title | An Energy-Conserving Hair Shading Model Based on Neural Style Transfer | en_US |