38-Issue 2
Browsing 38-Issue 2 by Subject "based rendering"
Now showing 1 - 2 of 2
Item: Deep Video-Based Performance Cloning (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Aberman, Kfir; Shi, Mingyi; Liao, Jing; Lischinski, Dani; Chen, Baoquan; Cohen-Or, Daniel; Alliez, Pierre and Pellacini, Fabio
We present a new video-based performance cloning technique. After training a deep generative network on a reference video that captures the appearance and dynamics of a target actor, we are able to generate videos in which this actor reenacts other performances. All of the training data and the driving performances are provided as ordinary video segments, without motion capture or depth information. Our generative model is realized as a deep neural network with two branches, both of which train the same space-time conditional generator using shared weights. One branch, responsible for learning to generate the appearance of the target actor in various poses, uses paired training data self-generated from the reference video. The second branch uses unpaired data to improve the generation of temporally coherent video renditions of unseen pose sequences. Through data augmentation, our network is able to synthesize images of the target actor in poses never captured in the reference video. We demonstrate a variety of promising results in which our method generates temporally coherent videos for challenging scenarios where the reference and driving videos consist of very different dance performances.
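A minimal illustrative sketch of the two-branch training scheme described in this abstract, written in PyTorch. The module structure, layer sizes, loss terms, and weights below are assumptions for illustration only, not the authors' implementation; in particular, the unpaired branch is reduced to a simple temporal-smoothness surrogate in place of the paper's full objective.

    import torch
    import torch.nn as nn

    class ConditionalGenerator(nn.Module):
        """Space-time conditional generator: maps a window of pose maps to frames."""
        def __init__(self, pose_channels=3, frame_channels=3, width=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(pose_channels, width, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv3d(width, frame_channels, kernel_size=3, padding=1),
                nn.Tanh(),
            )
        def forward(self, pose_seq):        # pose_seq: (B, C, T, H, W)
            return self.net(pose_seq)

    G = ConditionalGenerator()              # single generator shared by both branches
    opt = torch.optim.Adam(G.parameters(), lr=2e-4)
    l1 = nn.L1Loss()

    def training_step(paired_poses, paired_frames, unpaired_poses):
        # Branch 1: paired data self-generated from the reference video
        # (poses extracted from frames, with the frames as ground truth).
        recon = G(paired_poses)
        loss_paired = l1(recon, paired_frames)

        # Branch 2: unpaired pose sequences from other performances; a
        # temporal-smoothness term stands in for the paper's unpaired losses.
        fake = G(unpaired_poses)
        loss_temporal = l1(fake[:, :, 1:], fake[:, :, :-1])

        loss = loss_paired + 0.1 * loss_temporal   # weighting is illustrative
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()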
Item: Neural BTF Compression and Interpolation (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Rainer, Gilles; Jakob, Wenzel; Ghosh, Abhijeet; Weyrich, Tim; Alliez, Pierre and Pellacini, Fabio
The Bidirectional Texture Function (BTF) is a data-driven solution for rendering materials with complex appearance. A typical capture contains tens of thousands of images of a material sample under varying viewing and lighting conditions. While capable of faithfully recording complex light interactions in the material, its main drawback is the massive memory requirement, both for storage and for rendering, making effective compression of BTF data a critical component in practical applications. Common compression schemes used in practice are based on matrix factorization techniques, which preserve the discrete format of the original dataset. While this approach generalizes well to different materials, rendering with the compressed dataset still relies on interpolating between the closest samples. Depending on the material and the angular resolution of the BTF, this can lead to blurring and ghosting artifacts. An alternative approach fits analytic models to approximate the BTF data, using continuous functions that naturally interpolate well but whose expressive range is often not wide enough to faithfully recreate materials with complex non-local lighting effects (subsurface scattering, inter-reflections, shadowing and masking...). In light of these observations, we propose a neural network-based BTF representation inspired by autoencoders: our encoder compresses each texel to a small set of latent coefficients, while our decoder additionally takes a light and a view direction as input and outputs a single RGB vector at a time. This allows us to query reflectance values continuously over the light and view hemispheres, eliminating the need for linear interpolation between discrete samples. We train our architecture on fabric BTFs with challenging appearance and compare against standard PCA as a baseline. We achieve competitive compression ratios and high-quality interpolation/extrapolation without blurring or ghosting artifacts.
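A minimal sketch of the decoder side of the autoencoder-style BTF representation described in this abstract, written in PyTorch. Latent dimension, layer widths, and the direction encoding are assumptions chosen for illustration, not the paper's actual architecture.

    import torch
    import torch.nn as nn

    class BTFDecoder(nn.Module):
        """Maps (per-texel latent code, light direction, view direction) to one RGB value."""
        def __init__(self, latent_dim=8, hidden=64):
            super().__init__()
            # light and view directions passed as 3-D unit vectors (6 extra inputs)
            self.mlp = nn.Sequential(
                nn.Linear(latent_dim + 6, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 3),          # RGB reflectance
            )
        def forward(self, latent, wi, wo):
            return self.mlp(torch.cat([latent, wi, wo], dim=-1))

    # Continuous query: reflectance for arbitrary light/view directions,
    # with no interpolation between discrete BTF samples.
    decoder = BTFDecoder()
    latent = torch.randn(1, 8)                 # latent coefficients stored per texel by the encoder
    wi = torch.tensor([[0.0, 0.0, 1.0]])       # incoming light direction
    wo = torch.tensor([[0.3, 0.0, 0.954]])     # outgoing view direction
    rgb = decoder(latent, wi, wo)              # shape (1, 3)

Because the decoder is a continuous function of the light and view directions, a renderer can evaluate it at exactly the directions it needs, which is what removes the blurring and ghosting associated with interpolating discrete BTF samples.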