Show simple item record

dc.contributor.author: Le, Hoang [en_US]
dc.contributor.author: Liu, Feng [en_US]
dc.contributor.editor: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon [en_US]
dc.description.abstract: Novel view synthesis from sparse and unstructured input views faces challenges such as difficult dense 3D reconstruction and large occlusions. This paper addresses these problems by estimating proper appearance flows from the target view to the input views, which are used to warp and blend the input views. Our method first estimates a sparse set of 3D scene points using an off-the-shelf 3D reconstruction method and calculates sparse flows from the target view to the input views. It then performs appearance flow completion to estimate dense flows from the corresponding sparse ones. Specifically, we design a deep fully convolutional neural network that takes the sparse flows and input views as input and outputs the dense flows. Furthermore, we estimate the optical flows between input views and use them as references to guide the estimation of the dense flows between the target view and the input views. Besides the dense flows, our network also estimates the masks used to blend the multiple warped inputs when rendering the target view. Experiments on the KITTI benchmark show that our method generates high-quality novel views from sparse and unstructured input views. [en_US]
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd. [en_US]
dc.title: Appearance Flow Completion for Novel View Synthesis [en_US]
dc.description.seriesinformation: Computer Graphics Forum
dc.description.sectionheaders: Image Based Rendering
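The abstract describes a pipeline whose final step warps each input view toward the target using its completed dense flow and blends the warped views with per-pixel masks predicted by the network. Below is a minimal NumPy sketch of that warp-and-blend step only; the function names, nearest-neighbor sampling, and softmax mask normalization are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def warp_view(src, flow):
    """Backward-warp a source view into the target frame.

    src:  (H, W, 3) source image.
    flow: (H, W, 2) dense flow from target to source; flow[y, x] is the
          (dx, dy) offset of the source pixel that lands at target (y, x).
    Nearest-neighbor sampling keeps the sketch dependency-free (a real
    implementation would use bilinear sampling).
    """
    H, W = flow.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    sx = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    sy = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    return src[sy, sx]

def blend_views(views, flows, mask_logits):
    """Warp every input view with its dense flow, then blend the warps
    using softmax-normalized per-pixel masks (one logit map per view)."""
    warped = np.stack([warp_view(v, f) for v, f in zip(views, flows)])
    logits = np.stack(mask_logits)                       # (N, H, W)
    w = np.exp(logits - logits.max(axis=0, keepdims=True))
    w = w / w.sum(axis=0, keepdims=True)                 # softmax over views
    return (w[..., None] * warped).sum(axis=0)           # (H, W, 3) target
```

With zero flows and equal mask logits, `blend_views` reduces to a plain average of the input views, which is a quick sanity check on the blending weights.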

This item appears in the following Collection(s)

  • 38-Issue 7
    Pacific Graphics 2019 - Symposium Proceedings
