38-Issue 2
Browsing 38-Issue 2 by Subject "Computational photography"
Now showing 1 - 3 of 3
Item: Clear Skies Ahead: Towards Real-Time Automatic Sky Replacement in Video (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Halperin, Tavi; Cain, Harel; Bibi, Ofir; Werman, Michael
Editors: Alliez, Pierre and Pellacini, Fabio
Abstract: Digital videos such as those captured by a smartphone often exhibit exposure inconsistencies, a poorly exposed sky, or simply suffer from an uninteresting or plain-looking sky. Professionals may edit these videos with advanced, time-consuming tools unavailable to most users in order to replace the sky with a more expressive or imaginative one. In this work, we propose an algorithm for automatically replacing the sky region in a video with a different sky, providing non-professional users with a simple yet efficient tool to seamlessly replace the sky. The method is fast, achieving close to real-time performance on mobile devices, and the user's involvement can be as limited as simply selecting the replacement sky.

Item: Controlling Motion Blur in Synthetic Long Time Exposures (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Lancelle, Marcel; Dogan, Pelin; Gross, Markus
Editors: Alliez, Pierre and Pellacini, Fabio
Abstract: In a photo, motion blur can be used as an artistic style to convey motion and to direct attention. In panning or tracking shots, a moving object of interest is followed by the camera during a relatively long exposure. The goal is to obtain a blurred background while keeping the object sharp. Unfortunately, it can be difficult or even impossible to follow the object precisely; often, many attempts or specialized physical setups are needed. This paper presents a novel approach to creating such images. For capturing, the user is only required to take a casually recorded hand-held video that roughly follows the object. Our algorithm then produces a single image that simulates a stabilized long exposure. This is achieved by first warping all frames such that the object of interest is aligned to a reference frame. Then, optical-flow-based frame interpolation is used to reduce ghosting artifacts from temporal undersampling. Finally, the frames are averaged to create the result. As our method avoids segmentation and requires little to no user interaction, even challenging sequences can be processed successfully. In addition, artistic control is available in a number of ways, and the effect can also be applied to create videos with an exaggerated motion blur. Results are compared with previous methods and ground-truth simulations. The effectiveness of our method is demonstrated by applying it to hundreds of datasets; the most interesting results are shown in the paper and in the supplemental material.
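A minimal illustration of the stabilize-and-average pipeline described in the abstract above, assuming OpenCV and NumPy. The alignment here uses ORB feature matching and a single homography as a simple stand-in for the paper's object-aligned warping, and the optical-flow frame interpolation that suppresses ghosting is omitted; this is a sketch, not the authors' implementation.

# Sketch: stabilized synthetic long exposure (assumes OpenCV and NumPy;
# hypothetical helper names, not the authors' code).
import cv2
import numpy as np

def align_to_reference(frame, reference):
    """Warp `frame` onto `reference` using a homography from ORB matches."""
    g_ref = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
    g_frm = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    kp_ref, des_ref = orb.detectAndCompute(g_ref, None)
    kp_frm, des_frm = orb.detectAndCompute(g_frm, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_frm, des_ref), key=lambda m: m.distance)[:300]
    src = np.float32([kp_frm[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_ref[m.trainIdx].pt for m in matches])
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = reference.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h))

def synthetic_long_exposure(frames, ref_index=0):
    """Align every frame to the reference frame, then average them."""
    reference = frames[ref_index]
    aligned = [align_to_reference(f, reference) for f in frames]
    # Averaging the aligned frames simulates a long exposure that follows the
    # object; flow-based frame interpolation (omitted) would reduce ghosting
    # caused by temporal undersampling.
    stack = np.stack(aligned).astype(np.float32)
    return np.mean(stack, axis=0).astype(np.uint8)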
Item: Deep HDR Video from Sequences with Alternating Exposures (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Kalantari, Nima Khademi; Ramamoorthi, Ravi
Editors: Alliez, Pierre and Pellacini, Fabio
Abstract: A practical way to generate a high dynamic range (HDR) video with off-the-shelf cameras is to capture a sequence with alternating exposures and reconstruct the missing content at each frame. Unfortunately, existing approaches are typically slow and unable to handle challenging cases. In this paper, we propose a learning-based approach to this difficult problem, using two sequential convolutional neural networks (CNNs) to model the entire HDR video reconstruction process. In the first step, we align the neighboring frames to the current frame by estimating the flows between them with a network specifically designed for this application. We then combine the aligned and current images using another CNN to produce the final HDR frame. We train the two networks end to end by minimizing the error between the reconstructed and ground-truth HDR images on a set of training scenes. We produce our training data synthetically from existing HDR video datasets and simulate the imperfections of standard digital cameras with a simple approach. Experimental results demonstrate that our approach produces high-quality HDR videos and is an order of magnitude faster than state-of-the-art techniques for sequences with two and three alternating exposures.
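A schematic sketch of the two-network, align-then-merge design described in the abstract above, assuming PyTorch. The layer counts, channel sizes, and L1 loss are placeholders rather than the paper's actual architectures, and exposure normalization of the alternating-exposure inputs is left out; it only illustrates the structure of the pipeline and the end-to-end training step.

# Sketch: two-stage HDR video reconstruction (assumes PyTorch; placeholder
# architectures and loss, not the paper's networks).
import torch
import torch.nn as nn
import torch.nn.functional as F

def backward_warp(image, flow):
    """Warp `image` (N,C,H,W) with a per-pixel flow field (N,2,H,W) via grid_sample."""
    _, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(image.device)   # (2,H,W), x then y
    coords = base.unsqueeze(0) + flow                               # (N,2,H,W)
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0                         # normalize to [-1, 1]
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                            # (N,H,W,2)
    return F.grid_sample(image, grid, align_corners=True)

class FlowNet(nn.Module):
    """Placeholder for the flow-estimation CNN that aligns a neighbor to the current frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1))

    def forward(self, current, neighbor):
        return self.net(torch.cat([current, neighbor], dim=1))

class MergeNet(nn.Module):
    """Placeholder for the merge CNN that produces the final HDR frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(9, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1))

    def forward(self, current, aligned_prev, aligned_next):
        return self.net(torch.cat([current, aligned_prev, aligned_next], dim=1))

def reconstruct_hdr(flow_net, merge_net, prev_frame, current, next_frame):
    """Align both neighbors to the current frame, then merge them into an HDR frame."""
    aligned_prev = backward_warp(prev_frame, flow_net(current, prev_frame))
    aligned_next = backward_warp(next_frame, flow_net(current, next_frame))
    return merge_net(current, aligned_prev, aligned_next)

def training_step(flow_net, merge_net, optimizer, batch):
    """End-to-end step: minimize the error between the reconstruction and ground-truth HDR."""
    prev_frame, current, next_frame, gt_hdr = batch
    prediction = reconstruct_hdr(flow_net, merge_net, prev_frame, current, next_frame)
    loss = F.l1_loss(prediction, gt_hdr)   # stand-in loss, not necessarily the paper's
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()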