Browsing by Author "Yan, Qingan"
Now showing 1 - 3 of 3
Item: CLA-GAN: A Context and Lightness Aware Generative Adversarial Network for Shadow Removal (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Zhang, Ling; Long, Chengjiang; Yan, Qingan; Zhang, Xiaolong; Xiao, Chunxia
Editors: Eisemann, Elmar; Jacobson, Alec; Zhang, Fang-Lue
In this paper, we propose a novel context and lightness aware Generative Adversarial Network (CLA-GAN) framework for shadow removal, which refines a coarse result into the final shadow removal result in a coarse-to-fine fashion. At the refinement stage, we first obtain a lightness map using an encoder-decoder structure. With the lightness map and the coarse result as inputs, a second encoder-decoder produces the refined final result. Specifically, unlike current methods that are restricted to pixel-based features from shadow images, we embed a context-aware module into the refinement stage that exploits patch-based features. The embedded module transfers features from non-shadow regions to shadow regions to ensure appearance consistency in the recovered shadow-free images. Since we consider patches, the module can additionally enhance the spatial association and continuity among neighboring pixels. To make the model pay more attention to shadow regions during training, we use dynamic weights in the loss function. Moreover, we augment the inputs of the discriminator by rotating images by different degrees and use a rotation adversarial loss during training, which makes the discriminator more stable and robust. Extensive experiments demonstrate the effectiveness of the components in our CLA-GAN framework. Quantitative evaluation on different shadow datasets clearly shows the advantages of our CLA-GAN over state-of-the-art methods.
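As a rough illustration of two of the training ideas described in this abstract, the dynamic weighting of shadow regions in the loss and the rotation-augmented discriminator inputs, here is a minimal PyTorch sketch. The function names, the linear weighting scheme, and the tensor shapes are assumptions for illustration, not the authors' implementation.

# A minimal PyTorch sketch of two training ideas from the CLA-GAN abstract:
# (1) weighting the reconstruction loss more heavily inside the shadow mask,
# (2) feeding rotated copies of images to the discriminator.
# All names and the weighting scheme are illustrative assumptions.
import torch

def shadow_weighted_l1(pred, target, shadow_mask, shadow_weight=2.0):
    """L1 loss with larger per-pixel weights inside the shadow region."""
    weights = 1.0 + (shadow_weight - 1.0) * shadow_mask  # 1 outside, shadow_weight inside
    return (weights * (pred - target).abs()).mean()

def rotated_copies(images):
    """Return the batch rotated by 0/90/180/270 degrees for the discriminator."""
    return torch.cat([torch.rot90(images, k, dims=(2, 3)) for k in range(4)], dim=0)

# Toy usage with random tensors standing in for images and a binary shadow mask.
pred   = torch.rand(2, 3, 64, 64)
target = torch.rand(2, 3, 64, 64)
mask   = (torch.rand(2, 1, 64, 64) > 0.5).float()
loss = shadow_weighted_l1(pred, target, mask)
aug  = rotated_copies(pred)  # shape (8, 3, 64, 64)
print(loss.item(), aug.shape)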
Item: Pyramid Multi-View Stereo with Local Consistency (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Liao, Jie; Fu, Yanping; Yan, Qingan; Xiao, Chunxia
Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon
In this paper, we propose a PatchMatch-based Multi-View Stereo (MVS) algorithm that can efficiently estimate geometry for textureless areas. Conventional PatchMatch-based MVS algorithms estimate depth and normal hypotheses mainly by optimizing photometric consistency metrics between a patch in the reference image and its projection onto other images. Photometric consistency works well in textured regions but is not discriminative in textureless regions, which makes geometry estimation there difficult. To address this issue, we introduce local consistency. Based on the assumption that neighboring pixels with similar colors likely belong to the same surface and share approximately equal depth and normal values, local consistency guides depth and normal estimation with geometry from neighboring pixels of similar color. To accelerate the convergence of pixelwise local consistency across the image, we further introduce a pyramid architecture, similar to previous work, which also provides coarse estimates at the upper levels. We validate the effectiveness of our method on the ETH3D and Tanks and Temples benchmarks. Results show that our method outperforms the state-of-the-art. (A schematic sketch of the local-consistency idea appears after the listing.)

Item: Wavelet Flow: Optical Flow Guided Wavelet Facial Image Fusion (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Ding, Hong; Yan, Qingan; Fu, Gang; Xiao, Chunxia
Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon
Estimating the correspondence between images using optical flow is a key component of image fusion. However, computing optical flow between a pair of facial images that include backgrounds is challenging due to large differences in illumination, texture, color, and background. To improve optical flow results for image fusion, we propose a novel flow estimation method, wavelet flow, which can handle both the face and the background in the input images. The key idea is that instead of computing flow directly between the input image pair, we estimate the image flow by incorporating multi-scale image transfer and optical flow guided wavelet fusion. Multi-scale image transfer helps preserve the background and lighting detail of the input, while optical flow guided wavelet fusion produces a series of intermediate images for further optimization of fusion quality. Our approach significantly improves the performance of the optical flow algorithm and provides more natural fusion results for both faces and backgrounds. We evaluate our method on a variety of datasets and show that it clearly outperforms existing approaches.
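To make the wavelet-fusion step of the Wavelet Flow abstract concrete, here is a minimal sketch using the PyWavelets (pywt) package: two already flow-aligned grayscale images are decomposed with a 2-D DWT, their subbands are linearly blended, and the result is reconstructed. The flow-based alignment step is omitted, and the blend weight and wavelet choice are illustrative assumptions rather than the paper's method.

# A minimal sketch of wavelet-domain image fusion in the spirit of the
# Wavelet Flow abstract. Assumes img_a and img_b are already aligned
# (e.g., one warped toward the other with estimated optical flow).
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, alpha=0.5, wavelet="haar"):
    """Fuse two grayscale images by linearly blending their DWT subbands."""
    ca, (ha, va, da) = pywt.dwt2(img_a, wavelet)
    cb, (hb, vb, db) = pywt.dwt2(img_b, wavelet)
    fused = (alpha * ca + (1 - alpha) * cb,
             (alpha * ha + (1 - alpha) * hb,
              alpha * va + (1 - alpha) * vb,
              alpha * da + (1 - alpha) * db))
    return pywt.idwt2(fused, wavelet)

# Toy usage on random arrays standing in for aligned face images.
a = np.random.rand(64, 64)
b = np.random.rand(64, 64)
out = wavelet_fuse(a, b, alpha=0.7)
print(out.shape)  # (64, 64)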
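Returning to the Pyramid Multi-View Stereo item above, the following NumPy sketch shows one plausible reading of the local-consistency idea: a depth hypothesis at a pixel is penalized when it disagrees with the depths of similarly colored neighbors. The bilateral-style Gaussian color weight and all variable names are assumptions, not the paper's actual cost formulation.

# A schematic NumPy sketch of local consistency for PatchMatch MVS:
# neighbors with similar colors are assumed to share similar depths,
# so they pull the hypothesis score toward their own depth values.
import numpy as np

def local_consistency_cost(depth, image, y, x, hypothesis, sigma_c=10.0, radius=2):
    """Penalty for a depth hypothesis at (y, x) that disagrees with
    similarly colored neighbors (bilateral-style weighting)."""
    h, w = depth.shape
    cost, weight_sum = 0.0, 1e-8
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ny, nx = y + dy, x + dx
            if (dy == 0 and dx == 0) or not (0 <= ny < h and 0 <= nx < w):
                continue
            color_dist = np.linalg.norm(image[ny, nx] - image[y, x])
            wgt = np.exp(-color_dist**2 / (2 * sigma_c**2))  # similar color -> high weight
            cost += wgt * abs(depth[ny, nx] - hypothesis)
            weight_sum += wgt
    return cost / weight_sum

# Toy usage: on a constant-depth region, a matching hypothesis costs ~0,
# while a deviating hypothesis is penalized.
img = np.random.rand(16, 16, 3) * 255
dep = np.full((16, 16), 5.0)
print(local_consistency_cost(dep, img, 8, 8, hypothesis=5.0))  # ~0: consistent
print(local_consistency_cost(dep, img, 8, 8, hypothesis=9.0))  # larger penalty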