Browsing by Author "Wu, Baoyuan"
Now showing 1 - 2 of 2
Item
Customized Summarizations of Visual Data Collections
(© 2021 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021)
Authors: Yuan, Mengke; Ghanem, Bernard; Yan, Dong-Ming; Wu, Baoyuan; Zhang, Xiaopeng; Wonka, Peter
Editors: Benes, Bedrich; Hauser, Helwig

We propose a framework to generate customized summarizations of visual data collections, such as collections of images, materials, 3D shapes, and 3D scenes. We assume that the elements in the visual data collections can be mapped to a set of vectors in a feature space, in which a fitness score for each element can be defined, and we pose the problem of customized summarization as selecting a subset of these elements. We first describe the design choices a user should be able to specify for modeling customized summarizations and propose a corresponding user interface. We then formulate the problem as a constrained optimization problem with binary variables and propose a practical, fast algorithm based on the alternating direction method of multipliers (ADMM). Our results show that our problem formulation enables a wide variety of customized summarizations, and that our solver is significantly faster than state-of-the-art commercial integer programming solvers while producing better solutions than fast relaxation-based solvers.

Item
Pixel-wise Dense Detector for Image Inpainting
(The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Zhang, Ruisong; Quan, Weize; Wu, Baoyuan; Li, Zhifeng; Yan, Dong-Ming
Editors: Eisemann, Elmar; Jacobson, Alec; Zhang, Fang-Lue

Recent GAN-based image inpainting approaches adopt an averaging strategy to discriminate the generated image and output a single scalar, which inevitably loses the position information of visual artifacts. Moreover, the adversarial loss and the reconstruction loss (e.g., ℓ1 loss) are combined with trade-off weights that are difficult to tune.
In this paper, we propose a novel detection-based generative framework for image inpainting, which adopts a min-max strategy in an adversarial process. The generator follows an encoder-decoder architecture to fill the missing regions, and the detector, trained with weakly supervised learning, localizes the artifacts in a pixel-wise manner. This position information makes the generator pay more attention to artifact regions and further refine them. More importantly, we explicitly insert the output of the detector into the reconstruction loss with a weighting criterion, which balances the adversarial loss and the reconstruction loss automatically rather than by manual tuning. Experiments on multiple public datasets show the superior performance of the proposed framework. The source code is available at https://github.com/Evergrow/GDN_Inpainting.
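The inpainting paper's key idea of weighting the reconstruction loss by the detector's pixel-wise output can be sketched in a few lines. This is a minimal illustration, not the paper's exact formulation: the function name, the shape conventions, and the `lam` scaling factor are assumptions, and the artifact map here stands in for the detector's per-pixel prediction in [0, 1].

```python
import numpy as np

def weighted_l1_loss(pred, target, artifact_map, lam=1.0):
    """Reconstruction loss re-weighted by a pixel-wise artifact map.

    pred, target : float arrays of shape (H, W)
    artifact_map : per-pixel artifact probability in [0, 1];
                   higher values mark pixels the detector flags as artifacts
    lam          : hypothetical scaling factor for the detector weight
    """
    weight = 1.0 + lam * artifact_map          # flagged pixels count more
    return float(np.mean(weight * np.abs(pred - target)))
```

With a zero artifact map this reduces to the plain ℓ1 loss; as the detector flags more pixels, their residuals are up-weighted, so the generator is pushed hardest exactly where artifacts were localized.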
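The first item's ADMM solver handles general constrained binary programs over feature-space fitness scores; its actual objective and constraints are not given in the abstract. As an illustration only, under strong simplifying assumptions (a linear score objective and a fixed subset size k), a toy ADMM-style splitting for subset selection might look like the following. All names and parameters here are hypothetical.

```python
import numpy as np

def admm_binary_select(scores, k, rho=1.0, iters=50):
    """Toy ADMM-style subset selection: pick k items maximizing total score.

    Splits the binary variable into a continuous copy x and a binary copy z,
    alternating a continuous x-update, a projection of z onto binary vectors
    with exactly k ones, and a scaled dual update.
    """
    n = len(scores)
    z = np.zeros(n)
    u = np.zeros(n)                        # scaled dual variable
    for _ in range(iters):
        # x-update: maximize scores @ x - (rho/2) * ||x - z + u||^2
        x = z - u + scores / rho
        # z-update: project x + u onto {0,1}^n with exactly k ones
        z = np.zeros(n)
        z[np.argsort(x + u)[-k:]] = 1.0
        # dual update
        u = u + x - z
    return z
```

The z-projection is what keeps the iterates binary; the continuous x-update and the dual variable gradually reconcile the relaxed and binary copies, which is the general shape of ADMM splittings for binary programs.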