Browsing by Author "Gao, Lin"
Now showing 1 - 3 of 3
Item: Data‐Driven Shape Interpolation and Morphing Editing (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Gao, Lin; Chen, Shu‐Yu; Lai, Yu‐Kun; Xia, Shihong; Chen, Min and Zhang, Hao (Richard)
Shape interpolation has many applications in computer graphics, such as morphing for computer animation. In this paper, we propose a novel data‐driven mesh interpolation method. We adapt patch‐based linear rotation-invariant coordinates to effectively represent deformations of models in a shape collection, and utilize this information to guide the synthesis of interpolated shapes. Unlike previous data‐driven approaches, we use a rotation/translation-invariant representation which defines the plausible deformations in a global continuous space. By effectively exploiting the knowledge in the shape space, our method produces realistic interpolation results at interactive rates, outperforming state‐of‐the‐art methods for challenging cases. We further propose a novel approach to interactive editing of shape morphing according to the shape distribution. The user can explore the morphing path, select example models intuitively, and adjust the path with simple interactions to edit the morphing sequences. This provides a useful tool that allows users to generate the desired morphing with little effort. We demonstrate the effectiveness of our approach using various examples.
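The following is a minimal sketch of the general idea behind the abstract above, namely blending shapes in a rotation/translation-invariant feature space (per-patch rotations interpolated on the rotation manifold, scaling/shear interpolated linearly). The function name, feature layout, and dimensions are illustrative assumptions; the paper's patch-based linear rotation-invariant coordinates and the global reconstruction of vertex positions are not reproduced here.

import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def blend_patch_features(rot_a, rot_b, shear_a, shear_b, t):
    """Interpolate per-patch rotations (slerp) and scaling/shear (linear).

    rot_a, rot_b     : (P, 3, 3) per-patch rotation matrices of the two shapes
    shear_a, shear_b : (P, 3, 3) per-patch scaling/shear factors
    t                : interpolation parameter in [0, 1]
    """
    rotations = []
    for Ra, Rb in zip(rot_a, rot_b):
        # Spherical interpolation between the two patch rotations.
        slerp = Slerp([0.0, 1.0], Rotation.from_matrix([Ra, Rb]))
        rotations.append(slerp(t).as_matrix())
    # Scaling/shear parts live in a linear space, so blend them directly.
    shear_t = (1.0 - t) * shear_a + t * shear_b
    return np.stack(rotations), shear_t

# Toy usage: blend two random per-patch deformations halfway.
P = 4
Ra, Rb = Rotation.random(P).as_matrix(), Rotation.random(P).as_matrix()
Sa, Sb = np.tile(np.eye(3), (P, 1, 1)), 1.2 * np.tile(np.eye(3), (P, 1, 1))
rot_half, shear_half = blend_patch_features(Ra, Rb, Sa, Sb, 0.5)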
Item: Deep Deformation Detail Synthesis for Thin Shell Models (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Chen, Lan; Gao, Lin; Yang, Jie; Xu, Shibiao; Ye, Juntao; Zhang, Xiaopeng; Lai, Yu-Kun; Memari, Pooran; Solomon, Justin
In physics-based cloth animation, rich folds and detailed wrinkles are achieved at the cost of expensive computational resources and extensive manual tuning. Data-driven techniques reduce the computation significantly by utilizing a preprocessed database. One class of methods relies on human poses to synthesize fitted garments, but such methods cannot be applied to general cloth animations. Another class adds details to coarse meshes obtained through simulation and does not have this restriction. However, existing works usually use coordinate-based representations, which cannot cope with large-scale deformation and require dense vertex correspondences between coarse and fine meshes. Moreover, as such methods only add details, they require the coarse meshes to be sufficiently close to the fine meshes, which can be either impossible or require unrealistic constraints when generating the fine meshes. To address these challenges, we develop a temporally and spatially as-consistent-as-possible deformation representation (named TS-ACAP) and design a DeformTransformer network to learn the mapping from low-resolution meshes to ones with fine details. The TS-ACAP representation is designed to ensure both spatial and temporal consistency for sequential large-scale deformations from cloth animations. With this representation, our DeformTransformer network first uses two mesh-based encoders with shared convolutional kernels to extract the coarse and fine features, respectively. To transform the coarse features into fine ones, we leverage a spatial and temporal Transformer network consisting of vertex-level and frame-level attention mechanisms to ensure detail enhancement and temporal coherence of the prediction. Experimental results show that our method produces reliable and realistic animations on various datasets at high frame rates, with superior detail synthesis compared to existing methods.

Item: EL-GAN: Edge-Enhanced Generative Adversarial Network for Layout-to-Image Generation (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Gao, Lin; Wu, Lei; Meng, Xiangxu; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
Although some progress has been made in layout-to-image generation of complex scenes with multiple objects, object-level generation still suffers from distortion and poor recognizability. We argue that this is caused by the lack of feature encodings for edge information during image generation. To address these limitations, we propose a novel edge-enhanced Generative Adversarial Network for layout-to-image generation (termed EL-GAN). The feature encodings of edge information are learned from the multi-level features output by the generator and iteratively optimized along the generator's pipeline. Two new components are included at each generator level to enable multi-scale learning. One is the edge generation module (EGM), which converts the generator's multi-level features into images of different scales and extracts their edge maps. The other is the edge fusion module (EFM), which integrates the feature encodings refined from the edge maps into the subsequent image generation process by modulating the parameters in the normalization layers. Meanwhile, the discriminator is fed with frequency-sensitive image features, which greatly enhances the generation quality of the image's high-frequency edge contours and low-frequency regions. Extensive experiments show that EL-GAN outperforms the state-of-the-art methods on the COCO-Stuff and Visual Genome datasets. Our source code is available at https://github.com/Azure616/EL-GAN.
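The next snippet is a heavily simplified sketch of the coarse-to-fine attention idea described in the Deep Deformation Detail Synthesis abstract above: per-frame features attend over vertices (vertex-level attention) and per-vertex features attend over frames (frame-level attention) before being mapped to fine-detail features. The module name, tensor layout, and dimensions are assumptions for illustration, not the authors' DeformTransformer implementation.

import torch
import torch.nn as nn

class CoarseToFineAttention(nn.Module):
    def __init__(self, dim=128, heads=4, fine_dim=128):
        super().__init__()
        self.vertex_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.frame_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.to_fine = nn.Linear(dim, fine_dim)

    def forward(self, coarse):                        # (B, F, V, C)
        B, F, V, C = coarse.shape
        # Vertex-level attention: tokens are the vertices within each frame.
        x = coarse.reshape(B * F, V, C)
        x, _ = self.vertex_attn(x, x, x)
        x = x.reshape(B, F, V, C)
        # Frame-level attention: tokens are the frames for each vertex.
        x = x.permute(0, 2, 1, 3).reshape(B * V, F, C)
        x, _ = self.frame_attn(x, x, x)
        x = x.reshape(B, V, F, C).permute(0, 2, 1, 3)
        return self.to_fine(x)                        # (B, F, V, fine_dim)

# Toy usage on random coarse features: batch of 2, 8 frames, 500 vertices.
feats = torch.randn(2, 8, 500, 128)
fine = CoarseToFineAttention()(feats)
print(fine.shape)                                     # torch.Size([2, 8, 500, 128])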
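The last snippet sketches the "edge fusion" idea from the EL-GAN abstract above: feature encodings derived from an edge map modulate the scale and bias of a normalization layer, in the spirit of spatially adaptive normalization. Layer names and sizes are illustrative assumptions and do not reproduce the released EL-GAN code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeFusionNorm(nn.Module):
    def __init__(self, channels, edge_channels=1, hidden=64):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(edge_channels, hidden, 3, padding=1), nn.ReLU(inplace=True))
        self.to_gamma = nn.Conv2d(hidden, channels, 3, padding=1)
        self.to_beta = nn.Conv2d(hidden, channels, 3, padding=1)

    def forward(self, x, edge_map):
        # Resize the edge map to the feature resolution, then predict per-pixel
        # scale (gamma) and bias (beta) that modulate the normalized features.
        edge = F.interpolate(edge_map, size=x.shape[-2:], mode='nearest')
        h = self.shared(edge)
        return self.norm(x) * (1 + self.to_gamma(h)) + self.to_beta(h)

# Toy usage: 64-channel generator features modulated by a single-channel edge map.
feat = torch.randn(1, 64, 32, 32)
edges = torch.rand(1, 1, 256, 256)
out = EdgeFusionNorm(64)(feat, edges)
print(out.shape)                                      # torch.Size([1, 64, 32, 32])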