Browsing by Author "Xiao, Jun"
Now showing 1 - 3 of 3
Item
Combating Spurious Correlations in Loose-fitting Garment Animation Through Joint-Specific Feature Learning
(The Eurographics Association and John Wiley & Sons Ltd., 2023)
Diao, Junqi; Xiao, Jun; He, Yihong; Jiang, Haiyong; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
We address the 3D animation of loose-fitting garments from a sequence of body motions. State-of-the-art approaches treat all body joints as a whole to encode motion features, which often leads to learned spurious correlations between garment vertices and irrelevant joints, as shown in Fig. 1. To cope with this issue, we encode temporal motion features in a joint-wise manner and learn an association matrix that maps human joints only to the most related garment regions by encouraging its sparsity. In this way, spurious correlations are mitigated and better performance is achieved. Furthermore, we devise a joint-specific pose space deformation (PSD) to decompose the high-dimensional displacements into a combination of dynamic details caused by individual joint poses. Extensive experiments show that our method outperforms previous works on most indicators. Moreover, the resulting garment animations are free of the artifacts caused by spurious correlations, which further validates the effectiveness of our approach. The code is available at https://github.com/qiji77/JointNet.

Item
Progressive Graph Matching Network for Correspondences
(The Eurographics Association, 2023)
Feng, Huihang; Liu, Lupeng; Xiao, Jun; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
This paper presents a progressive graph matching network, abbreviated as PGMNet. The method is more explainable and matches features from easy to hard. PGMNet contains two major blocks: a Sinkformers module and a guided attention module. First, we use Sinkformers to obtain a similarity matrix, which can be seen as an assignment matrix between two sets of feature keypoints. Matches with the highest scores in both their rows and columns are selected as pre-matched correspondences. These pre-matched correspondences can be leveraged to guide the update and matching of ambiguous features. The matching quality is progressively improved as the transformer blocks go deeper, as visualized in Figure 1. Experiments show that our method achieves better results than typical attention-based methods.

Item
Unsupervised Learning of Disentangled 3D Representation from a Single Image
(The Eurographics Association, 2021)
Lv, Junliang; Jiang, Haiyong; Xiao, Jun; Bittner, Jirí; Waldner, Manuela
Learning a 3D representation from a single image is challenging given the ambiguity, occlusion, and perspective projection of an object in an image. Previous works either rely on image annotation or 3D supervision to learn meaningful factors of an object, or employ a StyleGAN-like framework for image synthesis. While the former depends on tedious annotation and even dense geometry ground truth, the latter usually cannot guarantee consistency of shapes across different view images. In this paper, we combine the advantages of both frameworks and propose an image disentanglement method based on 3D representation. Results show our method facilitates unsupervised 3D representation learning while preserving consistency between images.
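The first item above (Combating Spurious Correlations in Loose-fitting Garment Animation) rests on one concrete mechanism: a learnable joint-to-region association matrix, pushed toward sparsity, that routes each joint's motion feature only to the garment regions it plausibly drives. The sketch below illustrates that routing step in NumPy; all names, shapes, and the softmax normalization are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
num_joints, num_regions, feat_dim = 24, 8, 16

# Per-joint temporal motion features (one vector per body joint).
joint_feats = rng.standard_normal((num_joints, feat_dim))

# Learnable joint-to-region association logits; in training, an L1
# penalty on the resulting weights would encourage sparsity so each
# garment region listens to only a few related joints.
A = rng.standard_normal((num_joints, num_regions))

# Soft assignment: normalize over joints, so each region's feature is
# a convex mixture of joint features (columns of `weights` sum to 1).
weights = np.exp(A) / np.exp(A).sum(axis=0, keepdims=True)

# Region features = association-weighted mixture of joint features.
region_feats = weights.T @ joint_feats  # shape (num_regions, feat_dim)

# The kind of sparsity regularizer the abstract describes (illustrative).
l1_penalty = float(np.abs(weights).sum())
print(region_feats.shape)
```

In a real model the logits `A` would be optimized jointly with the network, and the sparsity term added to the training loss; this snippet only shows the data flow from joint features to region features.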
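The PGMNet abstract above describes a concrete two-step recipe: turn a similarity matrix into an approximate assignment matrix (the Sinkformers idea builds on Sinkhorn normalization), then keep only the matches that score highest in both their row and their column as pre-matched correspondences. A minimal NumPy sketch of those two steps, under assumed toy descriptors (function names and dimensions are my own, not the paper's):

```python
import numpy as np

def sinkhorn(scores, n_iters=20):
    """Alternate row and column normalization of exp(scores), driving
    the matrix toward a doubly stochastic (soft assignment) matrix."""
    P = np.exp(scores)
    for _ in range(n_iters):
        P /= P.sum(axis=1, keepdims=True)  # normalize rows
        P /= P.sum(axis=0, keepdims=True)  # normalize columns
    return P

def mutual_max_matches(P):
    """Keep pairs (i, j) where j is i's best match and i is j's best match,
    i.e. the highest score in both its row and its column."""
    row_best = P.argmax(axis=1)
    col_best = P.argmax(axis=0)
    return [(i, int(j)) for i, j in enumerate(row_best) if col_best[j] == i]

# Toy data: set B is a shuffled, slightly noisy copy of set A,
# so the ground-truth matching is the shuffle's inverse permutation.
rng = np.random.default_rng(0)
desc_a = rng.standard_normal((5, 32))
desc_b = desc_a[[2, 0, 4, 1, 3]] + 0.05 * rng.standard_normal((5, 32))

P = sinkhorn(desc_a @ desc_b.T)
matches = mutual_max_matches(P)
print(matches)
```

These mutual-max pairs play the role of the abstract's pre-matched correspondences; the remaining, ambiguous keypoints would then be refined by the guided attention stage, which this sketch does not attempt.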