Browsing by Author "Jiang, Haiyong"

Now showing 1 - 2 of 2
  • Item
    Combating Spurious Correlations in Loose-fitting Garment Animation Through Joint-Specific Feature Learning
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Diao, Junqi; Xiao, Jun; He, Yihong; Jiang, Haiyong; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    We address the 3D animation of loose-fitting garments from a sequence of body motions. State-of-the-art approaches treat all body joints as a whole when encoding motion features, which often gives rise to learned spurious correlations between garment vertices and irrelevant joints, as shown in Fig. 1. To cope with this issue, we encode temporal motion features in a joint-wise manner and learn an association matrix that maps each human joint only to its most related garment regions by encouraging the matrix's sparsity. In this way, spurious correlations are mitigated and better performance is achieved. Furthermore, we devise a joint-specific pose space deformation (PSD) that decomposes the high-dimensional displacements into a combination of dynamic details caused by individual joint poses. Extensive experiments show that our method outperforms previous works on most metrics. Moreover, the resulting garment animations are free of the artifacts caused by spurious correlations, which further validates the effectiveness of our approach. The code is available at https://github.com/qiji77/JointNet.
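    The core idea of the abstract, a learned association matrix that routes per-joint motion features only to related garment regions, can be sketched in a few lines of numpy. This is an illustrative assumption of the mechanism, not the paper's implementation (see the linked repository for that): the dimension names, the softmax normalization, and the L1 term standing in for their sparsity regularizer are all hypothetical.

    ```python
    import numpy as np

    # Hypothetical dimensions: J body joints, D motion features per joint,
    # G garment regions.
    J, D, G = 24, 8, 5
    rng = np.random.default_rng(0)

    joint_feats = rng.standard_normal((J, D))   # per-joint temporal motion features
    assoc_logits = rng.standard_normal((G, J))  # learnable association matrix (logits)

    # Normalize each region's weights over joints; in training, a sparsity
    # penalty would push weights for irrelevant joints toward zero.
    assoc = np.exp(assoc_logits) / np.exp(assoc_logits).sum(axis=1, keepdims=True)

    # Each garment region aggregates motion features only from related joints,
    # instead of conditioning every vertex on all joints at once.
    region_feats = assoc @ joint_feats          # shape (G, D)

    # A simple L1 term that a training loop could add to encourage sparsity.
    sparsity_loss = np.abs(assoc).sum()
    print(region_feats.shape)
    ```

    The point of the routing is that a vertex on a skirt hem, say, never sees arm-joint features, so the network cannot learn a spurious arm-to-hem correlation in the first place.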
  • Item
    Unsupervised Learning of Disentangled 3D Representation from a Single Image
    (The Eurographics Association, 2021) Lv, Junliang; Jiang, Haiyong; Xiao, Jun; Bittner, Jirí and Waldner, Manuela
    Learning a 3D representation from a single image is challenging given the ambiguity, occlusion, and perspective projection of an object in an image. Previous works either seek image annotations or 3D supervision to learn meaningful factors of an object, or employ a StyleGAN-like framework for image synthesis. While the former rely on tedious annotation and even dense geometry ground truth, the latter usually cannot guarantee shape consistency between images of different views. In this paper, we combine the advantages of both frameworks and propose an image disentanglement method based on a 3D representation. Results show our method facilitates unsupervised 3D representation learning while preserving consistency between images.

Eurographics Association © 2013-2025  |  System hosted at Graz University of Technology      