Browsing by Author "Han, Xiaoguang"
Now showing 1 - 5 of 5
Item 3D Keypoint Estimation Using Implicit Representation Learning (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Zhu, Xiangyu; Du, Dong; Huang, Haibin; Ma, Chongyang; Han, Xiaoguang; Memari, Pooran; Solomon, Justin
In this paper, we tackle the challenging problem of 3D keypoint estimation for general objects using a novel implicit representation. Previous works have demonstrated promising results for keypoint prediction through direct coordinate regression or heatmap-based inference. However, these methods are commonly studied for specific subjects, such as human bodies and faces, which have fixed keypoint structures. They also struggle in several practical scenarios where explicit or complete geometry is not given, including images and partial point clouds. Inspired by the recent success of advanced implicit representations in reconstruction tasks, we explore the idea of using an implicit field to represent keypoints. Specifically, our key idea is to employ spheres to represent 3D keypoints, thereby making the corresponding signed distance field learnable. Explicit keypoints can subsequently be extracted by our algorithm based on the Hough transform. Quantitative and qualitative evaluations show the superiority of our representation in terms of prediction accuracy.

Item DiffusionPointLabel: Annotated Point Cloud Generation with Diffusion Model (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Li, Tingting; Fu, Yunfei; Han, Xiaoguang; Liang, Hui; Zhang, Jian Jun; Chang, Jian; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
Point cloud generation aims to synthesize point clouds that do not exist in the supervised dataset. Generating a point cloud with specified semantic labels remains an under-explored problem. This paper proposes a formulation called DiffusionPointLabel, which performs point-label pair generation based on a denoising diffusion probabilistic model (DDPM). Specifically, we use a point cloud diffusion generative model and aggregate the intermediate features of the generator. On top of this, we propose a Feature Interpreter that transforms intermediate features into semantic labels. Furthermore, we employ an uncertainty measure to filter out unqualified point-label pairs, improving the quality of the generated point cloud dataset. Coupling these two designs enables us to automatically generate annotated point clouds, especially when supervised point-label pairs are scarce. Our method extends the application of point cloud generation models and surpasses state-of-the-art models.
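The 3D Keypoint Estimation entry above extracts explicit keypoints from the learned signed distance field with a Hough transform, but the abstract gives no detail on that step. The sketch below is a minimal, hypothetical illustration of one such voting scheme, not the paper's algorithm: it assumes every keypoint is a sphere of a known radius R, lets each near-surface sample vote for the center it implies, and reads centers off as vote maxima. All names, parameters, and the synthetic field are invented for the example.

# Minimal sketch of Hough-style keypoint extraction from a sphere SDF.
# Everything here is an illustrative assumption, not the paper's method:
# if each keypoint is represented as a sphere of known radius R, then a
# sample point p with value sdf(p) and outward normal n(p) implies a
# center at p - (sdf(p) + R) * n(p); centers show up as vote maxima.
import numpy as np

R = 0.1      # assumed keypoint-sphere radius
GRID = 64    # resolution of the sample grid and of the vote accumulator

# Synthetic stand-in for a learned field: SDF of two keypoint spheres.
centers_gt = np.array([[0.3, 0.3, 0.3], [0.7, 0.6, 0.5]])
axis = np.linspace(0.0, 1.0, GRID)
xs, ys, zs = np.meshgrid(axis, axis, axis, indexing="ij")
pts = np.stack([xs, ys, zs], axis=-1)                      # (G, G, G, 3)
dists = np.linalg.norm(pts[..., None, :] - centers_gt, axis=-1)
sdf = dists.min(axis=-1) - R                               # union of spheres

# Numerical SDF gradient; for a sphere it points away from the center.
grad = np.stack(np.gradient(sdf, axis[1] - axis[0]), axis=-1)
grad /= np.linalg.norm(grad, axis=-1, keepdims=True) + 1e-8

# Each near-surface sample casts one vote for the center it implies.
votes = (pts - (sdf[..., None] + R) * grad).reshape(-1, 3)
near = np.abs(sdf.reshape(-1)) < 0.05
idx = np.clip((votes[near] * (GRID - 1)).round().astype(int), 0, GRID - 1)
acc = np.zeros((GRID, GRID, GRID))
np.add.at(acc, tuple(idx.T), 1)

peak = np.unravel_index(acc.argmax(), acc.shape)
print("strongest vote near:", np.array(peak) / (GRID - 1))  # one of the two centers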
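The DiffusionPointLabel entry above filters unqualified point-label pairs with an uncertainty measure that the abstract leaves unspecified. A common choice for such a measure is the predictive entropy of the per-point label distribution; the following minimal sketch (all names hypothetical, not the paper's code) shows how that filter could look given softmax probabilities from the feature interpreter.

# Sketch of uncertainty-based filtering for generated point-label pairs.
# The paper's concrete measure is not given in the abstract; this uses
# predictive entropy over per-point softmax label probabilities, which is
# one standard choice. `probs` would come from the feature interpreter.
import numpy as np

def filter_point_labels(points, probs, max_entropy=0.5):
    """Keep only points whose label distribution is confident enough.

    points: (N, 3) generated point cloud
    probs:  (N, C) per-point label probabilities (rows sum to 1)
    """
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)  # (N,)
    keep = entropy < max_entropy
    labels = probs.argmax(axis=1)
    return points[keep], labels[keep]

# Toy usage: 4 points, 3 semantic classes.
pts = np.random.rand(4, 3)
probs = np.array([[0.98, 0.01, 0.01],   # confident  -> kept
                  [0.40, 0.35, 0.25],   # ambiguous  -> dropped
                  [0.90, 0.05, 0.05],   # confident  -> kept
                  [0.34, 0.33, 0.33]])  # ambiguous  -> dropped
kept_pts, kept_labels = filter_point_labels(pts, probs)
print(kept_pts.shape, kept_labels)      # (2, 3) [0 0]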
Item GA-Sketching: Shape Modeling from Multi-View Sketching with Geometry-Aligned Deep Implicit Functions (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Zhou, Jie; Luo, Zhongjin; Yu, Qian; Han, Xiaoguang; Fu, Hongbo; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
Sketch-based shape modeling aims to bridge the gap between 2D drawing and 3D modeling by providing an intuitive and accessible way to create 3D shapes from 2D sketches. However, existing methods still suffer from limited reconstruction quality and unfriendly multi-view interaction, hindering their practical application. This paper proposes a faithful and user-friendly iterative solution to these limitations by learning geometry-aligned deep implicit functions from one or more sketches. Our method lifts 2D sketches to volume-based feature tensors, which align strongly with the output 3D shape, enabling accurate reconstruction and faithful editing. Such a geometry-aligned feature encoding is well-suited to iterative modeling, since features from different viewpoints can easily be memorized or aggregated. Building on these advantages, we design a unified interactive system for sketch-based shape modeling. It lets users generate the desired geometry iteratively by drawing sketches from any number of viewpoints, and it allows them to edit the generated surface with a few local modifications. We demonstrate the effectiveness and practicality of our method with extensive experiments and user studies, which found that it outperforms existing methods in accuracy, efficiency, and user satisfaction. The source code of this project is available at https://github.com/LordLiang/GA-Sketching.

Item Learning Part Generation and Assembly for Sketching Man-Made Objects (© 2021 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021)
Du, Dong; Zhu, Heming; Nie, Yinyu; Han, Xiaoguang; Cui, Shuguang; Yu, Yizhou; Liu, Ligang; Benes, Bedrich and Hauser, Helwig
Modeling 3D objects with existing software usually requires many interactions, especially for users who lack basic knowledge of 3D geometry. Sketch-based modeling is a solution that eases the modeling procedure and has therefore been researched for decades. However, modeling a man-made shape with complex structures remains challenging. Existing methods adopt advanced deep learning techniques to map holistic sketches to 3D shapes, but they are still bottlenecked by complicated topologies. In this paper, we decouple the sketch2shape task into a part generation module and a part assembly module, leveraging deep learning for the implementation of both. Shifting the focus from holistic shapes to individual parts eases the learning of the shape generator and guarantees high-quality outputs. With the learned automatic part assembler, users need only a little manual tuning to obtain the desired layout. Extensive experiments and user studies demonstrate the usefulness of our proposed system.

Item Two-phase Hair Image Synthesis by Self-Enhancing Generative Model (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Qiu, Haonan; Wang, Chuan; Zhu, Hang; Zhu, Xiangyu; Gu, Jinjin; Han, Xiaoguang; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
Generating plausible hair images given limited guidance, such as sparse sketches or a low-resolution image, has become possible with the rise of Generative Adversarial Networks (GANs). Traditional image-to-image translation networks can generate recognizable results, but finer textures are usually lost and blur artifacts are common. In this paper, we propose a two-phase generative model for high-quality hair image synthesis. The two-phase pipeline first generates a coarse image with an existing image translation model and then applies a re-generating network with self-enhancing capability to the coarse image. The self-enhancing capability is achieved by a proposed differentiable layer, which extracts the structural texture and orientation maps from a hair image. Extensive experiments on two tasks, Sketch2Hair and Hair Super-Resolution, demonstrate that our approach synthesizes plausible hair images with finer details and reaches state-of-the-art quality.
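The Two-phase Hair Image Synthesis entry above relies on a differentiable layer that extracts orientation maps from a hair image; the abstract does not describe the layer itself. As a rough illustration of what an orientation map is, the sketch below uses a classical, non-learned baseline: convolve the image with a bank of oriented Gabor kernels and take the per-pixel argmax over orientations. This is a stand-in, not the paper's layer, and all parameter values are arbitrary assumptions.

# Sketch of an orientation-map extractor for hair images, as a classical
# baseline: filter with oriented Gabor kernels, pick the strongest response
# per pixel. Not the paper's differentiable layer.
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(theta, ksize=15, sigma=2.0, lambd=6.0):
    # Real Gabor kernel oscillating along direction `theta`; it responds
    # most strongly to stripes (strands) running perpendicular to `theta`.
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lambd)

def orientation_map(gray, n_orientations=8):
    # Per-pixel strand direction (radians in [0, pi)) plus a confidence map.
    thetas = np.linspace(0.0, np.pi, n_orientations, endpoint=False)
    responses = np.stack([
        np.abs(convolve2d(gray, gabor_kernel(t), mode="same", boundary="symm"))
        for t in thetas
    ])
    best = responses.argmax(axis=0)
    strand_dir = (thetas[best] + np.pi / 2.0) % np.pi  # strands run across the oscillation
    return strand_dir, responses.max(axis=0)

# Toy usage: synthetic strands along the 45-degree diagonal.
yy, xx = np.mgrid[0:64, 0:64]
strands = np.sin((xx - yy) * 0.8)
orient, conf = orientation_map(strands)
print(np.degrees(orient[32, 32]))  # approximately 45

A learned, differentiable variant of this idea would replace the fixed filter bank with trainable kernels and the hard argmax with a soft assignment, but that is speculation beyond what the abstract states.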