Browsing by Author "Zhu, Xiangyu"
Now showing 1 - 2 of 2
Item
3D Keypoint Estimation Using Implicit Representation Learning (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Authors: Zhu, Xiangyu; Du, Dong; Huang, Haibin; Ma, Chongyang; Han, Xiaoguang
Editors: Memari, Pooran; Solomon, Justin
Abstract: In this paper, we tackle the challenging problem of 3D keypoint estimation for general objects using a novel implicit representation. Previous works have demonstrated promising results for keypoint prediction through direct coordinate regression or heatmap-based inference. However, these methods are typically studied on specific subjects, such as human bodies and faces, which have fixed keypoint structures. They also struggle in several practical scenarios where explicit or complete geometry is not available, including images and partial point clouds. Inspired by the recent success of implicit representations in reconstruction tasks, we explore the idea of using an implicit field to represent keypoints. Specifically, our key idea is to employ spheres to represent 3D keypoints, which makes the corresponding signed distance field learnable. Explicit keypoints are then extracted by an algorithm based on the Hough transform. Quantitative and qualitative evaluations show the superiority of our representation in terms of prediction accuracy.

Item
Two-phase Hair Image Synthesis by Self-Enhancing Generative Model (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Qiu, Haonan; Wang, Chuan; Zhu, Hang; Zhu, Xiangyu; Gu, Jinjin; Han, Xiaoguang
Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon
Abstract: Generating a plausible hair image from limited guidance, such as a sparse sketch or a low-resolution image, has become possible with the rise of Generative Adversarial Networks (GANs). Traditional image-to-image translation networks can generate recognizable results, but finer textures are usually lost and blur artifacts are common. In this paper, we propose a two-phase generative model for high-quality hair image synthesis. The pipeline first generates a coarse image with an existing image translation model, then applies a re-generating network with self-enhancing capability to the coarse image. The self-enhancing capability is achieved by a proposed differentiable layer, which extracts the structural texture and orientation maps from a hair image. Extensive experiments on two tasks, Sketch2Hair and Hair Super-Resolution, demonstrate that our approach synthesizes plausible hair images with finer details and reaches the state of the art.
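For the keypoint paper above, the following is a minimal sketch of the stated idea: keypoints are represented as spheres so that a signed distance field over them is learnable, and explicit keypoints are recovered by Hough-style voting. The sphere radius, the voting grid, and the analytic field standing in for a trained network are all assumptions made for illustration; the paper's exact formulation may differ.

```python
import numpy as np

RADIUS = 0.05  # assumed sphere radius around each keypoint

def sphere_sdf(query, centers, r=RADIUS):
    """Signed distance to the union of keypoint spheres: min_i ||x - c_i|| - r.
    In the paper a network would predict this field; here it is analytic."""
    d = np.linalg.norm(query[:, None, :] - centers[None, :, :], axis=-1)
    return d.min(axis=1) - r

def hough_extract(query, sdf, grad, r=RADIUS, res=32, top_k=2):
    """Hough-style extraction: each query point votes for a candidate center
    located (sdf + r) along the negative normalized field gradient, and the
    densest vote cells in a coarse grid over [-1, 1]^3 become the keypoints."""
    g = grad / (np.linalg.norm(grad, axis=1, keepdims=True) + 1e-8)
    votes = query - (sdf[:, None] + r) * g
    idx = np.clip(((votes + 1.0) / 2.0 * res).astype(int), 0, res - 1)
    acc = np.zeros((res, res, res))
    np.add.at(acc, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    flat = np.argsort(acc.ravel())[::-1][:top_k]
    cells = np.stack(np.unravel_index(flat, acc.shape), axis=1)
    return (cells + 0.5) / res * 2.0 - 1.0  # cell centers in world coordinates

# Toy demo with two keypoints; gradients are computed analytically here,
# standing in for autograd on a trained network.
keypoints = np.array([[0.3, 0.0, 0.0], [-0.4, 0.2, 0.1]])
q = np.random.uniform(-1, 1, size=(20000, 3))
s = sphere_sdf(q, keypoints)
nearest = keypoints[np.argmin(
    np.linalg.norm(q[:, None, :] - keypoints[None, :, :], axis=-1), axis=1)]
grad = q - nearest  # direction of increasing distance from the nearest center
print(hough_extract(q, s, grad))  # ~= the two keypoint centers, to cell accuracy
```

A point at distance d from its nearest center carries sdf = d - r, so stepping back by (sdf + r) along the gradient lands exactly on that center; voting makes the extraction robust when the predicted field is noisy.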
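The hair-synthesis abstract does not specify how its differentiable layer extracts structural texture and orientation maps. A common formulation for hair orientation is a bank of oriented Gabor filters followed by a soft-argmax over orientation bins, which keeps the operation differentiable; the sketch below assumes that formulation, and the filter count, kernel size, and softmax temperature are illustrative choices rather than the paper's.

```python
import math
import torch
import torch.nn.functional as F

def gabor_bank(n_orient=8, ksize=9, sigma=2.0, lambd=4.0):
    """Real-valued Gabor kernels at n_orient angles in [0, pi)."""
    ax = torch.arange(ksize, dtype=torch.float32) - ksize // 2
    yy, xx = torch.meshgrid(ax, ax, indexing="ij")
    kernels = []
    for k in range(n_orient):
        theta = math.pi * k / n_orient
        xr = xx * math.cos(theta) + yy * math.sin(theta)
        yr = -xx * math.sin(theta) + yy * math.cos(theta)
        g = torch.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) \
            * torch.cos(2 * math.pi * xr / lambd)
        kernels.append(g - g.mean())  # zero mean: flat regions respond ~0
    return torch.stack(kernels).unsqueeze(1)  # (n_orient, 1, ksize, ksize)

def orientation_maps(gray, n_orient=8, temp=10.0):
    """Differentiable orientation + structural-texture extraction:
    filter responses -> softmax over orientations -> expected angle."""
    bank = gabor_bank(n_orient)
    resp = F.conv2d(gray, bank, padding=bank.shape[-1] // 2).abs()
    w = torch.softmax(temp * resp, dim=1)  # soft-argmax over orientation bins
    thetas = torch.arange(n_orient, dtype=torch.float32) * math.pi / n_orient
    # average on the double-angle circle: hair orientation is pi-periodic
    c = (w * torch.cos(2 * thetas)[None, :, None, None]).sum(dim=1)
    s = (w * torch.sin(2 * thetas)[None, :, None, None]).sum(dim=1)
    orientation = 0.5 * torch.atan2(s, c)  # per-pixel angle in (-pi/2, pi/2]
    confidence = resp.max(dim=1).values    # structural texture strength map
    return orientation, confidence

# Usage on a grayscale batch (B, 1, H, W); gradients flow through the
# softmax path, so the maps can drive a structure loss during training.
orient, conf = orientation_maps(torch.rand(1, 1, 64, 64))
```

Averaging on the double-angle circle matters because a hair strand at angle theta is indistinguishable from one at theta + pi; naive averaging of raw angles would cancel opposite bins instead of reinforcing them.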