Browsing by Author "Wang, Chuan"
Now showing 1 - 2 of 2
Item
Two-phase Hair Image Synthesis by Self-Enhancing Generative Model (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Qiu, Haonan; Wang, Chuan; Zhu, Hang; Zhu, Xiangyu; Gu, Jinjin; Han, Xiaoguang
Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon

Generating a plausible hair image from limited guidance, such as a sparse sketch or a low-resolution image, has become possible with the rise of Generative Adversarial Networks (GANs). Traditional image-to-image translation networks can generate recognizable results, but finer textures are usually lost and blur artifacts are common. In this paper, we propose a two-phase generative model for high-quality hair image synthesis. The pipeline first generates a coarse image with an existing image-translation model and then applies a re-generating network with self-enhancing capability to the coarse image. This self-enhancing capability is achieved by a proposed differentiable layer that extracts structural texture and orientation maps from a hair image. Extensive experiments on two tasks, Sketch2Hair and Hair Super-Resolution, demonstrate that our approach synthesizes plausible hair images with finer details and reaches the state of the art.

Item
UprightRL: Upright Orientation Estimation of 3D Shapes via Reinforcement Learning (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Authors: Chen, Luanmin; Xu, Juzhan; Wang, Chuan; Huang, Haibin; Huang, Hui; Hu, Ruizhen
Editors: Zhang, Fang-Lue; Eisemann, Elmar; Singh, Karan

In this paper, we study the problem of upright orientation estimation for 3D shapes from the perspective of reinforcement learning: we teach a machine (agent) to rotate a 3D shape step by step to upright, given its current observation. Unlike previous methods, we treat this problem as a sequential decision-making process rather than a strongly supervised learning problem. To achieve this, we propose UprightRL, a deep network architecture designed for upright orientation estimation. UprightRL consists mainly of two submodules, an Actor module and a Critic module, which are trained in a reinforcement-learning manner. Specifically, the Actor module selects an action from the action space to transform the point cloud and obtain the new point cloud for the next environment state, while the Critic module evaluates the strategy and guides the Actor in choosing the next action. Moreover, we design a reward function that gives the agent a positive reward when an action rotates the model toward its upright orientation and a negative reward otherwise. We conducted extensive experiments to demonstrate the effectiveness of the proposed model, and the results show that our network outperforms the state of the art. We also apply our method to a robot grasping-and-placing experiment to demonstrate its practicality.
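The first abstract hinges on a differentiable layer that extracts texture and orientation maps from a hair image. The paper does not spell out the layer's construction, so the following is only a minimal PyTorch sketch, assuming such a layer can be approximated by a fixed bank of oriented Gabor-like filters followed by a soft-argmax over orientation channels; gabor_kernel, OrientationLayer, and all parameter values are hypothetical, not the authors' implementation.

import math
import torch
import torch.nn.functional as F

def gabor_kernel(theta, ksize=9, sigma=2.0, lambd=4.0):
    # Real part of a Gabor filter oriented at angle theta (radians).
    half = ksize // 2
    ys, xs = torch.meshgrid(
        torch.arange(-half, half + 1, dtype=torch.float32),
        torch.arange(-half, half + 1, dtype=torch.float32),
        indexing="ij",
    )
    x_t = xs * math.cos(theta) + ys * math.sin(theta)
    y_t = -xs * math.sin(theta) + ys * math.cos(theta)
    gauss = torch.exp(-(x_t ** 2 + y_t ** 2) / (2 * sigma ** 2))
    return gauss * torch.cos(2 * math.pi * x_t / lambd)

class OrientationLayer(torch.nn.Module):
    # Hypothetical differentiable texture/orientation extraction.
    def __init__(self, n_orients=32, ksize=9):
        super().__init__()
        thetas = [i * math.pi / n_orients for i in range(n_orients)]
        bank = torch.stack([gabor_kernel(t, ksize) for t in thetas])
        self.register_buffer("bank", bank.unsqueeze(1))       # (N, 1, k, k)
        self.register_buffer("thetas", torch.tensor(thetas))  # (N,)

    def forward(self, gray):  # gray: (B, 1, H, W) hair image
        resp = F.conv2d(gray, self.bank, padding=self.bank.shape[-1] // 2)
        # Soft-argmax over orientation channels keeps the layer differentiable;
        # the 0/pi angle wrap-around is ignored here for brevity.
        weights = F.softmax(10.0 * resp, dim=1)
        orient = (weights * self.thetas.view(1, -1, 1, 1)).sum(1, keepdim=True)
        texture = resp.abs().max(1, keepdim=True).values  # structural strength
        return orient, texture

Because both outputs stay differentiable, a loss on the extracted orientation map can back-propagate into the re-generating network, which is what the abstract's "self-enhancing" phase appears to rely on.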
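The second abstract describes an actor-critic loop over an action space of point-cloud rotations with a sign-based reward. The skeleton below sketches that structure under stated assumptions: the PointNet-style encoder, the six-rotation action set, and the exact reward formula (the abstract gives only its sign) are guesses, not the published architecture.

import torch
import torch.nn.functional as F

N_ACTIONS = 6  # assumed: +/- small rotations about the x, y, z axes

class PointEncoder(torch.nn.Module):
    # Tiny PointNet-style encoder: per-point MLP plus max pooling (placeholder).
    def __init__(self, feat=128):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(3, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, feat), torch.nn.ReLU(),
        )
    def forward(self, pts):  # pts: (B, N, 3)
        return self.mlp(pts).max(dim=1).values  # (B, feat)

class ActorCritic(torch.nn.Module):
    def __init__(self, feat=128):
        super().__init__()
        self.enc = PointEncoder(feat)
        self.actor = torch.nn.Linear(feat, N_ACTIONS)  # action probabilities
        self.critic = torch.nn.Linear(feat, 1)         # state-value estimate
    def forward(self, pts):
        h = self.enc(pts)
        return F.softmax(self.actor(h), dim=-1), self.critic(h)

def reward(up_before, up_after):
    # +1 if the action brought the shape's up-axis closer to world up, -1 otherwise;
    # a simple stand-in matching only the sign behavior described in the abstract.
    target = torch.tensor([0.0, 0.0, 1.0]).expand_as(up_before)
    closer = (F.cosine_similarity(up_after, target, dim=-1)
              > F.cosine_similarity(up_before, target, dim=-1))
    return 2.0 * closer.float() - 1.0

A training step in this sketch would sample an action from the Actor's distribution, rotate the point cloud accordingly, score the transition with reward, and update both heads with a standard policy-gradient loss, repeating until the shape is upright.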