Browsing by Author "Kosinka, Jirí"
Now showing 1 - 3 of 3
Item: EUROGRAPHICS 2018: Posters Frontmatter (Eurographics Association, 2018)
Authors: Jain, Eakta; Kosinka, Jirí
Editors: Jain, Eakta; Kosinka, Jirí

Item: Semi-Sharp Subdivision Shading (The Eurographics Association, 2022)
Authors: Zhou, Jun; Boonstra, Jan; Kosinka, Jirí
Editors: Vangorp, Peter; Turner, Martin J.
Subdivision is a method for generating a limit surface from a coarse mesh by recursively splitting its faces into several smaller faces. This process yields smooth surfaces, but they often suffer from shading artifacts near extraordinary points because the normal field is of lower quality there. The idea of subdivision shading is to apply the same subdivision rules used to refine the geometry to the normals associated with mesh vertices (see the first sketch after these listings). This produces smoother normal fields, which in turn removes the shading artifacts. However, the original subdivision shading method does not support sharp and semi-sharp creases, which are important ingredients in subdivision surface modelling. We present two approaches that extend subdivision shading to models with (semi-)sharp creases.

Item: USTNet: Unsupervised Shape-to-Shape Translation via Disentangled Representations (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Wang, Haoran; Li, Jiaxin; Telea, Alexandru; Kosinka, Jirí; Wu, Zizhao
Editors: Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
We propose USTNet, a novel deep learning approach for learning shape-to-shape translation between unpaired domains in an unsupervised manner. The core of our approach is disentangled representation learning, which factors the discriminative features of 3D shapes into content and style codes. Given input shapes from multiple domains, USTNet disentangles their representation into style codes that capture distinctive traits across domains and content codes that capture domain-invariant traits. By fusing the style code of the target shape with the content code of the source shape, our method synthesizes new shapes that resemble the target's style while retaining the content features of the source (see the second sketch after these listings). Based on the shared style space, our method facilitates shape interpolation by manipulating style attributes from different domains. Furthermore, by extending the basic building blocks of our network from two-class to multi-class classification, we adapt USTNet to multi-domain shape-to-shape translation. Experimental results show that our approach generates realistic and natural translated shapes and improves on 3DSNet in quantitative evaluation metrics. Code is available at https://Haoran226.github.io/USTNet.
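
As a rough illustration of the subdivision shading idea summarized in the Semi-Sharp Subdivision Shading abstract above, the Python sketch below applies one standard Loop edge stencil to both vertex positions and vertex normals with identical weights, then renormalizes the normals. This is a minimal sketch of the general idea only; it is not the paper's implementation, it omits the (semi-)sharp crease handling the paper adds, and all function names and sample data are illustrative assumptions.

```python
import numpy as np

def apply_stencil(values, stencil):
    # Apply a subdivision stencil (list of (vertex index, weight) pairs) to
    # per-vertex values. The same affine combination is used for positions
    # and, in subdivision shading, for vertex normals.
    return sum(w * values[i] for i, w in stencil)

def subdivide_attributes(positions, normals, stencils):
    # One refinement step for geometry and shading normals with identical
    # stencils; the interpolated normals are re-normalized afterwards.
    new_pos = np.array([apply_stencil(positions, s) for s in stencils])
    new_nrm = np.array([apply_stencil(normals, s) for s in stencils])
    new_nrm /= np.linalg.norm(new_nrm, axis=1, keepdims=True)
    return new_pos, new_nrm

# Illustrative data: an interior Loop edge point uses weights 3/8, 3/8 on the
# edge endpoints (v0, v1) and 1/8, 1/8 on the two opposite vertices (v2, v3).
positions = np.array([[0.0, 0, 0], [1, 0, 0], [0.5, 1, 0], [0.5, -1, 0]])
normals = np.array([[0.0, 0, 1], [0, 0, 1], [0.2, 0, 0.98], [-0.2, 0, 0.98]])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
loop_edge_stencil = [(0, 3 / 8), (1, 3 / 8), (2, 1 / 8), (3, 1 / 8)]
new_pos, new_nrm = subdivide_attributes(positions, normals, [loop_edge_stencil])
```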
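
Similarly, the following minimal sketch illustrates the content/style disentanglement described in the USTNet abstract: one encoder produces a domain-invariant content code, another a domain-specific style code, and a decoder recombines them so that translation keeps the source's content and borrows the target's style. The class name, feature dimensions, and layer choices are assumptions for illustration only and are not the paper's actual network.

```python
import torch
import torch.nn as nn

class DisentangledTranslator(nn.Module):
    # Schematic only: operates on precomputed shape feature vectors; shape
    # feature extraction and the training losses are omitted.
    def __init__(self, feat_dim=256, content_dim=128, style_dim=8):
        super().__init__()
        self.content_enc = nn.Sequential(nn.Linear(feat_dim, content_dim), nn.ReLU())
        self.style_enc = nn.Sequential(nn.Linear(feat_dim, style_dim), nn.ReLU())
        self.decoder = nn.Linear(content_dim + style_dim, feat_dim)

    def translate(self, source_feat, target_feat):
        # Keep the source's content code, borrow the target's style code.
        c = self.content_enc(source_feat)
        s = self.style_enc(target_feat)
        return self.decoder(torch.cat([c, s], dim=-1))

model = DisentangledTranslator()
src = torch.randn(4, 256)   # features of source-domain shapes
tgt = torch.randn(4, 256)   # features of target-domain shapes
translated = model.translate(src, tgt)
```

In this picture, the style interpolation mentioned in the abstract would amount to blending two style codes before decoding.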