Browsing by Author "Cao, Juan"
Now showing 1 - 2 of 2
Item: Image Representation on Curved Optimal Triangulation
(© 2022 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022)
Authors: Xiao, Yanyang; Cao, Juan; Chen, Zhonggui
Editors: Hauser, Helwig; Alliez, Pierre
Image triangulation aims to generate an optimal partition with triangular elements to represent a given image. One bottleneck in ensuring approximation quality between the original image and a piecewise approximation over the triangulation is the inaccurate alignment of straight edges with curved features. In this paper, we propose a novel variational method called curved optimal triangulation, in which not all edges are straight segments: some may be quadratic Bézier curves. The energy function is defined as the total approximation error, determined by vertex locations, connectivity, and the bending of edges. The gradients of this function are derived explicitly in closed form so that the energy can be optimized efficiently. We test our method on several models to demonstrate its efficacy and its ability to preserve features. We also explore its applications in the automatic generation of stylized and low-poly images. With the same number of vertices, our curved optimal triangulation generates more accurate and visually pleasing results than previous methods that use only straight segments.

Item: A Style Transfer Network of Local Geometry for 3D Mesh Stylization
(The Eurographics Association, 2023)
Authors: Kang, Hongyuan; Dong, Xiao; Guo, Xufei; Cao, Juan; Chen, Zhonggui
Editors: Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
Style transfer for images has developed rapidly; however, only a few studies focus on geometric style transfer for 3D models. In this paper, we propose a style learning network that synthesizes local geometric textures with similar styles on a source mesh, driven by specific mesh or image features. Our network modifies the source mesh by predicting the displacement of each vertex along its normal direction to generate geometric details. To constrain the style of the source mesh to be consistent with that of a specific style mesh, we define a style loss on 2D projected images of the two meshes, based on a differentiable renderer. We extract a set of global and local features from multiple views of the 3D models via a pre-trained VGG network, driving the deformation of the source mesh through the style loss. Our network is flexible in style learning, as it can extract features from both meshes and images to guide the geometric deformation. Experiments verify the robustness of the proposed network and show superior results when transferring multiple styles to the source mesh. We also conduct experiments to analyze the effectiveness of the network design.
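The core deformation in the second abstract, displacing each vertex along its normal to add geometric detail, can be sketched in a few lines. This is a minimal, hypothetical illustration and not the paper's network: it assumes NumPy, treats the per-vertex scalar offsets (which the network would predict) as given, and uses simple area-weighted vertex normals.

```python
import numpy as np

def vertex_normals(verts, faces):
    """Per-vertex normals, accumulated from (area-weighted) face normals.

    verts: (V, 3) float array of vertex positions.
    faces: (F, 3) int array of triangle vertex indices.
    """
    tri = verts[faces]                                            # (F, 3, 3)
    # Cross product of two edge vectors gives a face normal whose
    # magnitude is twice the triangle area (area weighting for free).
    fn = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])   # (F, 3)
    n = np.zeros_like(verts)
    for i in range(3):
        # Scatter-add each face normal onto its three vertices.
        np.add.at(n, faces[:, i], fn)
    norm = np.linalg.norm(n, axis=1, keepdims=True)
    return n / np.clip(norm, 1e-12, None)

def displace_along_normals(verts, faces, offsets):
    """Move each vertex by a scalar offset along its normal, as the
    stylization network's predicted displacements would do."""
    return verts + offsets[:, None] * vertex_normals(verts, faces)

# Toy usage: a single triangle in the z = 0 plane has normal (0, 0, 1),
# so the offsets simply become the new z-coordinates.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
faces = np.array([[0, 1, 2]])
styled = displace_along_normals(verts, faces, np.array([0.1, 0.2, 0.3]))
```

In the actual method the offsets would come from a learned network and be optimized against the renderer-based style loss; here they are just inputs.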