Browsing by Author "Tarini, M."
Texture Inpainting for Photogrammetric Models (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023)
Maggiordomo, A.; Cignoni, P.; Tarini, M.; Hauser, Helwig and Alliez, Pierre

We devise a technique to remove the texturing artefacts typical of 3D models of real-world objects acquired by photogrammetric techniques. Our technique leverages recent advances in the inpainting of natural colour images, adapting them to this specific context. A neural network, modified and trained for our purposes, replaces the texture areas containing the defects with new, plausible patches of texels reconstructed from the surrounding surface texture. We train and apply the network model on locally reparametrized texture patches, so as to provide an input that simplifies the learning process by avoiding texture seams, unused texture areas, background, depth jumps and so on. We automatically extract appropriate training data from real-world datasets. We show two applications of the resulting method: first, a fully automatic tool that addresses all the problems detectable by analysing the UV-map of the input model; second, an interactive semi-automatic tool, presented to the user as a 3D 'fixing' brush that removes artefacts from any zone the user paints on. We demonstrate our method on a variety of real-world inputs and provide a usable reference implementation.

Visual Assessments of Functional Maps (The Eurographics Association, 2019)
Melzi, S.; Marin, R.; Musoni, P.; Castellani, U.; Tarini, M.; Bommes, David and Huang, Hui

Shape matching is a central topic in Geometry Processing, with numerous important applications in Computer Graphics and shape analysis, such as shape registration, shape interpolation, modeling, information transfer and many others.
A recent and successful class of shape-matching methods is based on the functional maps framework [OBCS*12], in which the correspondence between two surfaces is described as a mapping between functions. Several effective approaches have been proposed to produce accurate and reliable functional maps, leading to the need for a way to assess the quality of a given solution. In particular, standard quantitative evaluation methods focus mainly on the global matching error, disregarding the distracting effects of wrong correspondences on surface details. In this context, it is therefore important to pair quantitative numeric evaluations with a visual, qualitative assessment. Although this is usually not recognized as a problem, the latter task is not trivial, and we argue that the commonly employed solutions suffer from important limitations. In this work, we offer a new visual evaluation method based on transferring object-space normals across the two shapes and then visualizing the resulting lighting. In spite of its simplicity, this method produces readable images that reveal subtleties of the mapping and improve the direct comparability of alternative results.
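The core of the visual assessment described above can be sketched in a few lines: pull the object-space normals of the source shape onto the target through the map, then shade the target with a fixed light. This is only a minimal illustration under simplifying assumptions: it uses a plain pointwise vertex correspondence (the hypothetical array `corr`) as a stand-in for a converted functional map, and the mesh data and function names are placeholders, not the authors' implementation.

```python
import numpy as np

def vertex_normals(V, F):
    """Per-vertex normals from vertex positions V (n,3) and triangle indices F (m,3),
    accumulated from (area-weighted) face normals, then normalized."""
    N = np.zeros_like(V, dtype=float)
    fn = np.cross(V[F[:, 1]] - V[F[:, 0]], V[F[:, 2]] - V[F[:, 0]])
    for i in range(3):
        np.add.at(N, F[:, i], fn)  # unbuffered accumulation per incident vertex
    norms = np.linalg.norm(N, axis=1, keepdims=True)
    return N / np.clip(norms, 1e-12, None)

def transferred_shading(V_src, F_src, corr, light=(0.0, 0.0, 1.0)):
    """Lambertian intensity per target vertex, using normals pulled from the source.
    corr[j] = index of the source vertex matched to target vertex j
    (a hypothetical pointwise map standing in for a functional map)."""
    N_src = vertex_normals(V_src, F_src)
    N_transferred = N_src[corr]          # pull source normals through the map
    return np.clip(N_transferred @ np.asarray(light, dtype=float), 0.0, None)
```

With an identity correspondence the target simply reproduces the source's own shading; a wrong correspondence shows up as visibly inconsistent lighting, which is what makes mapping errors easy to spot by eye.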