Localized Gaussians as Self-Attention Weights for Point Clouds Correspondence
dc.contributor.author | Riva, Alessandro | en_US |
dc.contributor.author | Raganato, Alessandro | en_US |
dc.contributor.author | Melzi, Simone | en_US |
dc.contributor.editor | Caputo, Ariel | en_US |
dc.contributor.editor | Garro, Valeria | en_US |
dc.contributor.editor | Giachetti, Andrea | en_US |
dc.contributor.editor | Castellani, Umberto | en_US |
dc.contributor.editor | Dulecha, Tinsae Gebrechristos | en_US |
dc.date.accessioned | 2024-11-11T12:48:31Z | |
dc.date.available | 2024-11-11T12:48:31Z | |
dc.date.issued | 2024 | |
dc.description.abstract | Current data-driven methodologies for point cloud matching demand extensive training time and computational resources, presenting significant challenges for model deployment and application. In the point cloud matching task, recent advancements with an encoder-only Transformer architecture have revealed the emergence of semantically meaningful patterns in the attention heads, particularly resembling Gaussian functions centered on each point of the input shape. In this work, we further investigate this phenomenon by integrating these patterns as fixed attention weights within the attention heads of the Transformer architecture. We evaluate two variants: one utilizing predetermined variance values for the Gaussians, and another where the variance values are treated as learnable parameters. Additionally, we analyze the performance on noisy data and explore a possible way to improve robustness to noise. Our findings demonstrate that fixing the attention weights not only accelerates the training process but also enhances the stability of the optimization. Furthermore, we conduct an ablation study to identify the specific layers where the infused information is most impactful and to understand the reliance of the network on this information. | en_US |
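The abstract describes replacing learned self-attention scores with Gaussian functions centered on each input point. The snippet below is a minimal illustrative sketch of that idea, not the authors' implementation: the function name, tensor shapes, and the variance value sigma are assumptions chosen for the example; the learnable-variance variant mentioned in the abstract would correspond to treating sigma as a trainable parameter.

```python
import torch

def gaussian_attention(points, values, sigma=0.1):
    # points: (N, 3) point-cloud coordinates; values: (N, d) per-point features.
    # Sketch only: shapes and sigma are illustrative assumptions, not the paper's settings.
    sq_dist = torch.cdist(points, points).pow(2)         # (N, N) pairwise squared distances
    # Gaussian kernel centered on each point, used in place of learned attention scores.
    weights = torch.exp(-sq_dist / (2.0 * sigma ** 2))   # (N, N)
    # Row-normalize so each point's weights sum to 1, analogous to a softmax row.
    weights = weights / weights.sum(dim=-1, keepdim=True)
    # Mix the value vectors with the fixed Gaussian weights.
    return weights @ values                               # (N, d)

# Hypothetical usage on a random point cloud.
pts = torch.rand(1024, 3)
feat = torch.rand(1024, 64)
out = gaussian_attention(pts, feat, sigma=0.05)
```

In the learnable variant sketched above, sigma could be wrapped as a torch.nn.Parameter so the Gaussian width per head is optimized jointly with the rest of the network; this mirrors, under the stated assumptions, the two variants evaluated in the paper.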
dc.description.sectionheaders | Shape Analysis | |
dc.description.seriesinformation | Smart Tools and Applications in Graphics - Eurographics Italian Chapter Conference | |
dc.identifier.doi | 10.2312/stag.20241345 | |
dc.identifier.isbn | 978-3-03868-265-3 | |
dc.identifier.issn | 2617-4855 | |
dc.identifier.pages | 10 pages | |
dc.identifier.uri | https://doi.org/10.2312/stag.20241345 | |
dc.identifier.uri | https://diglib.eg.org/handle/10.2312/stag20241345 | |
dc.publisher | The Eurographics Association | en_US |
dc.rights | Attribution 4.0 International License | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.subject | CCS Concepts: Computing methodologies → Machine learning; Shape analysis; Theory of computation → Computational geometry | |
dc.subject | Computing methodologies → Machine learning | |
dc.subject | Shape analysis | |
dc.subject | Theory of computation → Computational geometry | |
dc.title | Localized Gaussians as Self-Attention Weights for Point Clouds Correspondence | en_US |