SRNet: A 3D Scene Recognition Network using Static Graph and Dense Semantic Fusion
Date
2020
Journal Title
Computer Graphics Forum
Journal ISSN
1467-8659
Publisher
The Eurographics Association and John Wiley & Sons Ltd.
Abstract
Point cloud based 3D scene recognition is fundamental to many real-world applications such as Simultaneous Localization and Mapping (SLAM). However, most existing methods do not take full advantage of the contextual semantic features of scenes, and their recognition ability is severely degraded by dynamic noise, such as points belonging to cars and pedestrians in the scene. To tackle these issues, we propose a new Scene Recognition Network, namely SRNet. In this model, to learn local features without being affected by dynamic noise, we propose a Static Graph Convolution (SGC) module, instances of which are stacked to form our backbone. Next, to further suppress dynamic noise, we introduce a Spatial Attention Module (SAM) that makes the feature descriptor pay more attention to immovable local areas that are more relevant to our task. Finally, to gain a deeper understanding of the scene, we design a Dense Semantic Fusion (DSF) strategy that integrates multi-level features during feature propagation, helping the model capture the contextual semantics of scenes. With these designs, SRNet maps scenes to discriminative and generalizable feature vectors, which are then used to find matching pairs. Experimental studies demonstrate that SRNet achieves a new state of the art in scene recognition and generalizes well to other point cloud based tasks.
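The abstract outlines a three-part pipeline: stacked graph-convolution blocks as the backbone, a spatial attention module that reweights local regions, and dense fusion of multi-level features into a global descriptor. The sketch below is a minimal PyTorch illustration of how such a pipeline could be assembled; the class names (GraphConvBlock, SpatialAttention, SceneDescriptor), the fixed k-NN graph, the layer sizes, and the max-pooling readout are illustrative assumptions and do not reproduce the authors' SGC, SAM, or DSF implementations.

# Minimal, self-contained sketch (assumed architecture, not the authors' code).
import torch
import torch.nn as nn


def knn_indices(pts, k):
    """Indices of the k nearest neighbours of each point (pts: (B, 3, N))."""
    d = torch.cdist(pts.transpose(1, 2), pts.transpose(1, 2))    # (B, N, N)
    return d.topk(k, largest=False).indices                      # (B, N, k)


class GraphConvBlock(nn.Module):
    """Edge-style convolution over a fixed neighbour graph (stand-in for SGC)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(2 * in_dim, out_dim, 1), nn.BatchNorm2d(out_dim), nn.ReLU())

    def forward(self, x, idx):                  # x: (B, C, N), idx: (B, N, k)
        B, C, N = x.shape
        k = idx.shape[-1]
        feat = x.transpose(1, 2)                                  # (B, N, C)
        nbrs = torch.gather(
            feat.unsqueeze(1).expand(B, N, N, C), 2,
            idx.unsqueeze(-1).expand(B, N, k, C))                 # (B, N, k, C)
        center = feat.unsqueeze(2).expand_as(nbrs)
        edge = torch.cat([center, nbrs - center], dim=-1)         # (B, N, k, 2C)
        return self.mlp(edge.permute(0, 3, 1, 2)).max(-1).values  # (B, out, N)


class SpatialAttention(nn.Module):
    """Per-point weights meant to emphasise static, task-relevant regions."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Sequential(nn.Conv1d(dim, dim // 4, 1), nn.ReLU(),
                                   nn.Conv1d(dim // 4, 1, 1), nn.Sigmoid())

    def forward(self, x):                       # x: (B, C, N)
        return x * self.score(x)


class SceneDescriptor(nn.Module):
    """Stacked graph blocks + attention + dense multi-level fusion -> global vector."""
    def __init__(self, dims=(3, 64, 128, 256), out_dim=256, k=16):
        super().__init__()
        self.k = k
        self.blocks = nn.ModuleList(
            [GraphConvBlock(dims[i], dims[i + 1]) for i in range(len(dims) - 1)])
        self.attn = SpatialAttention(sum(dims[1:]))
        self.head = nn.Conv1d(sum(dims[1:]), out_dim, 1)

    def forward(self, pts):                     # pts: (B, 3, N)
        idx = knn_indices(pts, self.k)          # graph built once from coordinates
        feats, x = [], pts
        for blk in self.blocks:
            x = blk(x, idx)
            feats.append(x)                     # keep every level for dense fusion
        fused = self.attn(torch.cat(feats, dim=1))       # (B, sum(dims[1:]), N)
        return self.head(fused).max(-1).values           # descriptor: (B, out_dim)


if __name__ == "__main__":
    cloud = torch.randn(2, 3, 1024)             # two toy scenes of 1024 points each
    print(SceneDescriptor()(cloud).shape)       # torch.Size([2, 256])

In a recognition setting, descriptors produced this way would typically be compared with a nearest-neighbour search (e.g. Euclidean or cosine distance) to retrieve matching scene pairs, as described in the abstract.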
BibTeX
@article{10.1111:cgf.14146,
journal = {Computer Graphics Forum},
title = {{SRNet: A 3D Scene Recognition Network using Static Graph and Dense Semantic Fusion}},
author = {Fan, Zhaoxin and Liu, Hongyan and He, Jun and Sun, Qi and Du, Xiaoyong},
year = {2020},
publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14146}
}