Browsing by Author "Sun, Qi"
Item
Effective User Studies in Computer Graphics (The Eurographics Association, 2023)
Malpica, Sandra; Sun, Qi; Kellnhofer, Petr; Beacco, Alejandro; Senel, Gizem; McDonnell, Rachel; Flores Vargas, Mauricio; Serrano, Ana
Editor: Slusallek, Philipp

User studies are a useful tool for researchers, allowing them to collect data on how users perceive, interact with, and process different types of sensory information. If planned in advance, user experiments can be leveraged at every stage of a research project, from early design, prototyping, and feature exploration, through validation and data collection for model training, to applied proofs of concept. User studies can provide the researcher with different types of information depending on the chosen methodology: user performance metrics, surveys and interviews, field studies, physiological data, etc. Considering human perception and other cognitive processes is particularly important in computer graphics, where most research produces outputs whose ultimate purpose is to be seen or perceived by a human. Measuring objectively and systematically how the information we generate is integrated into the representational space humans build to situate themselves in the world gives researchers more information with which to design better algorithms, tools, and techniques. In this tutorial we give an overview of good practices for user studies in computer graphics, with a particular focus on virtual reality use cases. We cover the basics of how to design, carry out, and analyze good user studies, as well as the particularities that must be taken into account in immersive environments.

Item
A Graph-based One-Shot Learning Method for Point Cloud Recognition (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Fan, Zhaoxin; Liu, Hongyan; He, Jun; Sun, Qi; Du, Xiaoyong
Editors: Eisemann, Elmar; Jacobson, Alec; Zhang, Fang-Lue

Point cloud based 3D vision tasks, such as 3D object recognition, are critical to many real-world applications such as autonomous driving. Many deep learning models for point cloud processing have been proposed recently, but they are all large-sample dependent: a large amount of manually labelled training data is needed to train each model, resulting in a high labelling cost. To tackle this problem, we propose a one-shot learning model for point cloud recognition, namely OS-PCR. Different from previous methods, our method formulates a new setting in which the model needs to see only one sample per class, memorized at inference time, whenever new classes must be recognized. To fulfill this task, we design three modules in the model: an Encoder Module, an Edge-conditioned Graph Convolutional Network Module, and a Query Module. To evaluate the proposed model, we build a one-shot learning benchmark dataset for 3D point cloud analysis and conduct comprehensive experiments on it to demonstrate the model's effectiveness.
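The listing carries only the abstract, so the following is purely an illustrative sketch of the idea behind an edge-conditioned graph convolution, the operation the Edge-conditioned Graph Convolutional Network Module is named after: a small filter network turns each edge feature (here, the neighbour's relative position) into a per-edge weight matrix. All function names, shapes, and parameters below are hypothetical and are not taken from the paper.

    import numpy as np

    def knn_graph(points, k):
        # Pairwise Euclidean distances; each row then holds the indices
        # of that point's k nearest neighbours (self excluded).
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        return np.argsort(d, axis=1)[:, :k]

    def edge_conditioned_conv(feats, points, nbr_idx, W, b, f_out):
        # One edge-conditioned graph convolution: a linear "filter network"
        # (W, b) maps each 3-d edge feature (relative position) to an
        # (f_out x f_in) weight matrix applied to the neighbour's feature;
        # results are averaged over the neighbourhood and passed through
        # a ReLU. (Hypothetical sketch, not the paper's implementation.)
        n, f_in = feats.shape
        out = np.zeros((n, f_out))
        for i in range(n):
            acc = np.zeros(f_out)
            for j in nbr_idx[i]:
                edge = points[j] - points[i]
                theta = (W @ edge + b).reshape(f_out, f_in)
                acc += theta @ feats[j]
            out[i] = np.maximum(acc / len(nbr_idx[i]), 0.0)
        return out

    # Toy usage: 128 random points, 8 input channels, 16 output channels.
    rng = np.random.default_rng(0)
    pts = rng.standard_normal((128, 3))
    x = rng.standard_normal((128, 8))
    W = 0.1 * rng.standard_normal((16 * 8, 3))
    b = np.zeros(16 * 8)
    y = edge_conditioned_conv(x, pts, knn_graph(pts, 8), W, b, 16)
    print(y.shape)  # (128, 16)

Stacking such layers and pooling the per-point outputs would yield the kind of global embedding that a query module could compare against the single memorized sample per class, for example by nearest-neighbour matching.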
Item
SRNet: A 3D Scene Recognition Network using Static Graph and Dense Semantic Fusion (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Fan, Zhaoxin; Liu, Hongyan; He, Jun; Sun, Qi; Du, Xiaoyong
Editors: Eisemann, Elmar; Jacobson, Alec; Zhang, Fang-Lue

Point cloud based 3D scene recognition is fundamental to many real-world applications such as Simultaneous Localization and Mapping (SLAM). However, most existing methods do not take full advantage of the contextual semantic features of scenes, and their recognition ability is severely affected by dynamic noise such as the points of cars and pedestrians in the scene. To tackle these issues, we propose a new Scene Recognition Network, namely SRNet. In this model, to learn local features without being affected by dynamic noise, we propose a Static Graph Convolution (SGC) module; stacked SGC modules form our backbone. Next, to further suppress dynamic noise, we introduce a Spatial Attention Module (SAM) that makes the feature descriptor pay more attention to immovable local areas that are more relevant to the task. Finally, to develop a deeper understanding of the scene, we design a Dense Semantic Fusion (DSF) strategy that integrates multi-level features during feature propagation, helping the model capture the contextual semantics of the scene. With these designs, SRNet maps scenes to discriminative and generalizable feature vectors, which are then used to find matching pairs. Experimental studies demonstrate that SRNet achieves new state-of-the-art performance on scene recognition and generalizes well to other point cloud based tasks.
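Again, the abstract describes the architecture only at a high level; the sketch below shows, under loose assumptions, how dense multi-level fusion and spatial-attention pooling could be combined to turn per-point features into a single scene descriptor. The weights are random placeholders and all names are hypothetical, not SRNet's actual implementation.

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def fuse_and_pool(level_feats, w_proj, w_score):
        # Dense fusion: concatenate per-point features from every backbone
        # level so no intermediate semantics are discarded, then project
        # down with a shared linear layer and a ReLU.
        fused = np.concatenate(level_feats, axis=1)   # (N, sum of channels)
        h = np.maximum(fused @ w_proj, 0.0)           # (N, d)
        # Spatial attention: score each point, softmax over all points,
        # and take the attention-weighted sum as the scene descriptor,
        # so high-scoring (e.g. static) regions dominate the result.
        alpha = softmax(h @ w_score)                  # (N,)
        return alpha @ h                              # (d,)

    # Toy usage: 256 points with 16-, 32-, and 64-channel level features.
    rng = np.random.default_rng(1)
    levels = [rng.standard_normal((256, c)) for c in (16, 32, 64)]
    w_proj = 0.05 * rng.standard_normal((16 + 32 + 64, 128))
    w_score = rng.standard_normal(128)
    desc = fuse_and_pool(levels, w_proj, w_score)
    print(desc.shape)  # (128,)

Two scenes could then be compared by, for example, the cosine similarity of their descriptors to decide whether they form a matching pair, which fits the retrieval use described in the abstract.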