Impact of Visual, Auditory, and Mixed Interfaces on Human-Robot Collaboration in Multi-Robot Environments
Date
2025
Publisher
The Eurographics Association
Abstract
In the field of Human-Robot Collaboration (HRC) research, many studies have explored the use of visual and/or auditory cues as robot caution interfaces. However, many of these studies have focused on interfaces such as displays of a single robot's future position or hazardous areas, without validating them in complex environments where multiple robots operate simultaneously and users must perceive and respond to several robots at once. An increase in the number of robots can exceed human cognitive limits, potentially reducing both safety and operational efficiency. To achieve safe and work-efficient HRC in environments with multiple robots, we propose a design for auditory and visual augmented reality interfaces that help workers stay aware of multiple robots. We evaluated both single-modal and multi-modal interfaces under varying numbers of robots in the environment to explore how user perception and safety are affected. We conducted a comparative evaluation using multiple metrics, including the number of collisions, the closest distance to a robot, interface response time, task completion time, and subjective measures. Although the multi-modal interfaces reduced the average number of collisions by approximately 19%–49% compared to the single-modal interfaces, and generally outperformed them, their relative advantage diminished as the number of robots increased. This may be attributed to the physical limitations of the environment, where avoiding multiple robots simultaneously becomes inherently difficult, thereby reducing the impact of interface design on user performance.
Description
CCS Concepts: Human-centered computing → Mixed / augmented reality; Human computer interaction (HCI)
@inproceedings{10.2312:egve.20251344,
booktitle = {ICAT-EGVE 2025 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
editor = {Jorge, Joaquim A. and Sakata, Nobuchika},
title = {{Impact of Visual, Auditory, and Mixed Interfaces on Human-Robot Collaboration in Multi-Robot Environments}},
author = {Nagahara, Takumi and Techasarntikul, Nattaon and Ohsita, Yuichi and Shimonishi, Hideyuki},
year = {2025},
publisher = {The Eurographics Association},
ISSN = {1727-530X},
ISBN = {978-3-03868-278-3},
DOI = {10.2312/egve.20251344}
}
