Trust and Safety in Autonomous Vehicles: Evaluating Contextual Visualizations for Highlighting, Prediction, and Anchoring

dc.contributor.author: Tran, Thi Thanh Hoa
dc.contributor.author: Peillard, Etienne
dc.contributor.author: Walsh, James
dc.contributor.author: Moreau, Guillaume
dc.contributor.author: Thomas, Bruce
dc.contributor.editor: Jorge, Joaquim A.
dc.contributor.editor: Sakata, Nobuchika
dc.date.accessioned: 2025-11-26T09:21:57Z
dc.date.available: 2025-11-26T09:21:57Z
dc.date.issued: 2025
dc.description.abstract: For autonomous vehicles (AVs) to be widely accepted, users must not only feel safe but also understand how the vehicle perceives and responds to its environment. Augmented Reality (AR) enables real-time, intuitive communication of such information, helping foster trust and enhance situation awareness (SA). This paper presents the results of three online user studies that investigate the design of different AR visualization strategies in simulated AV environments. Although the studies used prerecorded videos, they were designed to simulate ecologically realistic driving scenarios. Study 1 evaluates six types of highlight visualizations (bounding box, spotlight, point arrow, zoom, semantic segmentation, and baseline) across five driving scenarios varying in complexity and visibility. The results show that highlight effectiveness is scenario-dependent, with bounding boxes and spotlights being more effective in occluded or ambiguous conditions. Study 2 explores predictive visualizations, comparing single vs. multiple predicted paths and goals to communicate future trajectories. Findings indicate that single-path predictions are most effective for enhancing trust and safety, while multi-goal visualizations are perceived as less clear and less helpful. Study 3 examines the impact of spatial anchoring in AR by comparing screen-fixed and world-fixed presentations of time-to-contact information. Results demonstrate that world-fixed visualizations significantly improve trust, perceived safety, and object detectability compared to screen-fixed displays. Together, these studies provide key insights into when, what, and how AR visualizations should be presented in AVs to effectively support passenger understanding. The findings inform the design of adaptive AR interfaces that tailor visual feedback based on scenario complexity, uncertainty, and environmental context.
dc.description.sectionheaders: Interfaces
dc.description.seriesinformation: ICAT-EGVE 2025 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments
dc.identifier.doi: 10.2312/egve.20251351
dc.identifier.isbn: 978-3-03868-278-3
dc.identifier.issn: 1727-530X
dc.identifier.pages: 10 pages
dc.identifier.uri: https://doi.org/10.2312/egve.20251351
dc.identifier.uri: https://diglib.eg.org/handle/10.2312/egve20251351
dc.publisher: The Eurographics Association
dc.rights: Attribution 4.0 International License
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: CCS Concepts: Human-centered computing → Mixed / augmented reality
dc.title: Trust and Safety in Autonomous Vehicles: Evaluating Contextual Visualizations for Highlighting, Prediction, and Anchoring
Files
Original bundle
egve20251351.pdf (1.83 MB, Adobe Portable Document Format)
paper1002_mm.pdf (1.58 MB, Adobe Portable Document Format)