Balancing Speed and Visual Fidelity of Dynamic Point Cloud Rendering in VR
| dc.contributor.author | Muehlenbrock, Andre | en_US |
| dc.contributor.author | Weller, Rene | en_US |
| dc.contributor.author | Zachmann, Gabriel | en_US |
| dc.contributor.editor | Jorge, Joaquim A. | en_US |
| dc.contributor.editor | Sakata, Nobuchika | en_US |
| dc.date.accessioned | 2025-11-26T09:22:03Z | |
| dc.date.available | 2025-11-26T09:22:03Z | |
| dc.date.issued | 2025 | |
| dc.description.abstract | Efficient rendering of dynamic point clouds from multiple RGB-D cameras is essential for a wide range of VR/AR applications. In this work, we introduce and leverage two key parameters in a mesh-based rendering approach and conduct a systematic study of their impact on the trade-off between rendering speed and perceptual quality. We show that both parameters enable substantial performance improvements while causing only negligible visual degradation. Across four GPU generations and multiple deployment scenarios, binocular rendering of continuous dynamic point clouds from seven Microsoft Azure Kinects can reach triple-digit frame rates, even on mid-range GPUs. Our results provide practical guidelines for balancing visual fidelity and efficiency in real-time VR point cloud rendering, demonstrating that mesh-based approaches are a scalable and versatile solution for applications ranging from consumer headsets to large-scale projection systems. | en_US |
| dc.description.sectionheaders | Rendering and Sensing | |
| dc.description.seriesinformation | ICAT-EGVE 2025 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments | |
| dc.identifier.doi | 10.2312/egve.20251353 | |
| dc.identifier.isbn | 978-3-03868-278-3 | |
| dc.identifier.issn | 1727-530X | |
| dc.identifier.pages | 6 pages | |
| dc.identifier.uri | https://doi.org/10.2312/egve.20251353 | |
| dc.identifier.uri | https://diglib.eg.org/handle/10.2312/egve20251353 | |
| dc.publisher | The Eurographics Association | en_US |
| dc.rights | Attribution 4.0 International License | |
| dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
| dc.subject | CCS Concepts: Computing methodologies → Rendering; Virtual reality; Point-based models; Mesh geometry models | |
| dc.subject | Computing methodologies → Rendering | |
| dc.subject | Virtual reality | |
| dc.subject | Point-based models |
| dc.subject | Mesh geometry models | |
| dc.title | Balancing Speed and Visual Fidelity of Dynamic Point Cloud Rendering in VR | en_US |