Evaluating AI-based static stereoscopic rendering of indoor panoramic scenes

dc.contributor.author: Jashari, Sara
dc.contributor.author: Tukur, Muhammad
dc.contributor.author: Boraey, Yehia
dc.contributor.author: Alzubaidi, Mahmood
dc.contributor.author: Pintore, Giovanni
dc.contributor.author: Gobbetti, Enrico
dc.contributor.author: Villanueva, Alberto Jaspe
dc.contributor.author: Schneider, Jens
dc.contributor.author: Fetais, Noora
dc.contributor.author: Agus, Marco
dc.contributor.editor: Caputo, Ariel
dc.contributor.editor: Garro, Valeria
dc.contributor.editor: Giachetti, Andrea
dc.contributor.editor: Castellani, Umberto
dc.contributor.editor: Dulecha, Tinsae Gebrechristos
dc.date.accessioned: 2024-11-11T12:47:57Z
dc.date.available: 2024-11-11T12:47:57Z
dc.date.issued: 2024
dc.description.abstract: Panoramic imaging has recently become an extensively used technology for the representation and exploration of indoor environments. Panoramic cameras generate omnidirectional images that provide a comprehensive 360-degree view, making them a valuable tool for applications such as virtual tours in real estate, architecture, and cultural heritage. However, constructing truly immersive experiences from panoramic images presents challenges, particularly in generating panoramic stereo pairs that offer consistent depth cues and visual comfort across all viewing directions. Traditional stereo-imaging techniques do not directly apply to spherical panoramic images, requiring complex processing to avoid artifacts that can disrupt immersion. To address these challenges, various imaging and processing technologies have been developed, including multi-camera systems and computational methods that generate stereo images from a single panoramic input. Although effective, these solutions often involve complicated hardware and processing pipelines. Recently, deep learning approaches have emerged, enabling novel view generation from single panoramic images. While these methods show promise, they have not yet been thoroughly evaluated in practical scenarios. This paper presents a series of evaluation experiments aimed at assessing different technologies for creating static stereoscopic environments from omnidirectional imagery, with a focus on 3DOF immersive exploration. A user study was conducted using a WebXR prototype and a Meta Quest 3 headset to quantitatively and qualitatively compare traditional image composition techniques with AI-based methods. Our results indicate that while traditional methods provide a satisfactory level of immersion, AI-based generation is nearing a quality level suitable for deployment in web-based environments.
dc.description.sectionheaders: Computer Vision
dc.description.seriesinformation: Smart Tools and Applications in Graphics - Eurographics Italian Chapter Conference
dc.identifier.doi: 10.2312/stag.20241333
dc.identifier.isbn: 978-3-03868-265-3
dc.identifier.issn: 2617-4855
dc.identifier.pages: 10 pages
dc.identifier.uri: https://doi.org/10.2312/stag.20241333
dc.identifier.uri: https://diglib.eg.org/handle/10.2312/stag20241333
dc.publisher: The Eurographics Association
dc.rights: Attribution 4.0 International License
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: CCS Concepts: Computing methodologies → Computer vision; Virtual reality; Neural networks
dc.subject: Computing methodologies → Computer vision
dc.subject: Virtual reality
dc.subject: Neural networks
dc.title: Evaluating AI-based static stereoscopic rendering of indoor panoramic scenes
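
Illustrative note: as a rough sketch of the kind of 3DOF static stereoscopic viewing setup described in the abstract above, the TypeScript snippet below shows one common way to present a left/right equirectangular panorama pair in a WebXR session with three.js: each eye's image is mapped onto an inward-facing sphere and restricted to the corresponding per-eye render layer (three.js enables layer 1 on the left-eye camera and layer 2 on the right-eye camera during WebXR rendering). This is a minimal sketch under stated assumptions, not the paper's prototype; the file names left.jpg and right.jpg are hypothetical placeholders, and the stereo pair itself could come from traditional composition or AI-based generation.

import * as THREE from 'three';
import { VRButton } from 'three/examples/jsm/webxr/VRButton.js';

// Scene, camera, and a WebXR-enabled renderer (3DOF: head rotation only,
// the panoramas themselves are static).
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(70, window.innerWidth / window.innerHeight, 0.1, 100);
camera.layers.enable(1); // also show the left-eye sphere when viewing on a flat screen

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.xr.enabled = true;
document.body.appendChild(renderer.domElement);
document.body.appendChild(VRButton.createButton(renderer));

// Helper: wrap one equirectangular panorama onto an inward-facing sphere
// and assign it to a single eye's render layer.
function addEyeSphere(url: string, eyeLayer: number): void {
  const geometry = new THREE.SphereGeometry(50, 64, 32);
  geometry.scale(-1, 1, 1); // invert the sphere so the texture faces inward
  const texture = new THREE.TextureLoader().load(url);
  const material = new THREE.MeshBasicMaterial({ map: texture });
  const sphere = new THREE.Mesh(geometry, material);
  sphere.layers.set(eyeLayer); // 1 = left eye, 2 = right eye in three.js WebXR rendering
  scene.add(sphere);
}

// Hypothetical file names: one equirectangular panorama per eye.
addEyeSphere('left.jpg', 1);
addEyeSphere('right.jpg', 2);

renderer.setAnimationLoop(() => renderer.render(scene, camera));

On a WebXR-capable headset such as the Meta Quest 3, entering the session via the generated button yields a head-rotation-only (3DOF) view of the static stereoscopic panorama, which is the kind of environment the study evaluates.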
Files
Original bundle
Name: stag20241333.pdf
Size: 12.1 MB
Format: Adobe Portable Document Format