EG2022
Browsing EG2022 by Subject "3D imaging"
Now showing 1 - 3 of 3
Item: Fast and Fine Disparity Reconstruction for Wide-baseline Camera Arrays with Deep Neural Networks (The Eurographics Association, 2022)
Barrios, Théo; Gerhards, Julien; Prévost, Stéphanie; Loscos, Celine; Sauvage, Basile; Hasic-Telalovic, Jasminka

Recently, disparity-based 3D reconstruction for stereo camera pairs and light field cameras has been greatly improved by the rise of deep-learning-based methods. However, only a few of these approaches address wide-baseline camera arrays, which require specific solutions. In this paper, we introduce a deep-learning-based pipeline for multi-view disparity inference from the images of a wide-baseline camera array. The network builds a low-resolution disparity map and restores the original resolution with an additional upscaling step. Our solution handles wide-baseline array configurations and infers disparity for full-HD images at interactive rates, while reducing quantization error compared to the state of the art.

Item: From Capture to Immersive Viewing of 3D HDR Point Clouds (The Eurographics Association, 2022)
Loscos, Celine; Souchet, Philippe; Barrios, Théo; Valenzise, Giuseppe; Cozot, Rémi; Hahmann, Stefanie; Patow, Gustavo A.

The collaborators of the ReVeRY project address the design of a specific camera grid, a cost-efficient system that captures several viewpoints at once, possibly under several exposures, and the conversion of the multi-view, multi-exposure video stream into a high-quality 3D HDR point cloud. In the last two decades, industry and researchers have made significant advances in media content acquisition in three main directions: increased resolution and image quality with the new ultra-high-definition (UHD) standard; stereo capture for 3D content; and high-dynamic-range (HDR) imaging. Compression, representation, and interoperability of these new media are active research fields aiming to reduce data size while remaining perceptually accurate.
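The multi-exposure merge at the heart of HDR capture can be sketched roughly as follows. This is a generic illustration of exposure-weighted HDR fusion, not the ReVeRY pipeline itself; the `merge_hdr` function, the triangle weighting, and the exposure values are all assumptions for the example.

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge multi-exposure LDR images (linear RGB, values in [0, 1])
    into one HDR radiance map by exposure-weighted averaging.

    Generic sketch, not the ReVeRY implementation: the 'triangle'
    weight that down-weights under- and over-exposed pixels is an
    assumption chosen for illustration.
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # peaks at mid-gray, 0 at clipping
        num += w * (img / t)               # radiance estimate = pixel / exposure
        den += w
    return num / np.maximum(den, 1e-8)    # guard against zero total weight
```

For a pixel of scene radiance 0.5 captured at exposures 0.5 s and 1.0 s, the two observations 0.25 and 0.5 merge back to the common radiance 0.5, with the longer, better-exposed shot weighted more heavily.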
The originality of the project is to address both HDR and depth through the entire pipeline. Creativity is supported by several tools that address challenges at the different stages of the pipeline: camera setup, data processing, capture visualisation, virtual camera control, compression, and perceptually guided immersive visualisation. This tutorial presents the experience the project's researchers have acquired.

Item: Improved Lighting Models for Facial Appearance Capture (The Eurographics Association, 2022)
Xu, Yingyan; Riviere, Jérémy; Zoss, Gaspard; Chandran, Prashanth; Bradley, Derek; Gotardo, Paulo; Pelechano, Nuria; Vanderhaeghe, David

Facial appearance capture techniques estimate the geometry and reflectance properties of facial skin by performing a computationally intensive inverse-rendering optimization, in which one or more images are re-rendered many times and compared to real images from multiple cameras. Due to the high computational burden, these techniques often make several simplifying assumptions to tame complexity and make the problem more tractable. For example, it is common to assume that the scene contains only distant light sources and to ignore indirect bounces of light (on the surface and within the surface). Also, methods based on polarized lighting often simplify the light's interaction with the surface and assume perfect separation of diffuse and specular reflectance. In this paper, we move in the opposite direction and demonstrate the impact on facial appearance capture quality of departing from these idealized conditions towards models that more accurately represent the lighting, while only minimally increasing the computational burden. We compare the results obtained with a state-of-the-art appearance capture method [RGB*20], with and without our proposed improvements to the lighting model.
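The "perfect separation" assumption that the last abstract questions can be sketched as follows. This is the textbook idealization, not the method of [RGB*20]: it assumes specular reflection fully preserves polarization (so it appears only in the parallel-polarized image) while diffuse reflection is fully depolarized (split evenly between the two images). The function name and inputs are illustrative assumptions.

```python
import numpy as np

def separate_diffuse_specular(parallel, cross):
    """Idealized diffuse/specular separation under polarized lighting.

    parallel: image captured with analyzer parallel to the illumination
              polarization (diffuse/2 + specular under the ideal model)
    cross:    image captured with analyzer crossed with it (diffuse/2)

    A sketch of the simplifying assumption discussed in the abstract,
    not a reproduction of any specific capture system.
    """
    diffuse = 2.0 * cross                         # depolarized light splits 50/50
    specular = np.maximum(parallel - cross, 0.0)  # remainder attributed to specular
    return diffuse, specular
```

In practice skin only approximately satisfies this model, which is precisely why the paper examines lighting models that relax such idealized conditions.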