Browsing by Author "Eisert, Peter"
Item: Ernst Grube: A Contemporary Witness and His Memories Preserved with Volumetric Video (The Eurographics Association, 2020)
Authors: Worchel, Markus; Zepp, Marcus; Hu, Weiwen; Schreer, Oliver; Feldmann, Ingo; Eisert, Peter
Editors: Spagnuolo, Michela; Melero, Francisco Javier

''Ernst Grube - The Legacy'' is an immersive Virtual Reality documentary about the life of Ernst Grube, one of the last German Holocaust survivors. From interviews conducted inside a volumetric capture studio, dynamic full-body reconstructions of both the contemporary witness and his interviewer are recovered. The documentary places them in virtual recreations of historical sites, and viewers experience the interviews with unconstrained motion. As a step towards the documentary's production, prior work presents reconstruction results for one interview. However, the quality is unsatisfactory and does not meet the requirements of the historical context. In this paper, we take the next step and revise the volumetric reconstruction pipeline used. We show that our improvements to depth estimation and a new depth map fusion method lead to a more robust reconstruction process, and that our revised pipeline produces high-quality volumetric assets. By integrating one of our assets into a virtual scene, we provide a first impression of the documentary's look and the convincing appearance of the protagonists in the virtual environment.

Item: Local Remote Photoplethysmography Signal Analysis for Application in Presentation Attack Detection (The Eurographics Association, 2019)
Authors: Kossack, Benjamin; Wisotzky, Eric L.; Hilsmann, Anna; Eisert, Peter
Editors: Schulz, Hans-Jörg; Teschner, Matthias; Wimmer, Michael

This paper presents a method to analyze and visualize the local blood flow through human skin tissue within the face and neck. The method is based on local signal characteristics and extracts and analyzes the local propagation of blood flow from video recordings.
In a first step, the global pulse rate is identified in RGB images using normalized green-channel intensities. For an image sequence, we then calculate a local remote photoplethysmography (rPPG) signal, represented by a chrominance-based signal. This local rPPG signal is analyzed and used to extract the local blood flow propagation in the form of signal-to-noise ratio (SNR) and pulse transit time (PTT) maps. These maps visualize the propagation of the blood flow (PTT) and reveal the signal quality at each spatial position (SNR). We further propose a novel pulse-rate-based skin segmentation method built on the global pulse rate and the statistical properties of the SNR map. This skin segmentation method allows a direct application in liveness detection, e.g., for presentation attack detection (PAD). Based on the described local blood flow analysis, we propose a PAD system that specializes in identifying partial face and neck coverage in video. The system is tested on datasets showing a person with different facial coverings, such as a mask or a thick layer of makeup. All tested masks are detected and identified as presentation attacks.

Item: Towards L-System Captioning for Tree Reconstruction (The Eurographics Association, 2023)
Authors: Magnusson, Jannes S.; Hilsmann, Anna; Eisert, Peter
Editors: Babaei, Vahid; Skouras, Melina

This work proposes a novel concept for tree and plant reconstruction by directly inferring a Lindenmayer system (L-System) word representation from image data in an image-captioning approach. We train a model end-to-end that translates given images into L-System words describing the displayed tree. To prove this concept, we demonstrate the applicability on 2D tree topologies.
Transferred to real image data, this novel idea could lead to more efficient, accurate, and semantically meaningful tree and plant reconstruction without error-prone point cloud extraction and other processes usually required in tree reconstruction. Furthermore, the approach bypasses the need for a predefined L-System grammar and enables species-specific L-System inference without biological knowledge.

Item: Video-Driven Animation of Neural Head Avatars (The Eurographics Association, 2023)
Authors: Paier, Wolfgang; Hinzer, Paul; Hilsmann, Anna; Eisert, Peter
Editors: Guthe, Michael; Grosch, Thorsten

We present a new approach for video-driven animation of high-quality neural 3D head models, addressing the challenge of person-independent animation from video input. Typically, high-quality generative models are learned for specific individuals from multi-view video footage, resulting in person-specific latent representations that drive the generation process. To achieve person-independent animation from video input, we introduce an LSTM-based animation network capable of translating person-independent expression features into personalized animation parameters of person-specific 3D head models. Our approach combines the advantages of personalized head models (high quality and realism) with the convenience of video-driven animation employing multi-person facial performance capture. We demonstrate the effectiveness of our approach on high-quality synthesized animations driven by different source videos, as well as in an ablation study.
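The presentation attack detection abstract above mentions a chrominance-based rPPG signal extracted from RGB traces. As background, here is a minimal sketch of one common chrominance-based formulation (in the style of the CHROM method); whether this is the exact variant used in the paper is an assumption, and the synthetic input below is purely illustrative.

```python
import numpy as np

def chrom_rppg(rgb_traces: np.ndarray) -> np.ndarray:
    """Chrominance-style pulse signal from per-frame mean RGB values.

    rgb_traces: (T, 3) array of mean R, G, B values for one skin region.
    Returns a 1-D pulse signal of length T.
    """
    # Normalize each channel by its temporal mean to reduce illumination bias.
    norm = rgb_traces / rgb_traces.mean(axis=0)
    r, g, b = norm[:, 0], norm[:, 1], norm[:, 2]
    x = 3.0 * r - 2.0 * g            # first chrominance component
    y = 1.5 * r + g - 1.5 * b        # second chrominance component
    alpha = x.std() / y.std()        # balance the two components
    return x - alpha * y

# Synthetic example: a 1.2 Hz pulse riding mostly on the green channel.
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
pulse = 0.02 * np.sin(2 * np.pi * 1.2 * t)
rgb = np.stack([1.0 + 0.3 * pulse, 1.0 + pulse, 1.0 + 0.1 * pulse], axis=1)
signal = chrom_rppg(rgb)

# Dominant frequency of the recovered signal should sit near 1.2 Hz.
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
peak_hz = freqs[np.abs(np.fft.rfft(signal - signal.mean())).argmax()]
print(round(peak_hz, 2))  # dominant frequency near 1.2 Hz
```

An SNR map as described in the abstract could then score, per pixel region, how much spectral energy of this signal is concentrated around the detected global pulse rate.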
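The L-System captioning abstract above infers L-System words describing a tree from images. For readers unfamiliar with L-Systems, here is a minimal sketch of how such a word is expanded by parallel rewriting; the bracketed rule is a classic textbook example of a 2D branching grammar, not one taken from the paper.

```python
# Minimal L-System expansion sketch (illustrative; not the paper's model).
def expand(axiom: str, rules: dict[str, str], iterations: int) -> str:
    """Rewrite every symbol in parallel for a number of iterations."""
    word = axiom
    for _ in range(iterations):
        word = "".join(rules.get(symbol, symbol) for symbol in word)
    return word

# Classic bracketed tree rule: F grows a segment with a left and a right branch.
# '+' / '-' turn the drawing direction, '[' / ']' push and pop turtle state.
rules = {"F": "F[+F]F[-F]F"}
print(expand("F", rules, 1))       # F[+F]F[-F]F
print(len(expand("F", rules, 2)))  # 61: each of the 5 F's expands again
```

A captioning model as proposed in the paper would emit such words directly from an image, so the tree's topology is recovered as a compact, semantically meaningful string rather than a point cloud.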