Browsing by Author "Simon, Tomas"
Now showing 1 - 2 of 2
Item
Advances in Neural Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Tewari, Ayush; Thies, Justus; Mildenhall, Ben; Srinivasan, Pratul; Tretschk, Edith; Wang, Yifan; Lassner, Christoph; Sitzmann, Vincent; Martin-Brualla, Ricardo; Lombardi, Stephen; Simon, Tomas; Theobalt, Christian; Nießner, Matthias; Barron, Jon T.; Wetzstein, Gordon; Zollhöfer, Michael; Golyanik, Vladislav; Meneveaux, Daniel; Patanè, Giuseppe
Synthesizing photo-realistic images and videos is at the heart of computer graphics and has been the focus of decades of research. Traditionally, synthetic images of a scene are generated using rendering algorithms such as rasterization or ray tracing, which take specifically defined representations of geometry and material properties as input. Collectively, these inputs define the actual scene and what is rendered, and are referred to as the scene representation (where a scene consists of one or more objects). Example scene representations are triangle meshes with accompanying textures (e.g., created by an artist), point clouds (e.g., from a depth sensor), volumetric grids (e.g., from a CT scan), or implicit surface functions (e.g., truncated signed distance fields). The reconstruction of such a scene representation from observations using differentiable rendering losses is known as inverse graphics or inverse rendering. Neural rendering is closely related, and combines ideas from classical computer graphics and machine learning to create algorithms for synthesizing images from real-world observations. Neural rendering is a leap forward towards the goal of synthesizing photo-realistic image and video content. In recent years, we have seen immense progress in this field through hundreds of publications that show different ways to inject learnable components into the rendering pipeline. This state-of-the-art report on advances in neural rendering focuses on methods that combine classical rendering principles with learned 3D scene representations, often now referred to as neural scene representations. A key advantage of these methods is that they are 3D-consistent by design, enabling applications such as novel viewpoint synthesis of a captured scene. In addition to methods that handle static scenes, we cover neural scene representations for modeling non-rigidly deforming objects, as well as scene editing and composition. While most of these approaches are scene-specific, we also discuss techniques that generalize across object classes and can be used for generative tasks. In addition to reviewing these state-of-the-art methods, we provide an overview of fundamental concepts and definitions used in the current literature. We conclude with a discussion on open challenges and social implications.
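As a rough illustration of the classical rendering principle most of these neural scene representations build on, here is a minimal NumPy sketch of NeRF-style volume compositing along a single ray; all sample values are made up and nothing here is taken from the report itself:

    import numpy as np

    def composite_ray(sigmas, colors, deltas):
        """Alpha-composite density/color samples along one ray (NeRF-style)."""
        alphas = 1.0 - np.exp(-sigmas * deltas)   # opacity of each sample
        # transmittance T_i = prod_{j<i} (1 - alpha_j): light surviving to sample i
        trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
        weights = trans * alphas                  # contribution of each sample
        return (weights[:, None] * colors).sum(axis=0)  # expected ray color

    # Example: four hypothetical samples along one ray
    sigmas = np.array([0.1, 0.5, 2.0, 0.3])       # densities
    colors = np.array([[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0],
                       [0.0, 0.0, 1.0],
                       [1.0, 1.0, 1.0]])          # RGB per sample
    deltas = np.full(4, 0.25)                     # spacing between samples
    print(composite_ray(sigmas, colors, deltas))

The per-sample weights computed this way are also what makes such representations 3D-consistent by construction: the same underlying densities are composited no matter which viewpoint the rays come from.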
Item
State of the Art on Neural Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Tewari, Ayush; Fried, Ohad; Thies, Justus; Sitzmann, Vincent; Lombardi, Stephen; Sunkavalli, Kalyan; Martin-Brualla, Ricardo; Simon, Tomas; Saragih, Jason; Nießner, Matthias; Pandey, Rohit; Fanello, Sean; Wetzstein, Gordon; Zhu, Jun-Yan; Theobalt, Christian; Agrawala, Maneesh; Shechtman, Eli; Goldman, Dan B.; Zollhöfer, Michael; Mantiuk, Rafal; Sundstedt, Veronica
Efficient rendering of photo-realistic virtual worlds is a long-standing effort of computer graphics. Modern graphics techniques have succeeded in synthesizing photo-realistic images from hand-crafted scene representations. However, the automatic generation of shape, materials, lighting, and other aspects of scenes remains a challenging problem that, if solved, would make photo-realistic computer graphics more widely accessible. Concurrently, progress in computer vision and machine learning has given rise to a new approach to image synthesis and editing, namely deep generative models. Neural rendering is a new and rapidly emerging field that combines generative machine learning techniques with physical knowledge from computer graphics, e.g., by the integration of differentiable rendering into network training. With a plethora of applications in computer graphics and vision, neural rendering is poised to become a new area in the graphics community, yet no survey of this emerging field exists. This state-of-the-art report summarizes the recent trends and applications of neural rendering. We focus on approaches that combine classic computer graphics techniques with deep generative models to obtain controllable and photo-realistic outputs. Starting with an overview of the underlying computer graphics and machine learning concepts, we discuss critical aspects of neural rendering approaches. Specifically, our emphasis is on the type of control, i.e., how the control is provided, which parts of the pipeline are learned, explicit vs. implicit control, generalization, and stochastic vs. deterministic synthesis. The second half of this state-of-the-art report focuses on the many important use cases of the described algorithms, such as novel view synthesis, semantic photo manipulation, facial and body reenactment, relighting, free-viewpoint video, and the creation of photo-realistic avatars for virtual and augmented reality telepresence. Finally, we conclude with a discussion of the social implications of such technology and investigate open research problems.
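The key technical ingredient this abstract names is the integration of differentiable rendering into network training: because the renderer is differentiable, image-space errors can be backpropagated to scene or network parameters. A deliberately minimal, purely illustrative sketch of that loop, with a one-pixel Lambertian "renderer" and a hand-derived gradient (all values hypothetical, not a method from the report):

    import numpy as np

    # Toy differentiable "renderer": one Lambertian pixel, intensity = albedo * light.
    def render(albedo, light):
        return albedo * light

    light = 0.9        # known lighting
    target = 0.42      # observed pixel intensity
    albedo = 0.0       # unknown scene parameter to recover

    lr = 0.5
    for _ in range(200):
        residual = render(albedo, light) - target
        grad = 2.0 * residual * light   # analytic gradient of the squared error
        albedo -= lr * grad             # gradient step through the renderer

    print(albedo, target / light)       # recovered albedo converges to target / light

Real systems replace the one-pixel closed form with a full differentiable rasterizer or volume renderer and let automatic differentiation supply the gradients, but the structure of the training loop is the same.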