Browsing by Author "Aittala, Miika"
Now showing 1 - 6 of 6
Item: Appearance-Driven Automatic 3D Model Simplification (The Eurographics Association, 2021)
Authors: Hasselgren, Jon; Munkberg, Jacob; Lehtinen, Jaakko; Aittala, Miika; Laine, Samuli
Editors: Bousseau, Adrien; McGuire, Morgan
Abstract: We present a suite of techniques for jointly optimizing triangle meshes and shading models to match the appearance of reference scenes. This capability has a number of uses, including appearance-preserving simplification of extremely complex assets, conversion between rendering systems, and even conversion between geometric scene representations. We follow and extend the classic analysis-by-synthesis family of techniques: enabled by a highly efficient differentiable renderer and modern nonlinear optimization algorithms, our results are driven to minimize the image-space difference to the target scene when rendered in similar viewing and lighting conditions. As the only signals driving the optimization are differences in rendered images, the approach is highly general and versatile: it easily supports many different forward rendering models such as normal mapping, spatially-varying BRDFs, displacement mapping, etc. Supervision through images only is also key to the ability to easily convert between rendering systems and scene representations. We output triangle meshes with textured materials to ensure that the models render efficiently on modern graphics hardware and benefit from, e.g., hardware-accelerated rasterization, ray tracing, and filtered texture lookups. Our system is integrated in a small Python code base, and can be applied at high resolutions and on large models. We describe several use cases, including mesh decimation, level-of-detail generation, seamless mesh filtering, and approximations of aggregate geometry.
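As an illustration of the analysis-by-synthesis loop described in this abstract, the sketch below optimizes mesh and material parameters purely from image-space differences. It is a minimal sketch, not the authors' code: `render_fn` stands in for a differentiable renderer supplied by the caller, and the function and parameter names are hypothetical.

```python
import torch

def optimize_appearance(render_fn, vertices, faces, texture,
                        cameras, target_images, steps=1000, lr=1e-3):
    """Analysis-by-synthesis sketch: jointly adjust vertex positions and a
    texture so that renders match reference images under matching views.
    `render_fn(vertices, faces, texture, camera)` must be a differentiable
    renderer returning an image tensor; it is a placeholder here."""
    vertices = vertices.clone().requires_grad_(True)
    texture = texture.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([vertices, texture], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        loss = torch.zeros((), device=vertices.device)
        for camera, target in zip(cameras, target_images):
            rendered = render_fn(vertices, faces, texture, camera)
            # The only supervision is the image-space difference.
            loss = loss + torch.nn.functional.l1_loss(rendered, target)
        loss.backward()
        optimizer.step()

    return vertices.detach(), texture.detach()
```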
Item: Computational Methods for Capture and Reproduction of Photorealistic Surface Appearance (2016-10-28)
Author: Aittala, Miika
Abstract: This thesis addresses the problem of capturing and reproducing surface material appearance from real-world examples for use in computer graphics applications. Detailed variation of color, shininess and small-scale shape is a critically important factor in the visual plausibility of objects in synthetic images. Capturing these properties relies on measuring reflected light under various viewing and illumination conditions. Existing methods typically employ either complex mechanical devices, or heuristics that sacrifice fidelity for simplicity. Consequently, computer graphics practitioners continue to use manual authoring tools. The thesis introduces three methods for capturing visually rich surface appearance descriptors using simple hardware setups and relatively little measurement data. The specific focus is on capturing detailed spatial variation of the reflectance properties, as opposed to angular variation, which is the primary focus of most previous work. We apply tools from modern data science, in particular principled optimization-based approaches, to disentangle and explain the various reflectance effects in the scarce measurement data. The first method uses a flat-panel monitor as a programmable light source, and an SLR camera to observe reflections off the captured surface. The monitor is used to emit Fourier basis function patterns, which are well suited for isolating the reflectance properties of interest, and also exhibit a rich set of mathematical properties that enable computationally efficient interpretation of the data. The other two methods rely on the observation that the spatial variation of many real-world materials is stationary, in the sense that it consists of small elements repeating across the surface. By taking advantage of this redundancy, the methods demonstrate high-quality appearance capture from two photographs, and from only a single photograph, respectively. The photographs are acquired using a mobile phone camera. The resulting reflectance descriptors faithfully reproduce the appearance of the surface under novel viewing and illumination conditions. We demonstrate state-of-the-art results among approaches with similar hardware complexity. The descriptors captured by the methods are directly usable in computer graphics applications, including games, film, and virtual and augmented reality.

Item: Data-driven Pixel Filter Aware MIP Maps for SVBRDFs (The Eurographics Association, 2023)
Authors: Kemppinen, Pauli; Aittala, Miika; Lehtinen, Jaakko
Editors: Ritschel, Tobias; Weidlich, Andrea
Abstract: We propose a data-driven approach for generating MIP map pyramids from SVBRDF parameter maps. We learn a latent material representation in which linear image downsampling corresponds to linear prefiltering of surface reflectance. In contrast to prior work, we explicitly model the effect of the antialiasing pixel filter also at the finest resolution. This yields high-quality results even in images that are shaded only once per pixel with no further processing. The SVBRDF maps produced by our method can be used as drop-in replacements within existing rendering systems, and the data-driven nature of our framework makes it possible to change the shading model with little effort. As a proof of concept, we also demonstrate using a shared latent representation for two different shading models, allowing for automatic conversion.

Item: Flexible SVBRDF Capture with a Multi-Image Deep Network (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Deschaintre, Valentin; Aittala, Miika; Durand, Fredo; Drettakis, George; Bousseau, Adrien
Editors: Boubekeur, Tamy; Sen, Pradeep
Abstract: Empowered by deep learning, recent methods for material capture can estimate a spatially-varying reflectance from a single photograph. Such lightweight capture is in stark contrast with the tens or hundreds of pictures required by traditional optimization-based approaches. However, a single image is often simply not enough to observe the rich appearance of real-world materials. We present a deep-learning method capable of estimating material appearance from a variable number of uncalibrated and unordered pictures captured with a handheld camera and flash. Thanks to an order-independent fusing layer, this architecture extracts the most useful information from each picture, while benefiting from strong priors learned from data. The method can handle both view and light direction variation without calibration. We show how our method improves its prediction with the number of input pictures, and reaches high-quality reconstructions with as few as 1 to 10 images, a sweet spot between existing single-image and complex multi-image approaches.
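The order-independent fusing layer mentioned in the abstract above can be illustrated with a short, hypothetical sketch (not the published architecture): per-image features are extracted by a shared encoder and max-pooled across the image dimension, so the result does not depend on how many pictures are given or in which order. The `encoder` and `decoder` modules are assumed to be supplied by the caller.

```python
import torch
import torch.nn as nn

class OrderIndependentFusion(nn.Module):
    """Illustrative permutation-invariant fusion over a variable number of
    input photographs; a minimal sketch, not the authors' network."""

    def __init__(self, encoder: nn.Module, decoder: nn.Module):
        super().__init__()
        self.encoder = encoder  # shared per-image feature extractor
        self.decoder = decoder  # maps fused features to SVBRDF parameter maps

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (N, C, H, W), N uncalibrated and unordered photographs
        features = self.encoder(images)                    # (N, F, h, w)
        fused = features.max(dim=0, keepdim=True).values   # order-independent pooling
        return self.decoder(fused)                         # e.g. SVBRDF maps
```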
Item: MesoGAN: Generative Neural Reflectance Shells (© 2023 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023)
Authors: Diolatzis, Stavros; Novak, Jan; Rousselle, Fabrice; Granskog, Jonathan; Aittala, Miika; Ramamoorthi, Ravi; Drettakis, George
Editors: Hauser, Helwig; Alliez, Pierre
Abstract: We introduce MesoGAN, a model for generative 3D neural textures. This new graphics primitive represents mesoscale appearance by combining the strengths of generative adversarial networks (StyleGAN) and volumetric neural field rendering. The primitive can be applied to surfaces as a neural reflectance shell: a thin volumetric layer above the surface with appearance parameters defined by a neural network. To construct the neural shell, we first generate a 2D feature texture using StyleGAN with carefully randomized Fourier features to support arbitrarily sized textures without repeating artefacts. We augment the 2D feature texture with a learned height feature, which aids the neural field renderer in producing volumetric parameters from the 2D texture. To facilitate filtering, and to enable end-to-end training within the memory constraints of current hardware, we utilize a hierarchical texturing approach and train our model on multi-scale synthetic datasets of 3D mesoscale structures. We propose one possible approach for conditioning MesoGAN on artistic parameters (e.g. fibre length, density of strands, lighting direction) and demonstrate and discuss integration into physically based renderers.

Item: Video-Based Rendering of Dynamic Stationary Environments from Unsynchronized Inputs (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Authors: Thonat, Theo; Aksoy, Yagiz; Aittala, Miika; Paris, Sylvain; Durand, Fredo; Drettakis, George
Editors: Bousseau, Adrien; McGuire, Morgan
Abstract: Image-Based Rendering allows users to easily capture a scene using a single camera and then navigate freely with realistic results. However, the resulting renderings are completely static, and dynamic effects, such as fire, waterfalls or small waves, cannot be reproduced. We tackle the challenging problem of enabling free-viewpoint navigation including such stationary dynamic effects, while still maintaining the simplicity of casual capture. Using a single camera, instead of previous complex synchronized multi-camera setups, means that we have unsynchronized videos of the dynamic effect from multiple views, making it hard to blend them when synthesizing novel views. We present a solution that allows smooth free-viewpoint video-based rendering (VBR) of such scenes using a temporal Laplacian pyramid decomposition of the videos, enabling spatio-temporal blending. For effects such as fire and waterfalls, that are semi-transparent and occupy 3D space, we first estimate their spatial volume. This allows us to create per-video geometries and alpha-matte videos that we can blend using our frequency-dependent method. We also extend Laplacian blending to the temporal dimension to remove additional temporal seams. We show results on scenes containing fire, waterfalls or rippling waves at the seaside, bringing these scenes to life.
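To illustrate the temporal extension of Laplacian blending described above, here is a small numpy/scipy sketch (an assumption-laden simplification, not the authors' implementation): it decomposes two already-registered clips of the same view into temporal frequency bands and blends each band with progressively smoother per-frame weights, so that temporal seams are hidden at the scale where they occur.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def temporal_laplacian_stack(video, levels=4):
    """Decompose a (T, H, W, C) video into temporal frequency bands.
    The bands plus the final low-pass residual sum back to the input."""
    bands, current = [], video.astype(np.float32)
    for level in range(levels):
        low = gaussian_filter1d(current, sigma=2.0 * 2 ** level, axis=0)
        bands.append(current - low)   # temporal detail at this scale
        current = low
    bands.append(current)             # low-frequency temporal residual
    return bands

def blend_temporal(video_a, video_b, weight, levels=4):
    """Blend two clips of the same view using per-frame weights in [0, 1],
    merging each temporal band separately with weights smoothed more at
    coarser scales, which suppresses temporal seams."""
    bands_a = temporal_laplacian_stack(video_a, levels)
    bands_b = temporal_laplacian_stack(video_b, levels)
    out = np.zeros_like(bands_a[0])
    for level, (a, b) in enumerate(zip(bands_a, bands_b)):
        w = gaussian_filter1d(weight.astype(np.float32), sigma=2.0 * 2 ** level)
        w = w.reshape(-1, 1, 1, 1)
        out += w * a + (1.0 - w) * b
    return out
```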