Browsing by Author "Guerrero, Paul"
Now showing 1 - 4 of 4
Item: Controlling Material Appearance by Examples (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Hu, Yiwei; Hašan, Miloš; Guerrero, Paul; Rushmeier, Holly; Deschaintre, Valentin; Ghosh, Abhijeet; Wei, Li-Yi
Abstract: Despite the ubiquitous use of material maps in modern rendering pipelines, their editing and control remain a challenge. In this paper, we present an example-based material control method to augment input material maps based on user-provided material photos. We train a tileable version of MaterialGAN and leverage its material prior to guide the appearance transfer, optimizing its latent space using differentiable rendering. Our method transfers the micro- and meso-structure textures of the user-provided target photograph(s) while preserving the structure and quality of the input material. We show that our method can control existing material maps, increasing realism or generating new, visually appealing materials.
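The appearance transfer described in this abstract boils down to optimizing the latent code of a generative material model so that a differentiably rendered image of the generated maps matches the user-provided photo. Below is a minimal PyTorch sketch of that optimization loop; the `generator`, `render`, and `appearance_loss` callables are placeholders for illustration, not the authors' actual implementation.

```python
import torch

def transfer_appearance(generator, render, appearance_loss, target_photo,
                        z_init, n_steps=500, lr=0.02):
    """Optimize a material latent code against a target photo.

    generator(z)          -> SVBRDF maps (albedo, normals, roughness, ...)
    render(maps)          -> differentiably rendered image of those maps
    appearance_loss(a, b) -> scalar distance between two images
    All three are assumed to be differentiable; the names are hypothetical.
    """
    z = z_init.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(n_steps):
        optimizer.zero_grad()
        maps = generator(z)                 # material maps from the latent code
        image = render(maps)                # re-render under a known light/view
        loss = appearance_loss(image, target_photo)
        loss.backward()                     # gradients flow through the renderer
        optimizer.step()
    with torch.no_grad():
        return generator(z)                 # augmented material maps
```

In the paper, the generator is a tileable MaterialGAN and the loss compares appearance statistics rather than raw pixels; both are abstracted away here.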
Item: Neurosymbolic Models for Computer Graphics (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Ritchie, Daniel; Guerrero, Paul; Jones, R. Kenny; Mitra, Niloy J.; Schulz, Adriana; Willis, Karl D. D.; Wu, Jiajun; Bousseau, Adrien; Theobalt, Christian
Abstract: Procedural models (i.e. symbolic programs that output visual data) are a historically popular method for representing graphics content: vegetation, buildings, textures, etc. They offer many advantages: interpretable design parameters, stochastic variations, high-quality outputs, compact representation, and more. But they also have some limitations, such as the difficulty of authoring a procedural model from scratch. More recently, AI-based methods, and especially neural networks, have become popular for creating graphics content. These techniques allow users to directly specify desired properties of the artifact they want to create (via examples, constraints, or objectives), while a search, optimization, or learning algorithm takes care of the details. However, this ease of use comes at a cost, as it is often hard to interpret or manipulate these representations. In this state-of-the-art report, we summarize research on neurosymbolic models in computer graphics: methods that combine the strengths of both AI and symbolic programs to represent, generate, and manipulate visual data. We survey recent work applying these techniques to represent 2D shapes, 3D shapes, and materials & textures. Along the way, we situate each prior work in a unified design space for neurosymbolic models, which helps reveal underexplored areas and opportunities for future research.

Item: PointCleanNet: Learning to Denoise and Remove Outliers from Dense Point Clouds (© 2020 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Rakotosaona, Marie-Julie; La Barbera, Vittorio; Guerrero, Paul; Mitra, Niloy J.; Ovsjanikov, Maks; Benes, Bedrich and Hauser, Helwig
Abstract: Point clouds obtained with 3D scanners or by image-based reconstruction techniques are often corrupted with a significant amount of noise and outliers. Traditional methods for point cloud denoising largely rely on local surface fitting (e.g. jets or MLS surfaces), local or non-local averaging, or statistical assumptions about the underlying noise model. In contrast, we develop a simple data-driven method for removing outliers and reducing noise in unordered point clouds. We base our approach on a deep learning architecture adapted from PCPNet, which was recently proposed for estimating local 3D shape properties in point clouds. Our method first classifies and discards outlier samples, and then estimates correction vectors that project noisy points onto the original clean surfaces. The approach is efficient and robust to varying amounts of noise and outliers, while being able to handle large, densely sampled point clouds. In our extensive evaluation, on both synthetic and real data, we show increased robustness to strong noise levels compared to various state-of-the-art methods, enabling accurate surface reconstruction from extremely noisy real data obtained by range scans. Finally, the simplicity and universality of our approach make it very easy to integrate into any existing geometry processing pipeline. Both the code and pre-trained networks can be found on the project page.

Item: A Semi-Procedural Convolutional Material Prior (© 2023 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023)
Zhou, Xilong; Hašan, Miloš; Deschaintre, Valentin; Guerrero, Paul; Sunkavalli, Kalyan; Kalantari, Nima Khademi; Hauser, Helwig and Alliez, Pierre
Abstract: Lightweight material capture methods require a material prior, defining the subspace of plausible textures within the large space of unconstrained texel grids. Previous work has used either deep neural networks (trained on large synthetic material datasets) or procedural node graphs (constructed by expert artists) as such priors. In this paper, we propose a semi-procedural differentiable material prior that represents materials as a set of (typically procedural) grayscale noises and patterns that are processed by a sequence of lightweight learnable convolutional filter operations. We demonstrate that the restricted structure of this architecture acts as an inductive bias on the space of material appearances, allowing us to optimize the weights of the convolutions per material, with no need for pre-training on a large dataset. Combined with a differentiable rendering step and a perceptual loss, we enable single-image tileable material capture comparable with the state of the art. Our approach does not target pixel-perfect recovery of the material, but rather uses noises and patterns as input to match the target appearance. To achieve this, it does not require complex procedural graphs, and has much lower complexity, computational cost, and storage cost. We also enable control over the results by changing the provided patterns and by using guide maps to push the material properties towards a user-driven objective.
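The prior in this last item amounts to a short stack of learnable convolutions applied to fixed (typically procedural) grayscale noises and patterns; only those convolution weights are fitted per material, for example against a differentiable-rendering plus perceptual loss on a single photo. Below is a minimal PyTorch sketch of such a restricted generator; the layer count, channel widths, and output-channel layout are illustrative choices, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class SemiProceduralPrior(nn.Module):
    """Maps a stack of fixed grayscale noises/patterns to SVBRDF channels
    through a few learnable convolutions (sizes are illustrative)."""

    def __init__(self, n_patterns=8, hidden=32, n_out=9):
        # n_out = e.g. albedo (3) + normal (3) + roughness (1) + specular (2),
        # an illustrative split only.
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_patterns, hidden, kernel_size=3, padding=1,
                      padding_mode='circular'),  # circular padding helps keep tileability
            nn.LeakyReLU(0.2),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1,
                      padding_mode='circular'),
            nn.LeakyReLU(0.2),
            nn.Conv2d(hidden, n_out, kernel_size=1),
        )

    def forward(self, patterns):
        # patterns: (1, n_patterns, H, W) stack of procedural noises/patterns
        return self.net(patterns)

# Per-material fitting: only these convolution weights are optimized,
# e.g. against a differentiable-rendering + perceptual loss on one photo.
prior = SemiProceduralPrior()
optimizer = torch.optim.Adam(prior.parameters(), lr=1e-3)
```

Circular padding is one simple way to keep the output tileable when the input noises are themselves tileable; the actual paper's filter design and losses are not reproduced here.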