Browsing by Author "Hašan, Miloš"
Item
Controlling Material Appearance by Examples (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Hu, Yiwei; Hašan, Miloš; Guerrero, Paul; Rushmeier, Holly; Deschaintre, Valentin; Ghosh, Abhijeet; Wei, Li-Yi
Despite the ubiquitous use of material maps in modern rendering pipelines, editing and controlling them remains a challenge. In this paper, we present an example-based material control method that augments input material maps based on user-provided material photos. We train a tileable version of MaterialGAN and leverage its material prior to guide the appearance transfer, optimizing its latent space using differentiable rendering. Our method transfers the micro- and meso-structure textures of the user-provided target photograph(s) while preserving the structure and quality of the input material. We show that our method can control existing material maps, increasing their realism or generating new, visually appealing materials.

Item
A Semi‐Procedural Convolutional Material Prior (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023)
Zhou, Xilong; Hašan, Miloš; Deschaintre, Valentin; Guerrero, Paul; Sunkavalli, Kalyan; Kalantari, Nima Khademi; Hauser, Helwig and Alliez, Pierre
Lightweight material capture methods require a material prior that defines the subspace of plausible textures within the large space of unconstrained texel grids. Previous work has used either deep neural networks (trained on large synthetic material datasets) or procedural node graphs (constructed by expert artists) as such priors. In this paper, we propose a semi-procedural differentiable material prior that represents materials as a set of (typically procedural) grayscale noises and patterns processed by a sequence of lightweight, learnable convolutional filter operations. We demonstrate that the restricted structure of this architecture acts as an inductive bias on the space of material appearances, allowing us to optimize the convolution weights per material, with no need for pre-training on a large dataset. Combined with a differentiable rendering step and a perceptual loss, this enables single-image tileable material capture comparable with the state of the art. Our approach does not target pixel-perfect recovery of the material; rather, it uses noises and patterns as input to match the target appearance. It therefore requires no complex procedural graphs and has much lower complexity, computational cost, and storage cost. We also enable control over the results by changing the provided patterns and by using guide maps to push the material properties towards a user-driven objective.
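The core idea of the second abstract — convolution weights fitted per material against a target appearance, with no pre-training — can be illustrated with a toy sketch. This is not the paper's method: the sequence of filters, the differentiable renderer, and the perceptual loss are replaced here by a single 3x3 kernel and a plain MSE loss, and all names (`conv2d`, `true_kernel`) are illustrative.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D cross-correlation with a 3x3 kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

rng = np.random.default_rng(0)
noise = rng.standard_normal((16, 16))      # stand-in for a procedural grayscale noise
true_kernel = np.array([[0., 1., 0.],
                        [1., -4., 1.],
                        [0., 1., 0.]])     # hidden filter defining the "target appearance"
target = conv2d(noise, true_kernel)        # stand-in for the captured target image

# Per-material optimization: fit the kernel weights by gradient descent on an
# MSE loss. Since the output is linear in the kernel, the gradient of 0.5*MSE
# is the error correlated with the corresponding input patches.
kernel = np.zeros((3, 3))
lr = 0.5
for _ in range(200):
    err = conv2d(noise, kernel) - target
    grad = np.zeros((3, 3))
    for i in range(err.shape[0]):
        for j in range(err.shape[1]):
            grad += err[i, j] * noise[i:i + 3, j:j + 3]
    kernel -= lr * grad / err.size

print(np.round(kernel, 3))                 # recovers the hidden filter
```

The point of the toy is the inductive bias: only nine weights are optimized, so a single target image suffices and no dataset or pre-training is involved, mirroring the per-material optimization the abstract describes.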