42-Issue 2


Human-Object Interaction
IMoS: Intent-Driven Full-Body Motion Synthesis for Human-Object Interactions
Anindita Ghosh, Rishabh Dabral, Vladislav Golyanik, Christian Theobalt, and Philipp Slusallek
Online Avatar Motion Adaptation to Morphologically-similar Spaces
Soojin Choi, Seokpyo Hong, Kyungmin Cho, Chaelin Kim, and Junyong Noh
Learning to Transfer In-Hand Manipulations Using a Greedy Shape Curriculum
Yunbo Zhang, Alexander Clegg, Sehoon Ha, Greg Turk, and Yuting Ye
Logos and Clip-Art
Img2Logo: Generating Golden Ratio Logos from Images
Kai-Wen Hsiao, Yong-Liang Yang, Yung-Chih Chiu, Min-Chun Hu, Chih-Yuan Yao, and Hung-Kuo Chu
Interactive Depixelization of Pixel Art through Spring Simulation
Marko Matusovic, Amal Dev Parakkat, and Elmar Eisemann
Subpixel Deblurring of Anti-Aliased Raster Clip-Art
Jinfan Yang, Nicholas Vining, Shakiba Kheradmand, Nathan Carr, Leonid Sigal, and Alla Sheffer
Shape Correspondence
Unsupervised Template Warp Consistency for Implicit Surface Correspondences
Mengya Liu, Ajad Chhatkuli, Janis Postels, Luc Van Gool, and Federico Tombari
Scalable and Efficient Functional Map Computations on Dense Meshes
Robin Magnet and Maks Ovsjanikov
Surface Maps via Adaptive Triangulations
Patrick Schmidt, Dörte Pieper, and Leif Kobbelt
Image and Video Processing
Video Frame Interpolation for High Dynamic Range Sequences Captured with Dual-exposure Sensors
Ugur Cogalan, Mojtaba Bemana, Hans-Peter Seidel, and Karol Myszkowski
Simulating Analogue Film Damage to Analyse and Improve Artefact Restoration on High-resolution Scans
Daniela Ivanova, John Williamson, and Paul Henderson
Learning Deformations and Fluids
How Will It Drape Like? Capturing Fabric Mechanics from Depth Images
Carlos Rodriguez-Pardo, Melania Prieto-Martín, Dan Casas, and Elena Garces
Physics-Informed Neural Corrector for Deformation-based Fluid Control
Jingwei Tang, Byungsoo Kim, Vinicius C. Azevedo, and Barbara Solenthaler
Reconstruction and Remeshing
Robust Pointset Denoising of Piecewise-Smooth Surfaces through Line Processes
Jiayi Wei, Jiong Chen, Damien Rohmer, Pooran Memari, and Mathieu Desbrun
One Step Further Beyond Trilinear Interpolation and Central Differences: Triquadratic Reconstruction and its Analytic Derivatives at the Cost of One Additional Texture Fetch
Balázs Csébfalvi
BRDFs and Environment Maps
Learning to Learn and Sample BRDFs
Chen Liu, Michael Fischer, and Tobias Ritschel
CubeGAN: Omnidirectional Image Synthesis Using Generative Adversarial Networks
Christopher May and Daniel Aliaga
Simulation: Material Interactions
An Optimization-based SPH Solver for Simulation of Hyperelastic Solids
Min Hyung Kee, Kiwon Um, HyunMo Kang, and JungHyun Han
3D Representation and Acceleration Structures
Editing Compressed High-resolution Voxel Scenes with Attributes
Mathijs Molenaar and Elmar Eisemann
Parallel Transformation of Bounding Volume Hierarchies into Oriented Bounding Box Trees
Nick Vitsas, Iordanis Evangelou, Georgios Papaioannou, and Anastasios Gkaravelis
Stochastic Subsets for BVH Construction
Lorenzo Tessari, Addis Dittebrandt, Michael J. Doyle, and Carsten Benthin
Faces
Face Editing Using Part-Based Optimization of the Latent Space
Mohammad Amin Aliari, Andre Beauchamp, Tiberiu Popa, and Eric Paquette
What's in a Decade? Transforming Faces Through Time
Eric Chen, Jin Sun, Apoorv Khandelwal, Dani Lischinski, Noah Snavely, and Hadar Averbuch-Elor
Makeup Extraction of 3D Representation via Illumination-Aware Image Decomposition
Xingchao Yang, Takafumi Taketomi, and Yoshihiro Kanamori
Topological and Geometric Shape Understanding
A Variational Loop Shrinking Analogy for Handle and Tunnel Detection and Reeb Graph Construction on Surfaces
Alexander Weinrauch, Daniel Mlakar, Hans-Peter Seidel, Markus Steinberger, and Rhaleb Zayer
Evolving Guide Subdivision
Kestutis Karciauskas and Jorg Peters
Materials and Textures
In-the-wild Material Appearance Editing using Perceptual Attributes
José Daniel Subías and Manuel Lagunas
Preserving the Autocovariance of Texture Tilings Using Importance Sampling
Nicolas Lutz, Basile Sauvage, and Jean-Michel Dischler
Capturing Human Pose and Appearance
Variational Pose Prediction with Dynamic Sample Selection from Sparse Tracking Signals
Nicholas Milef, Shinjiro Sueda, and Nima Khademi Kalantari
Scene-Aware 3D Multi-Human Motion Capture from a Single Camera
Diogo C. Luvizon, Marc Habermann, Vladislav Golyanik, Adam Kortylewski, and Christian Theobalt
Generating Texture for 3D Human Avatar from a Single Image using Sampling and Refinement Networks
Sihun Cha, Kwanggyoon Seo, Amirsaman Ashtari, and Junyong Noh
Garment Design
Directionality-Aware Design of Embroidery Patterns
Liu Zhenyuan, Michal Piovarci, Christian Hafner, Raphaël Charrondière, and Bernd Bickel
2D Animation and Interaction
Non-linear Rough 2D Animation using Transient Embeddings
Melvin Even, Pierre Bénard, and Pascal Barla
Interactive Design of 2D Car Profiles with Aerodynamic Feedback
Nicolas Rosset, Guillaume Cordonnier, Régis Duvigneau, and Adrien Bousseau

BibTeX (42-Issue 2)
                
@article{10.1111:cgf.14773,
  journal = {Computer Graphics Forum},
  title = {{EUROGRAPHICS 2023: CGF 42-2 Frontmatter}},
  author = {Myszkowski, Karol and Niessner, Matthias},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14773}
}

@article{10.1111:cgf.14739,
  journal = {Computer Graphics Forum},
  title = {{IMoS: Intent-Driven Full-Body Motion Synthesis for Human-Object Interactions}},
  author = {Ghosh, Anindita and Dabral, Rishabh and Golyanik, Vladislav and Theobalt, Christian and Slusallek, Philipp},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14739}
}

@article{10.1111:cgf.14740,
  journal = {Computer Graphics Forum},
  title = {{Online Avatar Motion Adaptation to Morphologically-similar Spaces}},
  author = {Choi, Soojin and Hong, Seokpyo and Cho, Kyungmin and Kim, Chaelin and Noh, Junyong},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14740}
}

@article{10.1111:cgf.14741,
  journal = {Computer Graphics Forum},
  title = {{Learning to Transfer In-Hand Manipulations Using a Greedy Shape Curriculum}},
  author = {Zhang, Yunbo and Clegg, Alexander and Ha, Sehoon and Turk, Greg and Ye, Yuting},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14741}
}

@article{10.1111:cgf.14742,
  journal = {Computer Graphics Forum},
  title = {{Img2Logo: Generating Golden Ratio Logos from Images}},
  author = {Hsiao, Kai-Wen and Yang, Yong-Liang and Chiu, Yung-Chih and Hu, Min-Chun and Yao, Chih-Yuan and Chu, Hung-Kuo},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14742}
}

@article{10.1111:cgf.14743,
  journal = {Computer Graphics Forum},
  title = {{Interactive Depixelization of Pixel Art through Spring Simulation}},
  author = {Matusovic, Marko and Parakkat, Amal Dev and Eisemann, Elmar},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14743}
}

@article{10.1111:cgf.14745,
  journal = {Computer Graphics Forum},
  title = {{Unsupervised Template Warp Consistency for Implicit Surface Correspondences}},
  author = {Liu, Mengya and Chhatkuli, Ajad and Postels, Janis and Gool, Luc Van and Tombari, Federico},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14745}
}

@article{10.1111:cgf.14744,
  journal = {Computer Graphics Forum},
  title = {{Subpixel Deblurring of Anti-Aliased Raster Clip-Art}},
  author = {Yang, Jinfan and Vining, Nicholas and Kheradmand, Shakiba and Carr, Nathan and Sigal, Leonid and Sheffer, Alla},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14744}
}

@article{10.1111:cgf.14746,
  journal = {Computer Graphics Forum},
  title = {{Scalable and Efficient Functional Map Computations on Dense Meshes}},
  author = {Magnet, Robin and Ovsjanikov, Maks},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14746}
}

@article{10.1111:cgf.14747,
  journal = {Computer Graphics Forum},
  title = {{Surface Maps via Adaptive Triangulations}},
  author = {Schmidt, Patrick and Pieper, Dörte and Kobbelt, Leif},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14747}
}

@article{10.1111:cgf.14748,
  journal = {Computer Graphics Forum},
  title = {{Video Frame Interpolation for High Dynamic Range Sequences Captured with Dual-exposure Sensors}},
  author = {Cogalan, Ugur and Bemana, Mojtaba and Seidel, Hans-Peter and Myszkowski, Karol},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14748}
}

@article{10.1111:cgf.14749,
  journal = {Computer Graphics Forum},
  title = {{Simulating Analogue Film Damage to Analyse and Improve Artefact Restoration on High-resolution Scans}},
  author = {Ivanova, Daniela and Williamson, John and Henderson, Paul},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14749}
}

@article{10.1111:cgf.14750,
  journal = {Computer Graphics Forum},
  title = {{How Will It Drape Like? Capturing Fabric Mechanics from Depth Images}},
  author = {Rodriguez-Pardo, Carlos and Prieto-Martín, Melania and Casas, Dan and Garces, Elena},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14750}
}

@article{10.1111:cgf.14751,
  journal = {Computer Graphics Forum},
  title = {{Physics-Informed Neural Corrector for Deformation-based Fluid Control}},
  author = {Tang, Jingwei and Kim, Byungsoo and Azevedo, Vinicius C. and Solenthaler, Barbara},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14751}
}

@article{10.1111:cgf.14752,
  journal = {Computer Graphics Forum},
  title = {{Robust Pointset Denoising of Piecewise-Smooth Surfaces through Line Processes}},
  author = {Wei, Jiayi and Chen, Jiong and Rohmer, Damien and Memari, Pooran and Desbrun, Mathieu},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14752}
}

@article{10.1111:cgf.14753,
  journal = {Computer Graphics Forum},
  title = {{One Step Further Beyond Trilinear Interpolation and Central Differences: Triquadratic Reconstruction and its Analytic Derivatives at the Cost of One Additional Texture Fetch}},
  author = {Csébfalvi, Balázs},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14753}
}

@article{10.1111:cgf.14754,
  journal = {Computer Graphics Forum},
  title = {{Learning to Learn and Sample BRDFs}},
  author = {Liu, Chen and Fischer, Michael and Ritschel, Tobias},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14754}
}

@article{10.1111:cgf.14755,
  journal = {Computer Graphics Forum},
  title = {{CubeGAN: Omnidirectional Image Synthesis Using Generative Adversarial Networks}},
  author = {May, Christopher and Aliaga, Daniel},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14755}
}

@article{10.1111:cgf.14756,
  journal = {Computer Graphics Forum},
  title = {{An Optimization-based SPH Solver for Simulation of Hyperelastic Solids}},
  author = {Kee, Min Hyung and Um, Kiwon and Kang, HyunMo and Han, JungHyun},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14756}
}

@article{10.1111:cgf.14757,
  journal = {Computer Graphics Forum},
  title = {{Editing Compressed High-resolution Voxel Scenes with Attributes}},
  author = {Molenaar, Mathijs and Eisemann, Elmar},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14757}
}

@article{10.1111:cgf.14759,
  journal = {Computer Graphics Forum},
  title = {{Stochastic Subsets for BVH Construction}},
  author = {Tessari, Lorenzo and Dittebrandt, Addis and Doyle, Michael J. and Benthin, Carsten},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14759}
}

@article{10.1111:cgf.14758,
  journal = {Computer Graphics Forum},
  title = {{Parallel Transformation of Bounding Volume Hierarchies into Oriented Bounding Box Trees}},
  author = {Vitsas, Nick and Evangelou, Iordanis and Papaioannou, Georgios and Gkaravelis, Anastasios},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14758}
}

@article{10.1111:cgf.14762,
  journal = {Computer Graphics Forum},
  title = {{Makeup Extraction of 3D Representation via Illumination-Aware Image Decomposition}},
  author = {Yang, Xingchao and Taketomi, Takafumi and Kanamori, Yoshihiro},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14762}
}

@article{10.1111:cgf.14760,
  journal = {Computer Graphics Forum},
  title = {{Face Editing Using Part-Based Optimization of the Latent Space}},
  author = {Aliari, Mohammad Amin and Beauchamp, Andre and Popa, Tiberiu and Paquette, Eric},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14760}
}

@article{10.1111:cgf.14761,
  journal = {Computer Graphics Forum},
  title = {{What's in a Decade? Transforming Faces Through Time}},
  author = {Chen, Eric Ming and Sun, Jin and Khandelwal, Apoorv and Lischinski, Dani and Snavely, Noah and Averbuch-Elor, Hadar},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14761}
}

@article{10.1111:cgf.14763,
  journal = {Computer Graphics Forum},
  title = {{A Variational Loop Shrinking Analogy for Handle and Tunnel Detection and Reeb Graph Construction on Surfaces}},
  author = {Weinrauch, Alexander and Mlakar, Daniel and Seidel, Hans-Peter and Steinberger, Markus and Zayer, Rhaleb},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14763}
}

@article{10.1111:cgf.14765,
  journal = {Computer Graphics Forum},
  title = {{In-the-wild Material Appearance Editing using Perceptual Attributes}},
  author = {Subías, José Daniel and Lagunas, Manuel},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14765}
}

@article{10.1111:cgf.14764,
  journal = {Computer Graphics Forum},
  title = {{Evolving Guide Subdivision}},
  author = {Karciauskas, Kestutis and Peters, Jorg},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14764}
}

@article{10.1111:cgf.14766,
  journal = {Computer Graphics Forum},
  title = {{Preserving the Autocovariance of Texture Tilings Using Importance Sampling}},
  author = {Lutz, Nicolas and Sauvage, Basile and Dischler, Jean-Michel},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14766}
}

@article{10.1111:cgf.14767,
  journal = {Computer Graphics Forum},
  title = {{Variational Pose Prediction with Dynamic Sample Selection from Sparse Tracking Signals}},
  author = {Milef, Nicholas and Sueda, Shinjiro and Kalantari, Nima Khademi},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14767}
}

@article{10.1111:cgf.14768,
  journal = {Computer Graphics Forum},
  title = {{Scene-Aware 3D Multi-Human Motion Capture from a Single Camera}},
  author = {Luvizon, Diogo C. and Habermann, Marc and Golyanik, Vladislav and Kortylewski, Adam and Theobalt, Christian},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14768}
}

@article{10.1111:cgf.14769,
  journal = {Computer Graphics Forum},
  title = {{Generating Texture for 3D Human Avatar from a Single Image using Sampling and Refinement Networks}},
  author = {Cha, Sihun and Seo, Kwanggyoon and Ashtari, Amirsaman and Noh, Junyong},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14769}
}

@article{10.1111:cgf.14770,
  journal = {Computer Graphics Forum},
  title = {{Directionality-Aware Design of Embroidery Patterns}},
  author = {Zhenyuan, Liu and Piovarci, Michal and Hafner, Christian and Charrondière, Raphaël and Bickel, Bernd},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14770}
}

@article{10.1111:cgf.14771,
  journal = {Computer Graphics Forum},
  title = {{Non-linear Rough 2D Animation using Transient Embeddings}},
  author = {Even, Melvin and Bénard, Pierre and Barla, Pascal},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14771}
}

@article{10.1111:cgf.14772,
  journal = {Computer Graphics Forum},
  title = {{Interactive Design of 2D Car Profiles with Aerodynamic Feedback}},
  author = {Rosset, Nicolas and Cordonnier, Guillaume and Duvigneau, Régis and Bousseau, Adrien},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14772}
}


Recent Submissions

  • Item
    EUROGRAPHICS 2023: CGF 42-2 Frontmatter
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Myszkowski, Karol; Niessner, Matthias
  • Item
    IMoS: Intent-Driven Full-Body Motion Synthesis for Human-Object Interactions
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Ghosh, Anindita; Dabral, Rishabh; Golyanik, Vladislav; Theobalt, Christian; Slusallek, Philipp
    Can we make virtual characters in a scene interact with their surrounding objects through simple instructions? Is it possible to synthesize such motion plausibly with a diverse set of objects and instructions? Inspired by these questions, we present the first framework to synthesize the full-body motion of virtual human characters performing specified actions with 3D objects placed within their reach. Our system takes textual instructions specifying the objects and the associated 'intentions' of the virtual characters as input and outputs diverse sequences of full-body motions. This contrasts with existing works, where full-body action synthesis methods generally do not consider object interactions, and human-object interaction methods focus mainly on synthesizing hand or finger movements for grasping objects. We accomplish our objective by designing an intent-driven full-body motion generator, which uses a pair of decoupled conditional variational auto-regressors to learn the motion of the body parts in an autoregressive manner. We also optimize the 6-DoF pose of the objects such that they plausibly fit within the hands of the synthesized characters. We compare our proposed method with the existing methods of motion synthesis and establish a new and stronger state-of-the-art for the task of intent-driven motion synthesis.
  • Item
    Online Avatar Motion Adaptation to Morphologically-similar Spaces
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Choi, Soojin; Hong, Seokpyo; Cho, Kyungmin; Kim, Chaelin; Noh, Junyong
    In avatar-mediated telepresence systems, a similar environment is assumed for involved spaces, so that the avatar in a remote space can imitate the user's motion with proper semantic intention performed in a local space. For example, the user touching the desk should be reproduced by the avatar in the remote space to correctly convey the intended meaning. It is unlikely, however, that the two involved physical spaces are exactly the same in terms of the size of the room or the locations of the placed objects. Therefore, a naive mapping of the user's joint motion to the avatar will not create the semantically correct motion of the avatar in relation to the remote environment. Existing studies have addressed the problem of retargeting human motions to an avatar for telepresence applications. Few studies, however, have focused on retargeting continuous full-body motions such as locomotion and object interaction motions in a unified manner. In this paper, we propose a novel motion adaptation method that allows us to generate the full-body motions of a human-like avatar on the fly in the remote space. The proposed method handles locomotion and object interaction motions as well as smooth transitions between them according to given user actions under the condition of a bijective environment mapping between morphologically-similar spaces. Our experiments show the effectiveness of the proposed method in generating plausible and semantically correct full-body motions of an avatar in room-scale space.
  • Item
    Learning to Transfer In-Hand Manipulations Using a Greedy Shape Curriculum
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Zhang, Yunbo; Clegg, Alexander; Ha, Sehoon; Turk, Greg; Ye, Yuting
    In-hand object manipulation is challenging to simulate due to complex contact dynamics, non-repetitive finger gaits, and the need to indirectly control unactuated objects. Further adapting a successful manipulation skill to new objects with different shapes and physical properties is a similarly challenging problem. In this work, we show that natural and robust in-hand manipulation of simple objects in a dynamic simulation can be learned from a high quality motion capture example via deep reinforcement learning with careful designs of the imitation learning problem. We apply our approach on both single-handed and two-handed dexterous manipulations of diverse object shapes and motions. We then demonstrate further adaptation of the example motion to a more complex shape through curriculum learning on intermediate shapes morphed between the source and target object. While a naive curriculum of progressive morphs often falls short, we propose a simple greedy curriculum search algorithm that can successfully apply to a range of objects such as a teapot, bunny, bottle, train, and elephant.
  • Item
    Img2Logo: Generating Golden Ratio Logos from Images
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Hsiao, Kai-Wen; Yang, Yong-Liang; Chiu, Yung-Chih; Hu, Min-Chun; Yao, Chih-Yuan; Chu, Hung-Kuo
    Logos are one of the most important graphic design forms that use an abstracted shape to clearly represent the spirit of a community. Among various styles of abstraction, a particular golden-ratio design is frequently employed by designers to create a concise and regular logo. In this context, designers utilize a set of circular arcs with golden ratios (i.e., all arcs are taken from circles whose radii form a geometric series based on the golden ratio) as the design elements to manually approximate a target shape. This error-prone process requires a large amount of time and effort, posing a significant challenge for design space exploration. In this work, we present a novel computational framework that can automatically generate golden ratio logo abstractions from an input image. Our framework is based on a set of carefully identified design principles and a constrained optimization formulation respecting these principles. We also propose a progressive approach that can efficiently solve the optimization problem, resulting in a sequence of abstractions that approximate the input at decreasing levels of detail. We evaluate our work by testing on images with different formats including real photos, clip arts, and line drawings. We also extensively validate the key components and compare our results with manual results by designers to demonstrate the effectiveness of our framework. Moreover, our framework can largely benefit design space exploration via easy specification of design parameters such as abstraction levels, golden circle sizes, etc.
  • Item
    Interactive Depixelization of Pixel Art through Spring Simulation
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Matusovic, Marko; Parakkat, Amal Dev; Eisemann, Elmar
    We introduce an approach for converting pixel art into high-quality vector images. While much progress has been made on automatic conversion, there is an inherent ambiguity in pixel art, which can lead to a mismatch with the artist's original intent. Further, there is room for incorporating aesthetic preferences during the conversion. In consequence, this work introduces an interactive framework to enable users to guide the conversion process towards high-quality vector illustrations. A key idea of the method is to cast the conversion process into a spring-system optimization that can be influenced by the user. Hereby, it is possible to resolve various ambiguities that cannot be handled by an automatic algorithm.
  • Item
    Unsupervised Template Warp Consistency for Implicit Surface Correspondences
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Liu, Mengya; Chhatkuli, Ajad; Postels, Janis; Gool, Luc Van; Tombari, Federico
    Unsupervised template discovery via implicit representation in a category of shapes has recently shown strong performance. At the core, such methods deform input shapes to a common template space which allows establishing correspondences as well as implicit representation of the shapes. In this work we investigate the inherent assumption that the implicit neural field optimization naturally leads to consistently warped shapes, thus providing both good shape reconstruction and correspondences. Contrary to this convenient assumption, in practice we observe that this is not the case, consequently resulting in sub-optimal point correspondences. In order to solve the problem, we revisit the warp design and, more importantly, introduce explicit constraints using unsupervised sparse point predictions, directly encouraging consistency of the warped shapes. We use these unsupervised sparse keypoints to further condition the deformation warp and enforce its consistency. Experiments in dynamic non-rigid DFaust and ShapeNet categories show that our problem identification and solution provide the new state-of-the-art in unsupervised dense correspondences.
  • Item
    Subpixel Deblurring of Anti-Aliased Raster Clip-Art
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Yang, Jinfan; Vining, Nicholas; Kheradmand, Shakiba; Carr, Nathan; Sigal, Leonid; Sheffer, Alla
    Artist generated clip-art images typically consist of a small number of distinct, uniformly colored regions with clear boundaries. Legacy artist created images are often stored in low-resolution (100x100px or less) anti-aliased raster form. Compared to anti-aliasing free rasterization, anti-aliasing blurs inter-region boundaries and obscures the artist's intended region topology and color palette; at the same time, it better preserves subpixel details. Recovering the underlying artist-intended images from their low-resolution anti-aliased rasterizations can facilitate resolution independent rendering, lossless vectorization, and other image processing applications. Unfortunately, while human observers can mentally deblur these low-resolution images and reconstruct region topology, color and subpixel details, existing algorithms applicable to this task fail to produce outputs consistent with human expectations when presented with such images. We recover these viewer perceived blur-free images at subpixel resolution, producing outputs where each input pixel is replaced by four corresponding (sub)pixels. Performing this task requires computing the size of the output image color palette, generating the palette itself, and associating each pixel in the output with one of the colors in the palette. We obtain these desired output components by leveraging a combination of perceptual and domain priors, and real world data. We use readily available data to train a network that predicts, for each anti-aliased image, a low-blur approximation of the blur-free double-resolution outputs we seek. The images obtained at this stage are perceptually closer to the desired outputs but typically still have hundreds of redundant differently colored regions with fuzzy boundaries. We convert these low-blur intermediate images into blur-free outputs consistent with viewer expectations using a discrete partitioning procedure guided by the characteristic properties of clip-art images, observations about the anti-aliasing process, and human perception of anti-aliased clip-art. This step dramatically reduces the size of the output color palettes and the region counts, bringing them in line with viewer expectations and enabling the image processing applications we target. We demonstrate the utility of our method by using our outputs for a number of image processing tasks, and validate it via extensive comparisons to prior art. In our comparative study, participants preferred our deblurred outputs over those produced by the best-performing alternative by a ratio of 75 to 8.5.
  • Item
    Scalable and Efficient Functional Map Computations on Dense Meshes
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Magnet, Robin; Ovsjanikov, Maks
    We propose a new scalable version of the functional map pipeline that allows us to efficiently compute correspondences between potentially very dense meshes. Unlike existing approaches that process dense meshes by relying on ad-hoc mesh simplification, we establish an integrated end-to-end pipeline with theoretical approximation analysis. In particular, our method overcomes the computational burden of both computing the basis, as well as the functional and pointwise correspondence computation, by approximating the functional spaces and the functional map itself. Errors in the approximations are controlled by theoretical upper bounds assessing the range of applicability of our pipeline. With this construction in hand, we propose a scalable practical algorithm and demonstrate results on dense meshes, which approximate those obtained by standard functional map algorithms at a fraction of the computation time. Moreover, our approach outperforms the standard acceleration procedures by a large margin, leading to accurate results even in challenging cases.
  • Item
    Surface Maps via Adaptive Triangulations
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Schmidt, Patrick; Pieper, Dörte; Kobbelt, Leif; Myszkowski, Karol; Niessner, Matthias
We present a new method to compute continuous and bijective maps (surface homeomorphisms) between two or more genus-0 triangle meshes. In contrast to previous approaches, we decouple the resolution at which a map is represented from the resolution of the input meshes. We discretize maps via common triangulations that approximate the input meshes while remaining in bijective correspondence to them. Both the geometry and the connectivity of these triangulations are optimized with respect to a single objective function that simultaneously controls mapping distortion, triangulation quality, and approximation error. A discrete-continuous optimization algorithm performs both energy-based remeshing as well as global second-order optimization of vertex positions, parametrized via the sphere. With this, we combine the disciplines of compatible remeshing and surface map optimization in a unified formulation and make a contribution in both fields. While existing compatible remeshing algorithms often operate on a fixed pre-computed surface map, we can now globally update this correspondence during remeshing. On the other hand, bijective surface-to-surface map optimization previously required computing costly overlay meshes that are inherently tied to the input mesh resolution. We achieve significant complexity reduction by instead assessing distortion between the approximating triangulations. This new map representation is inherently more robust than previous overlay-based approaches, is less intricate to implement, and naturally supports mapping between more than two surfaces. Moreover, it enables adaptive multi-resolution schemes that, e.g., first align corresponding surface regions at coarse resolutions before refining the map where needed. We demonstrate significant speedups and increased flexibility over state-of-the-art mapping algorithms at similar map quality, and also provide a reference implementation of the method.
  • Item
    Video Frame Interpolation for High Dynamic Range Sequences Captured with Dual-exposure Sensors
(The Eurographics Association and John Wiley & Sons Ltd., 2023) Cogalan, Ugur; Bemana, Mojtaba; Seidel, Hans-Peter; Myszkowski, Karol; Niessner, Matthias
    Video frame interpolation (VFI) enables many important applications such as slow motion playback and frame rate conversion. However, one major challenge in using VFI is accurately handling high dynamic range (HDR) scenes with complex motion. To this end, we explore the possible advantages of dual-exposure sensors that readily provide sharp short and blurry long exposures that are spatially registered and whose ends are temporally aligned. This way, motion blur registers temporally continuous information on the scene motion that, combined with the sharp reference, enables more precise motion sampling within a single camera shot. We demonstrate that this facilitates a more complex motion reconstruction in the VFI task, as well as HDR frame reconstruction that so far has been considered only for the originally captured frames, not in-between interpolated frames. We design a neural network trained in these tasks that clearly outperforms existing solutions. We also propose a metric for scene motion complexity that provides important insights into the performance of VFI methods at test time.
  • Item
    Simulating Analogue Film Damage to Analyse and Improve Artefact Restoration on High-resolution Scans
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Ivanova, Daniela; Williamson, John; Henderson, Paul; Myszkowski, Karol; Niessner, Matthias
Digital scans of analogue photographic film typically contain artefacts such as dust and scratches. Automated removal of these is an important part of preservation and dissemination of photographs of historical and cultural importance. While state-of-the-art deep learning models have shown impressive results in general image inpainting and denoising, film artefact removal is an understudied problem. It has particularly challenging requirements, due to the complex nature of analogue damage, the high resolution of film scans, and potential ambiguities in the restoration. There are no publicly available high-quality datasets of real-world analogue film damage for training and evaluation, making quantitative studies impossible. We address the lack of ground-truth data for evaluation by collecting a dataset of 4K damaged analogue film scans paired with manually-restored versions produced by a human expert, allowing quantitative evaluation of restoration performance. We have made the dataset available at https://doi.org/10.6084/m9.figshare.21803304. We construct a larger synthetic dataset of damaged images with paired clean versions using a statistical model of artefact shape and occurrence learnt from real, heavily-damaged images. We carefully validate the realism of the simulated damage via a human perceptual study, showing that even expert users find our synthetic damage indistinguishable from real damage. In addition, we demonstrate that training with our synthetically damaged dataset leads to improved artefact segmentation performance when compared to previously proposed synthetic analogue damage overlays. The synthetically damaged dataset can be found at https://doi.org/10.6084/m9.figshare.21815844, and the annotated authentic artefacts along with the resulting statistical damage model at https://github.com/daniela997/FilmDamageSimulator.
Finally, we use these datasets to train and analyse the performance of eight state-of-the-art image restoration methods on high-resolution scans. We compare both methods which directly perform the restoration task on scans with artefacts, and methods which require a damage mask to be provided for the inpainting of artefacts. We modify the methods to process the inputs in a patch-wise fashion so that they operate on the original high-resolution film scans.
  • Item
    How Will It Drape Like? Capturing Fabric Mechanics from Depth Images
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Rodriguez-Pardo, Carlos; Prieto-Martín, Melania; Casas, Dan; Garces, Elena; Myszkowski, Karol; Niessner, Matthias
We propose a method to estimate the mechanical parameters of fabrics using a casual capture setup with a depth camera. Our approach enables the creation of mechanically-correct digital representations of real-world textile materials, which is a fundamental step for many interactive design and engineering applications. As opposed to existing capture methods, which typically require expensive setups, video sequences, or manual intervention, our solution can capture at scale, is agnostic to the optical appearance of the textile, and facilitates fabric arrangement by non-expert operators. To this end, we propose a sim-to-real strategy to train a learning-based framework that can take as input one or multiple images and output a full set of mechanical parameters. Thanks to carefully designed data augmentation and transfer learning protocols, our solution generalizes to real images despite being trained only on synthetic data, hence successfully closing the sim-to-real loop. Key in our work is to demonstrate that evaluating the regression accuracy based on similarity in parameter space leads to inaccurate distances that do not match human perception. To overcome this, we propose a novel metric for fabric drape similarity that operates in the image domain instead of the parameter space, allowing us to evaluate our estimation within the context of a similarity rank. We show that our metric correlates with human judgments about the perception of drape similarity, and that our model predictions produce perceptually accurate results compared to the ground truth parameters.
  • Item
    Physics-Informed Neural Corrector for Deformation-based Fluid Control
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Tang, Jingwei; Kim, Byungsoo; Azevedo, Vinicius C.; Solenthaler, Barbara; Myszkowski, Karol; Niessner, Matthias
Controlling fluid simulations is notoriously difficult due to their high computational cost and the fact that user control inputs can cause unphysical motion. We present an interactive method for deformation-based fluid control. Our method aims at balancing the direct deformations of fluid fields and the preservation of physical characteristics. We train convolutional neural networks with physics-inspired loss functions together with a differentiable fluid simulator, and provide an efficient workflow for flow manipulations at test time. We demonstrate diverse test cases to analyze our carefully designed objectives and show that they lead to physically plausible and, ultimately, visually appealing modifications of edited fluid data.
  • Item
    Robust Pointset Denoising of Piecewise-Smooth Surfaces through Line Processes
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Wei, Jiayi; Chen, Jiong; Rohmer, Damien; Memari, Pooran; Desbrun, Mathieu; Myszkowski, Karol; Niessner, Matthias
Denoising is a common, yet critical operation in geometry processing aiming at recovering high-fidelity models of piecewise-smooth objects from noise-corrupted pointsets. Despite a sizable literature on the topic, there is a dearth of approaches capable of processing very noisy and outlier-ridden input pointsets for which no normal estimates and no assumptions on the underlying geometric features or noise type are provided. In this paper, we propose a new robust-statistics approach to denoising pointsets based on line processes to offer robustness to noise and outliers while preserving sharp features possibly present in the data. While the use of robust statistics in denoising is hardly new, most approaches rely on prescribed filtering using data-independent blending expressions based on the spatial and normal closeness of samples. Instead, our approach deduces a geometric denoising strategy through robust and regularized tangent plane fitting of the initial pointset, obtained numerically via alternating minimizations for efficiency and reliability. Key to our variational approach is the use of line processes to identify inliers vs. outliers, as well as the presence of sharp features. We demonstrate that our method can denoise sampled piecewise-smooth surfaces for levels of noise and outliers at which previous works fall short.
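In its simplest scalar form, alternating minimization with a line process reduces to iteratively reweighted least squares: the line-process variable has a closed-form update that downweights outliers, alternated with a weighted fit. A toy 1D sketch (Geman-McClure-style weights fitting a line, standing in for the paper's regularized tangent plane fitting; `sigma` and the weight formula are illustrative assumptions):

```python
import numpy as np

def robust_line_fit(x, y, sigma=0.5, iters=25):
    """Alternate between (1) weighted least squares for the line and
    (2) the closed-form line-process weight w_i = (s^2/(s^2+r_i^2))^2,
    which smoothly switches outliers off."""
    w = np.ones_like(y)
    A = np.stack([x, np.ones_like(x)], axis=1)
    for _ in range(iters):
        AtW = A.T * w                      # A^T W with diagonal W
        a, b = np.linalg.solve(AtW @ A, AtW @ y)
        r = y - (a * x + b)
        w = (sigma**2 / (sigma**2 + r**2)) ** 2
    return a, b, w

# exact line y = 2x + 1 corrupted by two gross outliers
x = np.linspace(0.0, 10.0, 20)
y = 2.0 * x + 1.0
y[3] += 10.0
y[15] -= 8.0
a, b, w = robust_line_fit(x, y)
```

The recovered slope and intercept match the inliers, while the weights on the two corrupted samples collapse to nearly zero, which is the inlier/outlier identification role the abstract attributes to the line process.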
  • Item
    One Step Further Beyond Trilinear Interpolation and Central Differences: Triquadratic Reconstruction and its Analytic Derivatives at the Cost of One Additional Texture Fetch
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Csébfalvi, Balázs; Myszkowski, Karol; Niessner, Matthias
Recently, it has been shown that the quality of GPU-based trilinear volume resampling can be significantly improved if the six additional trilinear samples evaluated for the gradient estimation also contribute to the reconstruction of the underlying function [Csé19]. Although this improvement increases the approximation order from two to three without any extra cost, the continuity order remains C0. In this paper, we go one step further showing that a C1 continuous triquadratic B-spline reconstruction and its analytic partial derivatives can be evaluated by taking only one more trilinear sample into account. Thus, our method is the first volume-resampling technique that is nearly as fast as trilinear interpolation combined with on-the-fly central differencing, but provides a higher-quality reconstruction together with a consistent analytic gradient calculation. Furthermore, we show that our fast evaluation scheme can also be adapted to the Mitchell-Netravali [MN88] notch filter, for which a fast GPU implementation has not been known so far.
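The 1D analogue of the reconstruction in question is the quadratic B-spline, which is C1 and comes with analytic derivatives of its weights. A minimal CPU sketch of that 1D case (background only, not the authors' GPU scheme built from trilinear fetches):

```python
import numpy as np

def quadratic_bspline(c, x):
    """Evaluate a C^1 quadratic B-spline with coefficients c (knots at
    integers) at continuous coordinate x, together with its analytic
    first derivative."""
    i = int(np.floor(x + 0.5))      # nearest knot
    t = x - i + 0.5                 # local coordinate in [0, 1)
    # quadratic B-spline basis weights (sum to 1)
    w0 = 0.5 * (1.0 - t) ** 2
    w1 = 0.5 + t * (1.0 - t)
    w2 = 0.5 * t ** 2
    val = w0 * c[i - 1] + w1 * c[i] + w2 * c[i + 1]
    # analytic derivatives of the weights w.r.t. x
    dval = (t - 1.0) * c[i - 1] + (1.0 - 2.0 * t) * c[i] + t * c[i + 1]
    return val, dval

# linear data is reproduced exactly, with derivative exactly 1
val, dval = quadratic_bspline(np.arange(10.0), 4.3)
```

Because the weights sum to one and reproduce linear data, the derivative is consistent with the reconstructed value, which is the "consistent analytic gradient" property the abstract emphasizes; the paper's contribution is evaluating the triquadratic 3D version of this from only eight trilinear texture fetches.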
  • Item
    Learning to Learn and Sample BRDFs
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Liu, Chen; Fischer, Michael; Ritschel, Tobias; Myszkowski, Karol; Niessner, Matthias
    We propose a method to accelerate the joint process of physically acquiring and learning neural Bi-directional Reflectance Distribution Function (BRDF) models. While BRDF learning alone can be accelerated by meta-learning, acquisition remains slow as it relies on a mechanical process. We show that meta-learning can be extended to optimize the physical sampling pattern, too. After our method has been meta-trained for a set of fully-sampled BRDFs, it is able to quickly train on new BRDFs with up to five orders of magnitude fewer physical acquisition samples at similar quality. Our approach also extends to other linear and non-linear BRDF models, which we show in an extensive evaluation.
  • Item
    CubeGAN: Omnidirectional Image Synthesis Using Generative Adversarial Networks
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) May, Christopher; Aliaga, Daniel; Myszkowski, Karol; Niessner, Matthias
    We propose a framework to create projectively-correct and seam-free cube-map images using generative adversarial learning. Deep generation of cube-maps that contain the correct projection of the environment onto its faces is not straightforward as has been recognized in prior work. Our approach extends an existing framework, StyleGAN3, to produce cube-maps instead of planar images. In addition to reshaping the output, we include a cube-specific volumetric initialization component, a projective resampling component, and a modification of augmentation operations to the spherical domain. Our results demonstrate the network's generation capabilities trained on imagery from various 3D environments. Additionally, we show the power and quality of our GAN design in an inversion task, combined with navigation capabilities, to perform novel view synthesis.
  • Item
    An Optimization-based SPH Solver for Simulation of Hyperelastic Solids
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Kee, Min Hyung; Um, Kiwon; Kang, HyunMo; Han, JungHyun; Myszkowski, Karol; Niessner, Matthias
This paper proposes a novel method for simulating hyperelastic solids with Smoothed Particle Hydrodynamics (SPH). The proposed method extends the coverage of the state-of-the-art elastic SPH solid method to include different types of hyperelastic materials, such as the Neo-Hookean and the St. Venant-Kirchhoff models. To this end, we reformulate an implicit integration scheme for SPH elastic solids into an optimization problem and solve the problem using a general-purpose quasi-Newton method. Our experiments show that the Limited-memory BFGS (L-BFGS) algorithm can be employed to efficiently solve our optimization problem in the SPH framework, and demonstrate stable and efficient simulations for complex materials. Thanks to the nature of our unified representation for both solids and fluids, the SPH formulation simplifies coupling between different materials and handling collisions.
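Recasting implicit integration as a minimization solved with L-BFGS is the classical "optimization-based implicit Euler" pattern; a toy sketch on a single 1D spring using scipy's L-BFGS-B (illustrative only, not the paper's SPH solid solver or its energies):

```python
import numpy as np
from scipy.optimize import minimize

def implicit_euler_step(x, v, h, m, k, rest):
    """One implicit Euler step for a spring, recast as
    x_{n+1} = argmin_y  m ||y - (x + h v)||^2 / (2 h^2) + E(y),
    solved with a general-purpose quasi-Newton method (L-BFGS)."""
    x_hat = x + h * v                          # inertial prediction

    def objective(y):
        inertia = m * np.sum((y - x_hat) ** 2) / (2.0 * h * h)
        elastic = 0.5 * k * (y[0] - rest) ** 2
        return inertia + elastic

    res = minimize(objective, np.atleast_1d(x_hat), method="L-BFGS-B")
    x_new = res.x
    return x_new, (x_new - x) / h

# the numerical damping of implicit Euler drives the spring toward rest
x, v = np.array([1.0]), np.array([0.0])
for _ in range(200):
    x, v = implicit_euler_step(x, v, h=0.1, m=1.0, k=10.0, rest=0.0)
```

The same structure scales to many degrees of freedom, where quasi-Newton methods avoid assembling and factorizing the Hessian of the elastic energy.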
  • Item
    Editing Compressed High-resolution Voxel Scenes with Attributes
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Molenaar, Mathijs; Eisemann, Elmar; Myszkowski, Karol; Niessner, Matthias
Sparse Voxel Directed Acyclic Graphs (SVDAGs) are an efficient solution for storing high-resolution voxel geometry. Recently, algorithms for the interactive modification of SVDAGs have been proposed that maintain the compressed geometric representation. Nevertheless, voxel attributes, such as colours, require uncompressed storage, which can result in high memory usage over the course of the application. The reason is the high cost of existing attribute-compression schemes, which remain unfit for interactive applications. In this paper, we introduce two attribute compression methods (lossless and lossy), which enable the interactive editing of compressed high-resolution voxel scenes including attributes.
  • Item
    Stochastic Subsets for BVH Construction
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Tessari, Lorenzo; Dittebrandt, Addis; Doyle, Michael J.; Benthin, Carsten; Myszkowski, Karol; Niessner, Matthias
BVH construction is a critical component of real-time and interactive ray-tracing systems. However, BVH construction can be both compute and bandwidth intensive, especially when a large degree of dynamic geometry is present. Different build algorithms vary substantially in the traversal performance that they produce, making high quality construction algorithms desirable. However, high quality algorithms, such as top-down construction, are typically more expensive, limiting their benefit in real-time and interactive contexts. One particular challenge of high quality top-down construction algorithms is that the large working set at the top of the tree can make constructing these levels bandwidth-intensive, due to O(nlog(n)) complexity, limited cache locality, and less dense compute at these levels. To address this limitation, we propose a novel stochastic approach to GPU BVH construction that selects a representative subset to build the upper levels of the tree. As a second pass, the remaining primitives are clustered around the BVH leaves and further processed into a complete BVH. We show that our novel approach significantly reduces the construction time of top-down GPU BVH builders by a factor of up to 1.8x, while achieving competitive rendering performance in most cases and exceeding it in others.
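The two-pass idea, build the upper tree from a random subset and then cluster the remaining primitives around its leaves, can be sketched on primitive centroids with a simple median-split tree (a toy stand-in: the paper's GPU builder uses high-quality top-down construction, not this heuristic):

```python
import numpy as np

def build_median_tree(points, idx, leaf_size=4):
    """Top-down median split over the chosen subset of centroids."""
    if len(idx) <= leaf_size:
        return {"leaf": True, "idx": [int(i) for i in idx]}
    pts = points[idx]
    axis = int(np.argmax(pts.max(axis=0) - pts.min(axis=0)))  # widest axis
    order = idx[np.argsort(pts[:, axis])]
    mid = len(order) // 2
    return {"leaf": False,
            "left": build_median_tree(points, order[:mid], leaf_size),
            "right": build_median_tree(points, order[mid:], leaf_size)}

def collect_leaves(node, out):
    if node["leaf"]:
        out.append(node)
    else:
        collect_leaves(node["left"], out)
        collect_leaves(node["right"], out)

def stochastic_build(points, subset_fraction=0.25, seed=0):
    """Pass 1: build the tree on a random representative subset.
    Pass 2: attach every remaining primitive to the nearest leaf."""
    rng = np.random.default_rng(seed)
    n = len(points)
    subset = rng.choice(n, size=max(1, int(n * subset_fraction)), replace=False)
    tree = build_median_tree(points, subset)
    leaves = []
    collect_leaves(tree, leaves)
    centers = np.array([points[leaf["idx"]].mean(axis=0) for leaf in leaves])
    for i in np.setdiff1d(np.arange(n), subset):
        nearest = int(np.argmin(((centers - points[i]) ** 2).sum(axis=1)))
        leaves[nearest]["idx"].append(int(i))
    return tree, leaves

points = np.random.default_rng(1).standard_normal((100, 3))
tree, leaves = stochastic_build(points)
```

The expensive upper levels only ever see the subset, which is the source of the bandwidth savings; every primitive still ends up in exactly one leaf cluster for the second-pass build.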
  • Item
    Parallel Transformation of Bounding Volume Hierarchies into Oriented Bounding Box Trees
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Vitsas, Nick; Evangelou, Iordanis; Papaioannou, Georgios; Gkaravelis, Anastasios; Myszkowski, Karol; Niessner, Matthias
Oriented bounding box (OBB) hierarchies can be used instead of hierarchies based on axis-aligned bounding boxes (AABB), providing tighter fitting to the underlying geometric structures and resulting in improved interference tests, such as ray-geometry intersections. In this paper, we present a method for the fast, parallel transformation of an existing bounding volume hierarchy (BVH), based on AABBs, into a hierarchy based on oriented bounding boxes. To this end, we parallelise a high-quality OBB extraction algorithm from the literature to operate as a standalone OBB estimator and further extend it to efficiently build an OBB hierarchy in a bottom-up manner. This agglomerative approach allows for fast parallel execution and the formation of arbitrary, high-quality OBBs in bounding volume hierarchies. The method is fully implemented on the GPU and extensively evaluated with ray intersections.
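A common simple baseline for OBB extraction fits the box axes from a PCA of the point set; a minimal sketch of that baseline (illustrative only, not the higher-quality extraction algorithm the paper parallelises):

```python
import numpy as np

def pca_obb(points):
    """Oriented bounding box from PCA axes: eigenvectors of the
    covariance give the box frame; extents come from projecting the
    points into that frame."""
    c = points.mean(axis=0)
    _, axes = np.linalg.eigh(np.cov((points - c).T))  # columns: box axes
    local = (points - c) @ axes                       # box-frame coords
    lo, hi = local.min(axis=0), local.max(axis=0)
    center = c + axes @ ((lo + hi) / 2.0)
    half_extents = (hi - lo) / 2.0
    return center, axes, half_extents

# corners of a 2 x 1 x 0.5 box, rotated 30 degrees about z
corners = np.array([[sx, sy, sz] for sx in (-1.0, 1.0)
                    for sy in (-0.5, 0.5) for sz in (-0.25, 0.25)])
t = np.radians(30.0)
R = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0, 0.0, 1.0]])
center, axes, half = pca_obb(corners @ R.T)
```

PCA fitting can produce loose boxes on skewed point distributions, which is exactly why higher-quality extraction algorithms exist and why parallelising one of them is worthwhile.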
  • Item
    Makeup Extraction of 3D Representation via Illumination-Aware Image Decomposition
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Yang, Xingchao; Taketomi, Takafumi; Kanamori, Yoshihiro; Myszkowski, Karol; Niessner, Matthias
    Facial makeup enriches the beauty of not only real humans but also virtual characters; therefore, makeup for 3D facial models is highly in demand in productions. However, painting directly on 3D faces and capturing real-world makeup are costly, and extracting makeup from 2D images often struggles with shading effects and occlusions. This paper presents the first method for extracting makeup for 3D facial models from a single makeup portrait. Our method consists of the following three steps. First, we exploit the strong prior of 3D morphable models via regression-based inverse rendering to extract coarse materials such as geometry and diffuse/specular albedos that are represented in the UV space. Second, we refine the coarse materials, which may have missing pixels due to occlusions. We apply inpainting and optimization. Finally, we extract the bare skin, makeup, and an alpha matte from the diffuse albedo. Our method offers various applications for not only 3D facial models but also 2D portrait images. The extracted makeup is well-aligned in the UV space, from which we build a large-scale makeup dataset and a parametric makeup model for 3D faces. Our disentangled materials also yield robust makeup transfer and illumination-aware makeup interpolation/removal without a reference image.
  • Item
    Face Editing Using Part-Based Optimization of the Latent Space
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Aliari, Mohammad Amin; Beauchamp, Andre; Popa, Tiberiu; Paquette, Eric; Myszkowski, Karol; Niessner, Matthias
We propose an approach for interactive 3D face editing based on deep generative models. Most of the current face modeling methods rely on linear methods and cannot express complex and non-linear deformations. In contrast to 3D morphable face models based on Principal Component Analysis (PCA), we introduce a novel architecture based on variational autoencoders. Our architecture has multiple encoders (one for each part of the face, such as the nose and mouth) which feed a single decoder. As a result, each sub-vector of the latent vector represents one part. We train our model with a novel loss function that further disentangles the space based on different parts of the face. The output of the network is a whole 3D face. Hence, unlike part-based PCA methods, our model learns to merge the parts intrinsically and does not require an additional merging process. To achieve interactive face modeling, we optimize for the latent variables given vertex positional constraints provided by a user. To avoid unwanted global changes elsewhere on the face, we only optimize the subset of the latent vector that corresponds to the part of the face being modified. Our editing optimization converges in less than a second. Our results show that the proposed approach supports a broader range of editing constraints and generates more realistic 3D faces.
  • Item
    What's in a Decade? Transforming Faces Through Time
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Chen, Eric Ming; Sun, Jin; Khandelwal, Apoorv; Lischinski, Dani; Snavely, Noah; Averbuch-Elor, Hadar; Myszkowski, Karol; Niessner, Matthias
How can one visually characterize photographs of people over time? In this work, we describe the Faces Through Time dataset, which contains over a thousand portrait images per decade from the 1880s to the present day. Using our new dataset, we devise a framework for resynthesizing portrait images across time, imagining how a portrait taken during a particular decade might have looked had it been taken in other decades. Our framework optimizes a family of per-decade generators that reveal subtle changes that differentiate decades-such as different hairstyles or makeup-while maintaining the identity of the input portrait. Experiments show that our method can more effectively resynthesize portraits across time compared to state-of-the-art image-to-image translation methods, as well as attribute-based and language-guided portrait editing models. Our code and data will be available at facesthroughtime.github.io.
  • Item
    A Variational Loop Shrinking Analogy for Handle and Tunnel Detection and Reeb Graph Construction on Surfaces
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Weinrauch, Alexander; Mlakar, Daniel; Seidel, Hans-Peter; Steinberger, Markus; Zayer, Rhaleb; Myszkowski, Karol; Niessner, Matthias
The humble loop shrinking property played a central role in the inception of modern topology, but it has been eclipsed by more abstract algebraic formalisms. This is particularly true in the context of detecting relevant non-contractible loops on surfaces, where elaborate homological and/or graph theoretical constructs are favored in algorithmic solutions. In this work, we devise a variational analogy to the loop shrinking property and show that it yields a simple, intuitive, yet powerful solution allowing a streamlined treatment of the problem of handle and tunnel loop detection. Our formalization tracks the evolution of a diffusion front randomly initiated on a single location on the surface. Capitalizing on a diffuse interface representation combined with a set of rules for concurrent front interactions, we develop a dynamic data structure for tracking the evolution on the surface encoded as a sparse matrix which serves for performing both diffusion numerics and loop detection and acts as the workhorse of our fully parallel implementation. The substantiated results suggest our approach outperforms the state of the art and robustly copes with highly detailed geometric models. As a byproduct, our approach can be used to construct Reeb graphs by diffusion, thus avoiding commonly encountered issues when using Morse functions.
  • Item
    In-the-wild Material Appearance Editing using Perceptual Attributes
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Subías, José Daniel; Lagunas, Manuel; Myszkowski, Karol; Niessner, Matthias
Intuitively editing the appearance of materials from a single image is a challenging task given the complexity of the interactions between light and matter, and the ambivalence of human perception. This problem has been traditionally addressed by estimating additional factors of the scene like geometry or illumination, thus solving an inverse rendering problem and subduing the final quality of the results to the quality of these estimations. We present a single-image appearance editing framework that allows us to intuitively modify the material appearance of an object by increasing or decreasing high-level perceptual attributes describing such appearance (e.g., glossy or metallic). Our framework takes as input an in-the-wild image of a single object, where geometry, material, and illumination are not controlled, and inverse rendering is not required. We rely on generative models and devise a novel architecture with Selective Transfer Unit (STU) cells that allow us to preserve the high-frequency details from the input image in the edited one. To train our framework we leverage a dataset with pairs of synthetic images rendered with physically-based algorithms, and the corresponding crowd-sourced ratings of high-level perceptual attributes. We show that our material editing framework outperforms the state of the art, and showcase its applicability on synthetic images, in-the-wild real-world photographs, and video sequences.
  • Item
    Evolving Guide Subdivision
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Karciauskas, Kestutis; Peters, Jorg; Myszkowski, Karol; Niessner, Matthias
To overcome the well-known shape deficiencies of bi-cubic subdivision surfaces, Evolving Guide subdivision (EG subdivision) generalizes C2 bi-quartic (bi-4) splines that approximate a sequence of piecewise polynomial surface pieces near extraordinary points. Unlike guided subdivision, which achieves good shape by following a guide surface in a two-stage, geometry-dependent process, EG subdivision is defined by five new explicit subdivision rules. While formally only C1 at extraordinary points, EG subdivision applied to an obstacle course of inputs generates surfaces without the oscillations and pinched highlight lines typical for Catmull-Clark subdivision. EG subdivision surfaces join C2 with bi-3 surface pieces obtained by interpreting regular sub-nets as bi-cubic tensor-product splines and C2 with adjacent EG surfaces. The EG subdivision control net surrounding an extraordinary node can have the same structure as Catmull-Clark subdivision: two rings of 4-sided facets around each extraordinary node, so that extraordinary nodes are separated by at least one regular node.
  • Item
    Preserving the Autocovariance of Texture Tilings Using Importance Sampling
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Lutz, Nicolas; Sauvage, Basile; Dischler, Jean-Michel; Myszkowski, Karol; Niessner, Matthias
By-example aperiodic tilings are popular texture synthesis techniques that allow a fast, on-the-fly generation of unbounded and non-periodic textures with an appearance matching an arbitrary input sample called the ''exemplar''. But by relying on uniform random sampling, these algorithms fail to preserve the autocovariance function, resulting in correlations that do not match the ones in the exemplar. The output can then be perceived as excessively random. In this work, we present a new method which can well preserve the autocovariance function of the exemplar. It consists of fetching contents with an importance sampler taking the explicit autocovariance function as the probability density function (pdf) of the sampler. Our method can be controlled to increase or decrease the randomness of the texture. Besides significantly improving synthesis quality for classes of textures characterized by pronounced autocovariance functions, we moreover propose a real-time tiling and blending scheme that permits the generation of high-quality textures faster than former algorithms with minimal downsides by reducing the number of texture fetches.
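The core idea, using the exemplar's autocovariance as the sampling pdf for content offsets, can be sketched with an FFT-based autocovariance and categorical sampling (a generic sketch of that principle, not the authors' real-time tiling and blending scheme):

```python
import numpy as np

def autocovariance_pdf(exemplar):
    """Circular autocovariance of a grayscale exemplar via FFT
    (Wiener-Khinchin), clamped at zero and normalized into a
    probability density over content offsets."""
    x = exemplar - exemplar.mean()
    f = np.fft.fft2(x)
    acov = np.real(np.fft.ifft2(f * np.conj(f)))
    pdf = np.maximum(acov, 0.0)
    return pdf / pdf.sum()

def sample_offsets(pdf, count, seed=0):
    """Draw content offsets with probability proportional to the pdf,
    instead of uniformly at random."""
    rng = np.random.default_rng(seed)
    flat = rng.choice(pdf.size, size=count, p=pdf.ravel())
    return np.unravel_index(flat, pdf.shape)

exemplar = np.random.default_rng(2).standard_normal((16, 16))
pdf = autocovariance_pdf(exemplar)
rows, cols = sample_offsets(pdf, 100)
```

Because highly correlated offsets are drawn more often, the synthesized tiling inherits the exemplar's correlation structure rather than the flat correlations of uniform sampling.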
  • Item
    Variational Pose Prediction with Dynamic Sample Selection from Sparse Tracking Signals
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Milef, Nicholas; Sueda, Shinjiro; Kalantari, Nima Khademi; Myszkowski, Karol; Niessner, Matthias
    We propose a learning-based approach for full-body pose reconstruction from extremely sparse upper body tracking data, obtained from a virtual reality (VR) device. We leverage a conditional variational autoencoder with gated recurrent units to synthesize plausible and temporally coherent motions from 4-point tracking (head, hands, and waist positions and orientations). To avoid synthesizing implausible poses, we propose a novel sample selection and interpolation strategy along with an anomaly detection algorithm. Specifically, we monitor the quality of our generated poses using the anomaly detection algorithm and smoothly transition to better samples when the quality falls below a statistically defined threshold. Moreover, we demonstrate that our sample selection and interpolation method can be used for other applications, such as target hitting and collision avoidance, where the generated motions should adhere to the constraints of the virtual environment. Our system is lightweight, operates in real-time, and is able to produce temporally coherent and realistic motions.
  • Item
    Scene-Aware 3D Multi-Human Motion Capture from a Single Camera
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Luvizon, Diogo C.; Habermann, Marc; Golyanik, Vladislav; Kortylewski, Adam; Theobalt, Christian; Myszkowski, Karol; Niessner, Matthias
In this work, we consider the problem of estimating the 3D position of multiple humans in a scene as well as their body shape and articulation from a single RGB video recorded with a static camera. In contrast to expensive marker-based or multi-view systems, our lightweight setup is ideal for private users as it enables an affordable 3D motion capture that is easy to install and does not require expert knowledge. To deal with this challenging setting, we leverage recent advances in computer vision using large-scale pre-trained models for a variety of modalities, including 2D body joints, joint angles, normalized disparity maps, and human segmentation masks. Thus, we introduce the first non-linear optimization-based approach that jointly solves for the 3D position of each human, their articulated pose, their individual shapes as well as the scale of the scene. In particular, we estimate the scene depth and person scale from normalized disparity predictions using the 2D body joints and joint angles. Given the per-frame scene depth, we reconstruct a point-cloud of the static scene in 3D space. Finally, given the per-frame 3D estimates of the humans and scene point-cloud, we perform a space-time coherent optimization over the video to ensure temporal, spatial and physical plausibility. We evaluate our method on established multi-person 3D human pose benchmarks where we consistently outperform previous methods, and we qualitatively demonstrate that our method is robust to in-the-wild conditions including challenging scenes with people of different sizes. Code: https://github.com/dluvizon/scene-aware-3d-multi-human
  • Item
    Generating Texture for 3D Human Avatar from a Single Image using Sampling and Refinement Networks
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Cha, Sihun; Seo, Kwanggyoon; Ashtari, Amirsaman; Noh, Junyong; Myszkowski, Karol; Niessner, Matthias
    There has been significant progress in generating an animatable 3D human avatar from a single image. However, recovering texture for the 3D human avatar from a single image has been relatively less addressed. Because the generated 3D human avatar reveals the occluded texture of the given image as it moves, it is critical to synthesize the occluded texture pattern that is unseen from the source image. To generate a plausible texture map for 3D human avatars, the occluded texture pattern needs to be synthesized with respect to the visible texture from the given image. Moreover, the generated texture should align with the surface of the target 3D mesh. In this paper, we propose a texture synthesis method for a 3D human avatar that incorporates geometry information. The proposed method consists of two convolutional networks for the sampling and refining process. The sampler network fills in the occluded regions of the source image and aligns the texture with the surface of the target 3D mesh using the geometry information. The sampled texture is further refined and adjusted by the refiner network. To maintain the clear details in the given image, both the sampled and refined textures are blended to produce the final texture map. To effectively guide the sampler network to achieve its goal, we designed a curriculum learning scheme that starts from a simple sampling task and gradually progresses to the task where the alignment needs to be considered. We conducted experiments to show that our method outperforms previous methods qualitatively and quantitatively.
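The final blending step can be sketched as a per-texel mask blend: keep the sharp sampled texture where the source image was visible and fall back to the refined texture elsewhere. This is an illustrative simplification, not the paper's exact blending; the function name and mask semantics are assumptions.

```python
import numpy as np

def blend_textures(sampled, refined, visibility):
    """Per-texel blend of two H x W x 3 texture maps using a
    H x W visibility mask in [0, 1]: 1 keeps the sampled texel,
    0 keeps the refined texel."""
    v = np.asarray(visibility, float)[..., None]  # broadcast over channels
    return v * np.asarray(sampled, float) + (1.0 - v) * np.asarray(refined, float)
```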
  • Item
    Directionality-Aware Design of Embroidery Patterns
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Zhenyuan, Liu; Piovarci, Michal; Hafner, Christian; Charrondière, Raphaël; Bickel, Bernd; Myszkowski, Karol; Niessner, Matthias
    Embroidery is a long-standing and high-quality approach to making logos and images on textiles. Nowadays, it can also be performed via automated machines that weave threads with high spatial accuracy. A characteristic feature of the appearance of the threads is a high degree of anisotropy. The anisotropic behavior is caused by depositing thin but long strings of thread. As a result, the stitched patterns convey both color and direction. Artists leverage this anisotropic behavior to enhance pure color images with textures, illusions of motion, or depth cues. However, designing colorful embroidery patterns with prescribed directionality is a challenging task, one usually requiring an expert designer. In this work, we propose an interactive algorithm that generates machine-fabricable embroidery patterns from multi-chromatic images equipped with user-specified directionality fields. We cast the problem of finding a stitching pattern in terms of vector field theory. To find a suitable stitching pattern, we extract sources and sinks from the divergence field of the vector field extracted from the input and use them to trace streamlines. We further optimize the streamlines to guarantee a smooth and connected stitching pattern. The generated patterns approximate the color distribution constrained by the directionality field. To allow for further artistic control, the trade-off between color match and directionality match can be interactively explored via an intuitive slider. We showcase our approach by fabricating several embroidery paths.
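The divergence-and-streamline machinery described above can be sketched on a discrete grid: a finite-difference divergence (whose extrema locate candidate sources and sinks) and a forward-Euler streamline tracer. This is a minimal illustration, not the paper's optimized tracer; names and step sizes are assumptions.

```python
import numpy as np

def divergence(vx, vy):
    """Finite-difference divergence of a 2D vector field sampled on a grid.
    Sources and sinks appear as positive and negative extrema."""
    return np.gradient(vx, axis=1) + np.gradient(vy, axis=0)

def trace_streamline(vx, vy, seed, step=0.5, n_steps=50):
    """Forward-Euler streamline tracing from a seed point (x, y),
    following the normalized field until leaving the grid or
    hitting a near-zero vector."""
    h, w = vx.shape
    pts = [np.array(seed, float)]
    for _ in range(n_steps):
        x, y = pts[-1]
        i, j = int(round(y)), int(round(x))  # nearest grid sample
        if not (0 <= i < h and 0 <= j < w):
            break
        d = np.array([vx[i, j], vy[i, j]])
        n = np.linalg.norm(d)
        if n < 1e-8:
            break
        pts.append(pts[-1] + step * d / n)
    return np.array(pts)
```

A real stitching pattern would additionally connect and smooth the traced streamlines, as the abstract notes.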
  • Item
    Non-linear Rough 2D Animation using Transient Embeddings
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Even, Melvin; Bénard, Pierre; Barla, Pascal; Myszkowski, Karol; Niessner, Matthias
    Traditional 2D animation requires time and dedication since tens of thousands of frames need to be drawn by hand for a typical production. Many computer-assisted methods have been proposed to automate the generation of inbetween frames from a set of clean line drawings, but they are all limited by a rigid workflow and a lack of artistic control, which is for the most part due to the one-to-one stroke matching and interpolation problems they attempt to solve. In this work, we take a novel view on those problems by focusing on an earlier phase of the animation process that uses rough drawings (i.e., sketches). Our key idea is to recast the matching and interpolation problems so that they apply to transient embeddings, which are groups of strokes that only exist for a few keyframes. A transient embedding carries strokes between keyframes both forward and backward in time through a sequence of transformed lattices. Forward and backward strokes are then cross-faded using their thickness to yield rough inbetweens. With our approach, complex topological changes may be introduced while preserving visual motion continuity. As demonstrated on state-of-the-art 2D animation exercises, our system provides unprecedented artistic control through the non-linear exploration of movements and dynamics in real-time.
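Two ingredients of the pipeline above lend themselves to a compact sketch: carrying stroke points through a deformed lattice cell via bilinear interpolation, and cross-fading forward and backward copies through their thickness. This is an illustrative single-cell version, not the paper's method; all names are hypothetical.

```python
import numpy as np

def warp_by_lattice(points_uv, corners):
    """Bilinear warp: map stroke points given in (u, v) coordinates of
    one lattice cell through its four (possibly displaced) corners.
    corners[r][c] is the 2D position of the corner at v=r, u=c."""
    out = []
    for u, v in points_uv:
        p = ((1 - u) * (1 - v) * corners[0][0] + u * (1 - v) * corners[0][1]
             + (1 - u) * v * corners[1][0] + u * v * corners[1][1])
        out.append(p)
    return np.array(out)

def crossfade_thickness(fwd_thick, bwd_thick, t):
    """Fade the forward-carried stroke out and the backward-carried
    stroke in as the inbetween parameter t goes from 0 to 1."""
    return (1.0 - t) * fwd_thick, t * bwd_thick
```

With undisplaced corners the warp is the identity; displacing a corner deforms every stroke point in the cell accordingly.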
  • Item
    Interactive Design of 2D Car Profiles with Aerodynamic Feedback
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Rosset, Nicolas; Cordonnier, Guillaume; Duvigneau, Régis; Bousseau, Adrien; Myszkowski, Karol; Niessner, Matthias
    The design of car shapes requires a delicate balance between aesthetics and performance. While fluid simulation provides the means to evaluate the aerodynamic performance of a given shape, its computational cost hinders its usage during the early explorative phases of design, when aesthetics are decided upon. We present an interactive system to assist designers in creating aerodynamic car profiles. Our system relies on a neural surrogate model to predict fluid flow around car shapes, providing fluid visualization and shape optimization feedback to designers as soon as they sketch a car profile. Compared to prior work that focused on time-averaged fluid flows, we describe how to train our model on instantaneous, synchronized observations extracted from multiple pre-computed simulations, such that we can visualize and optimize for dynamic flow features, such as vortices. Furthermore, we architected our model to support gradient-based shape optimization within a learned latent space of car profiles. In addition to regularizing the optimization process, this latent space and an associated encoder-decoder allow us to input and output car profiles in bitmap form, without any explicit parameterization of the car boundary. Finally, we designed our model to support pointwise queries of fluid properties around car shapes, allowing us to adapt computational cost to application needs. As an illustration, we only query our model along streamlines for flow visualization, we query it in the vicinity of the car for drag optimization, and we query it behind the car for vortex attenuation.
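Gradient-based optimization in a learned latent space reduces, at its core, to gradient descent on a differentiable surrogate objective. The sketch below uses a toy quadratic "drag" stand-in for the neural surrogate; the function names, learning rate, and the quadratic objective are all assumptions for illustration, not the paper's model.

```python
import numpy as np

def optimize_latent(z0, grad_fn, lr=0.1, steps=200):
    """Plain gradient descent on a differentiable drag surrogate,
    performed in the learned latent space of car profiles."""
    z = np.array(z0, float)
    for _ in range(steps):
        z = z - lr * grad_fn(z)
    return z

# Toy stand-in for the surrogate: quadratic "drag" minimized at z_star.
z_star = np.array([1.0, -2.0])
drag = lambda z: float(np.sum((z - z_star) ** 2))
drag_grad = lambda z: 2.0 * (z - z_star)
```

In the actual system the gradient would come from backpropagation through the surrogate network, and the optimized latent code would be decoded back to a car-profile bitmap.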