41-Issue 7

Pacific Graphics 2022 - Symposium Proceedings
Kyoto, Japan | October 5 – 8, 2022

(for Short Papers, Posters, and Work-in-Progress Papers see PG 2022 - Short Papers, Posters, and Work-in-Progress Papers)
Curves and Meshes
Out-of-core Extraction of Curve Skeletons for Large Volumetric Models
Yiyao Chu and Wencheng Wang
Point-augmented Bi-cubic Subdivision Surfaces
Kestutis Karciauskas and Jorg Peters
SIGDT: 2D Curve Reconstruction
Diana Marin, Stefan Ohrhallinger, and Michael Wimmer
MeshFormer: High-resolution Mesh Segmentation with Graph Transformer
Yuan Li, Xiangyang He, Yankai Jiang, Huan Liu, Yubo Tao, and Lin Hai
WTFM Layer: An Effective Map Extractor for Unsupervised Shape Correspondence
Shengjun Liu, Haojun Xu, Dong-Ming Yan, Ling Hu, Xinru Liu, and Qinsong Li
Point Cloud Processing and Dataset Generation
MINERVAS: Massive INterior EnviRonments VirtuAl Synthesis
Haocheng Ren, Hao Zhang, Jia Zheng, Jiaxiang Zheng, Rui Tang, Yuchi Huo, Hujun Bao, and Rui Wang
Exploring Contextual Relationships in 3D Cloud Points by Semantic Knowledge Mining
Lianggangxu Chen, Jiale Lu, Yiqing Cai, Changbo Wang, and Gaoqi He
UTOPIC: Uncertainty-aware Overlap Prediction Network for Partial Point Cloud Registration
Zhilei Chen, Honghua Chen, Lina Gong, Xuefeng Yan, Jun Wang, Yanwen Guo, Jing Qin, and Mingqiang Wei
Local Offset Point Cloud Transformer Based Implicit Surface Reconstruction
Yan Xin Yang and San Guo Zhang
MODNet: Multi-offset Point Cloud Denoising Network Customized for Multi-scale Patches
Anyi Huang, Qian Xie, Zhoutao Wang, Dening Lu, Mingqiang Wei, and Jun Wang
Point Cloud Generation
Resolution-switchable 3D Semantic Scene Completion
Shoutong Luo, Zhengxing Sun, Yunhan Sun, and Yi Wang
DiffusionPointLabel: Annotated Point Cloud Generation with Diffusion Model
Tingting Li, Yunfei Fu, Xiaoguang Han, Hui Liang, Jian Jun Zhang, and Jian Chang
USTNet: Unsupervised Shape-to-Shape Translation via Disentangled Representations
Haoran Wang, Jiaxin Li, Alexandru Telea, Jirí Kosinka, and Zizhao Wu
SPCNet: Stepwise Point Cloud Completion Network
Fei Hu, Honghua Chen, Xuequan Lu, Zhe Zhu, Jun Wang, Weiming Wang, Fu Lee Wang, and Mingqiang Wei
Video
StylePortraitVideo: Editing Portrait Videos with Expression Optimization
Kwanggyoon Seo, Seoung Wug Oh, Jingwan Lu, Joon-Young Lee, Seonghyeon Kim, and Junyong Noh
Real-Time Video Deblurring via Lightweight Motion Compensation
Hyeongseok Son, Junyong Lee, Sunghyun Cho, and Seungyong Lee
A Drone Video Clip Dataset and its Applications in Automated Cinematography
Amirsaman Ashtari, Raehyuk Jung, Mingxiao Li, and Junyong Noh
Fast Geometric Computation
Occluder Generation for Buildings in Digital Games
Kui Wu, Xu He, Zherong Pan, and Xifeng Gao
Efficient Direct Isosurface Rasterization of Scalar Volumes
Adrian Kreskowski, Gareth Rendle, and Bernd Froehlich
Fine-Grained Memory Profiling of GPGPU Kernels
Max von Buelow, Stefan Guthe, and Dieter W. Fellner
Rendering - Sampling
Classifier Guided Temporal Supersampling for Real-time Rendering
Yu-Xiao Guo, Guojun Chen, Yue Dong, and Xin Tong
Specular Manifold Bisection Sampling for Caustics Rendering
Jia-Wun Jhang and Chun-Fa Chang
Multirate Shading with Piecewise Interpolatory Approximation
Yiwei Hu, Yazhen Yuan, Rui Wang, Zhuo Yang, and Hujun Bao
Rendering - Modeling Nature and Material
Real-time Deep Radiance Reconstruction from Imperfect Caches
Tao Huang, Yadong Song, Jie Guo, Chengzhi Tao, Zijing Zong, Xihao Fu, Hongshan Li, and Yanwen Guo
Real-Time Rendering of Eclipses without Incorporation of Atmospheric Effects
Simon Schneegans, Jonas Gilg, Volker Ahlers, and Andreas Gerndt
A Wide Spectral Range Sky Radiance Model
Petr Vévoda, Tom Bashford-Rogers, Monika Kolářová, and Alexander Wilkie
Targeting Shape and Material in Lighting Design
Baran Usta, Sylvia Pont, and Elmar Eisemann
Image Enhancement
Ref-ZSSR: Zero-Shot Single Image Superresolution with Reference Image
Xianjun Han, Xue Wang, Huabin Wang, Xuejun Li, and Hongyu Yang
Learning Multi-Scale Deep Image Prior for High-Quality Unsupervised Image Denoising
Hao Jiang, Qing Zhang, Yongwei Nie, Lei Zhu, and Wei-Shi Zheng
Contrastive Semantic-Guided Image Smoothing Network
Jie Wang, Yongzhen Wang, Yidan Feng, Lina Gong, Xuefeng Yan, Haoran Xie, Fu Lee Wang, and Mingqiang Wei
Image Detection and Understanding
Effective Eyebrow Matting with Domain Adaptation
Luyuan Wang, Hanyuan Zhang, Qinjie Xiao, Hao Xu, Chunhua Shen, and Xiaogang Jin
Fine-Grained Scene Graph Generation with Overlap Region and Geometrical Center
Yong Qiang Zhao, Zhi Jin, Hai Yan Zhao, Feng Zhang, Zheng Wei Tao, Cheng Feng Dou, Xin Hai Xu, and Dong Hong Liu
SO(3)-Pose: SO(3)-Equivariance Learning for 6D Object Pose Estimation
Haoran Pan, Jun Zhou, Yuanpeng Liu, Xuequan Lu, Weiming Wang, Xuefeng Yan, and Mingqiang Wei
Joint Hand and Object Pose Estimation from a Single RGB Image using High-level 2D Constraints
Hao-Xuan Song, Tai-Jiang Mu, and Ralph R. Martin
Image Synthesis
User-Controllable Latent Transformer for StyleGAN Image Layout Editing
Yuki Endo
EL-GAN: Edge-Enhanced Generative Adversarial Network for Layout-to-Image Generation
Lin Gao, Lei Wu, and Xiangxu Meng
Abstract Painting Synthesis via Decremental optimization
Ming Yan, Yuanyuan Pu, Pengzheng Zhao, Dan Xu, Hao Wu, Qiuxia Yang, and Ruxin Wang
Generative Deformable Radiance Fields for Disentangled Image Synthesis of Topology-Varying Objects
Ziyu Wang, Yu Deng, Jiaolong Yang, Jingyi Yu, and Xin Tong
Image Restoration
Semi-MoreGAN: Semi-supervised Generative Adversarial Network for Mixture of Rain Removal
Yiyang Shen, Yongzhen Wang, Mingqiang Wei, Honghua Chen, Haoran Xie, Gary Cheng, and Fu Lee Wang
Depth-Aware Shadow Removal
Yanping Fu, Zhenyu Gai, Haifeng Zhao, Shaojie Zhang, Ying Shan, Yang Wu, and Jin Tang
TogetherNet: Bridging Image Restoration and Object Detection Together via Dynamic Enhancement Learning
Yongzhen Wang, Xuefeng Yan, Kaiwen Zhang, Lina Gong, Haoran Xie, Fu Lee Wang, and Mingqiang Wei
Stylization and Texture
Color-mapped Noise Vector Fields for Generating Procedural Micro-patterns
Charline Grenier, Basile Sauvage, Jean-Michel Dischler, and Sylvain Thery
Pixel Art Adaptation for Handicraft Fabrication
Yuki Igarashi and Takeo Igarashi
Shape-Guided Mixed Metro Map Layout
Tobias Batik, Soeren Terziadis, Yu-Shuen Wang, Martin Nöllenburg, and Hsiang-Yun Wu
Efficient Texture Parameterization Driven by Perceptual-Loss-on-Screen
Haoran Sun, Shiyi Wang, Wenhai Wu, Yao Jin, Hujun Bao, and Jin Huang
MoMaS: Mold Manifold Simulation for Real-time Procedural Texturing
Filippo Maggioli, Riccardo Marin, Simone Melzi, and Emanuele Rodolà
Physics Simulation and Optimization
Large-Scale Worst-Case Topology Optimization
Di Zhang, Xiaoya Zhai, Xiao-Ming Fu, Heming Wang, and Ligang Liu
Spatio-temporal Keyframe Control of Traffic Simulation using Coarse-to-Fine Optimization
Yi Han, He Wang, and Xiaogang Jin
NSTO: Neural Synthesizing Topology Optimization for Modulated Structure Generation
Shengze Zhong, Parinya Punpongsanon, Daisuke Iwai, and Kosuke Sato
Efficient and Stable Simulation of Inextensible Cosserat Rods by a Compact Representation
Chongyao Zhao, Jinkeng Lin, Tianyu Wang, Hujun Bao, and Jin Huang
Perception and Visualization
Learning 3D Shape Aesthetics Globally and Locally
Minchan Chen and Manfred Lau
Eye-Tracking-Based Prediction of User Experience in VR Locomotion Using Machine Learning
Hong Gao and Enkelejda Kasneci
Digital Human
Implicit Neural Deformation for Sparse-View Face Reconstruction
Moran Li, Haibin Huang, Yi Zheng, Mengtian Li, Nong Sang, and Chongyang Ma
Learning Dynamic 3D Geometry and Texture for Video Face Swapping
Christopher Otto, Jacek Naruniec, Leonhard Helminger, Thomas Etterlin, Graziana Mignone, Prashanth Chandran, Gaspard Zoss, Christopher Schroers, Markus Gross, Paulo Gotardo, Derek Bradley, and Romann Weber
BareSkinNet: De-makeup and De-lighting via 3D Face Reconstruction
Xingchao Yang and Takafumi Taketomi
ShadowPatch: Shadow Based Segmentation for Reliable Depth Discontinuities in Photometric Stereo
Moritz Heep and Eduard Zell

BibTeX (41-Issue 7)
                
@article{10.1111:cgf.14652,
  journal = {Computer Graphics Forum},
  title = {{Out-of-core Extraction of Curve Skeletons for Large Volumetric Models}},
  author = {Chu, Yiyao and Wang, Wencheng},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14652}
}

@article{10.1111:cgf.14708,
  journal = {Computer Graphics Forum},
  title = {{Pacific Graphics 2022 - CGF 41-7: Frontmatter}},
  author = {Umetani, Nobuyuki and Wojtan, Chris and Vouga, Etienne},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14708}
}

@article{10.1111:cgf.14653,
  journal = {Computer Graphics Forum},
  title = {{Point-augmented Bi-cubic Subdivision Surfaces}},
  author = {Karciauskas, Kestutis and Peters, Jorg},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14653}
}

@article{10.1111:cgf.14654,
  journal = {Computer Graphics Forum},
  title = {{SIGDT: 2D Curve Reconstruction}},
  author = {Marin, Diana and Ohrhallinger, Stefan and Wimmer, Michael},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14654}
}

@article{10.1111:cgf.14655,
  journal = {Computer Graphics Forum},
  title = {{MeshFormer: High-resolution Mesh Segmentation with Graph Transformer}},
  author = {Li, Yuan and He, Xiangyang and Jiang, Yankai and Liu, Huan and Tao, Yubo and Hai, Lin},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14655}
}

@article{10.1111:cgf.14656,
  journal = {Computer Graphics Forum},
  title = {{WTFM Layer: An Effective Map Extractor for Unsupervised Shape Correspondence}},
  author = {Liu, Shengjun and Xu, Haojun and Yan, Dong-Ming and Hu, Ling and Liu, Xinru and Li, Qinsong},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14656}
}

@article{10.1111:cgf.14657,
  journal = {Computer Graphics Forum},
  title = {{MINERVAS: Massive INterior EnviRonments VirtuAl Synthesis}},
  author = {Ren, Haocheng and Zhang, Hao and Zheng, Jia and Zheng, Jiaxiang and Tang, Rui and Huo, Yuchi and Bao, Hujun and Wang, Rui},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14657}
}

@article{10.1111:cgf.14658,
  journal = {Computer Graphics Forum},
  title = {{Exploring Contextual Relationships in 3D Cloud Points by Semantic Knowledge Mining}},
  author = {Chen, Lianggangxu and Lu, Jiale and Cai, Yiqing and Wang, Changbo and He, Gaoqi},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14658}
}

@article{10.1111:cgf.14659,
  journal = {Computer Graphics Forum},
  title = {{UTOPIC: Uncertainty-aware Overlap Prediction Network for Partial Point Cloud Registration}},
  author = {Chen, Zhilei and Chen, Honghua and Gong, Lina and Yan, Xuefeng and Wang, Jun and Guo, Yanwen and Qin, Jing and Wei, Mingqiang},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14659}
}

@article{10.1111:cgf.14661,
  journal = {Computer Graphics Forum},
  title = {{MODNet: Multi-offset Point Cloud Denoising Network Customized for Multi-scale Patches}},
  author = {Huang, Anyi and Xie, Qian and Wang, Zhoutao and Lu, Dening and Wei, Mingqiang and Wang, Jun},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14661}
}

@article{10.1111:cgf.14660,
  journal = {Computer Graphics Forum},
  title = {{Local Offset Point Cloud Transformer Based Implicit Surface Reconstruction}},
  author = {Yang, Yan Xin and Zhang, San Guo},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14660}
}

@article{10.1111:cgf.14662,
  journal = {Computer Graphics Forum},
  title = {{Resolution-switchable 3D Semantic Scene Completion}},
  author = {Luo, Shoutong and Sun, Zhengxing and Sun, Yunhan and Wang, Yi},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14662}
}

@article{10.1111:cgf.14663,
  journal = {Computer Graphics Forum},
  title = {{DiffusionPointLabel: Annotated Point Cloud Generation with Diffusion Model}},
  author = {Li, Tingting and Fu, Yunfei and Han, Xiaoguang and Liang, Hui and Zhang, Jian Jun and Chang, Jian},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14663}
}

@article{10.1111:cgf.14664,
  journal = {Computer Graphics Forum},
  title = {{USTNet: Unsupervised Shape-to-Shape Translation via Disentangled Representations}},
  author = {Wang, Haoran and Li, Jiaxin and Telea, Alexandru and Kosinka, Jirí and Wu, Zizhao},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14664}
}

@article{10.1111:cgf.14665,
  journal = {Computer Graphics Forum},
  title = {{SPCNet: Stepwise Point Cloud Completion Network}},
  author = {Hu, Fei and Chen, Honghua and Lu, Xuequan and Zhu, Zhe and Wang, Jun and Wang, Weiming and Wang, Fu Lee and Wei, Mingqiang},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14665}
}

@article{10.1111:cgf.14666,
  journal = {Computer Graphics Forum},
  title = {{StylePortraitVideo: Editing Portrait Videos with Expression Optimization}},
  author = {Seo, Kwanggyoon and Oh, Seoung Wug and Lu, Jingwan and Lee, Joon-Young and Kim, Seonghyeon and Noh, Junyong},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14666}
}

@article{10.1111:cgf.14667,
  journal = {Computer Graphics Forum},
  title = {{Real-Time Video Deblurring via Lightweight Motion Compensation}},
  author = {Son, Hyeongseok and Lee, Junyong and Cho, Sunghyun and Lee, Seungyong},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14667}
}

@article{10.1111:cgf.14668,
  journal = {Computer Graphics Forum},
  title = {{A Drone Video Clip Dataset and its Applications in Automated Cinematography}},
  author = {Ashtari, Amirsaman and Jung, Raehyuk and Li, Mingxiao and Noh, Junyong},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14668}
}

@article{10.1111:cgf.14669,
  journal = {Computer Graphics Forum},
  title = {{Occluder Generation for Buildings in Digital Games}},
  author = {Wu, Kui and He, Xu and Pan, Zherong and Gao, Xifeng},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14669}
}

@article{10.1111:cgf.14671,
  journal = {Computer Graphics Forum},
  title = {{Fine-Grained Memory Profiling of GPGPU Kernels}},
  author = {Buelow, Max von and Guthe, Stefan and Fellner, Dieter W.},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14671}
}

@article{10.1111:cgf.14670,
  journal = {Computer Graphics Forum},
  title = {{Efficient Direct Isosurface Rasterization of Scalar Volumes}},
  author = {Kreskowski, Adrian and Rendle, Gareth and Froehlich, Bernd},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14670}
}

@article{10.1111:cgf.14672,
  journal = {Computer Graphics Forum},
  title = {{Classifier Guided Temporal Supersampling for Real-time Rendering}},
  author = {Guo, Yu-Xiao and Chen, Guojun and Dong, Yue and Tong, Xin},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14672}
}

@article{10.1111:cgf.14673,
  journal = {Computer Graphics Forum},
  title = {{Specular Manifold Bisection Sampling for Caustics Rendering}},
  author = {Jhang, Jia-Wun and Chang, Chun-Fa},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14673}
}

@article{10.1111:cgf.14674,
  journal = {Computer Graphics Forum},
  title = {{Multirate Shading with Piecewise Interpolatory Approximation}},
  author = {Hu, Yiwei and Yuan, Yazhen and Wang, Rui and Yang, Zhuo and Bao, Hujun},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14674}
}

@article{10.1111:cgf.14675,
  journal = {Computer Graphics Forum},
  title = {{Real-time Deep Radiance Reconstruction from Imperfect Caches}},
  author = {Huang, Tao and Song, Yadong and Guo, Jie and Tao, Chengzhi and Zong, Zijing and Fu, Xihao and Li, Hongshan and Guo, Yanwen},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14675}
}

@article{10.1111:cgf.14676,
  journal = {Computer Graphics Forum},
  title = {{Real-Time Rendering of Eclipses without Incorporation of Atmospheric Effects}},
  author = {Schneegans, Simon and Gilg, Jonas and Ahlers, Volker and Gerndt, Andreas},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14676}
}

@article{10.1111:cgf.14677,
  journal = {Computer Graphics Forum},
  title = {{A Wide Spectral Range Sky Radiance Model}},
  author = {Vévoda, Petr and Bashford-Rogers, Tom and Kolářová, Monika and Wilkie, Alexander},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14677}
}

@article{10.1111:cgf.14678,
  journal = {Computer Graphics Forum},
  title = {{Targeting Shape and Material in Lighting Design}},
  author = {Usta, Baran and Pont, Sylvia and Eisemann, Elmar},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14678}
}

@article{10.1111:cgf.14679,
  journal = {Computer Graphics Forum},
  title = {{Ref-ZSSR: Zero-Shot Single Image Superresolution with Reference Image}},
  author = {Han, Xianjun and Wang, Xue and Wang, Huabin and Li, Xuejun and Yang, Hongyu},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14679}
}

@article{10.1111:cgf.14681,
  journal = {Computer Graphics Forum},
  title = {{Contrastive Semantic-Guided Image Smoothing Network}},
  author = {Wang, Jie and Wang, Yongzhen and Feng, Yidan and Gong, Lina and Yan, Xuefeng and Xie, Haoran and Wang, Fu Lee and Wei, Mingqiang},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14681}
}

@article{10.1111:cgf.14680,
  journal = {Computer Graphics Forum},
  title = {{Learning Multi-Scale Deep Image Prior for High-Quality Unsupervised Image Denoising}},
  author = {Jiang, Hao and Zhang, Qing and Nie, Yongwei and Zhu, Lei and Zheng, Wei-Shi},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14680}
}

@article{10.1111:cgf.14682,
  journal = {Computer Graphics Forum},
  title = {{Effective Eyebrow Matting with Domain Adaptation}},
  author = {Wang, Luyuan and Zhang, Hanyuan and Xiao, Qinjie and Xu, Hao and Shen, Chunhua and Jin, Xiaogang},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14682}
}

@article{10.1111:cgf.14683,
  journal = {Computer Graphics Forum},
  title = {{Fine-Grained Scene Graph Generation with Overlap Region and Geometrical Center}},
  author = {Zhao, Yong Qiang and Jin, Zhi and Zhao, Hai Yan and Zhang, Feng and Tao, Zheng Wei and Dou, Cheng Feng and Xu, Xin Hai and Liu, Dong Hong},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14683}
}

@article{10.1111:cgf.14684,
  journal = {Computer Graphics Forum},
  title = {{SO(3)-Pose: SO(3)-Equivariance Learning for 6D Object Pose Estimation}},
  author = {Pan, Haoran and Zhou, Jun and Liu, Yuanpeng and Lu, Xuequan and Wang, Weiming and Yan, Xuefeng and Wei, Mingqiang},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14684}
}

@article{10.1111:cgf.14685,
  journal = {Computer Graphics Forum},
  title = {{Joint Hand and Object Pose Estimation from a Single RGB Image using High-level 2D Constraints}},
  author = {Song, Hao-Xuan and Mu, Tai-Jiang and Martin, Ralph R.},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14685}
}

@article{10.1111:cgf.14686,
  journal = {Computer Graphics Forum},
  title = {{User-Controllable Latent Transformer for StyleGAN Image Layout Editing}},
  author = {Endo, Yuki},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14686}
}

@article{10.1111:cgf.14687,
  journal = {Computer Graphics Forum},
  title = {{EL-GAN: Edge-Enhanced Generative Adversarial Network for Layout-to-Image Generation}},
  author = {Gao, Lin and Wu, Lei and Meng, Xiangxu},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14687}
}

@article{10.1111:cgf.14688,
  journal = {Computer Graphics Forum},
  title = {{Abstract Painting Synthesis via Decremental optimization}},
  author = {Yan, Ming and Pu, Yuanyuan and Zhao, Pengzheng and Xu, Dan and Wu, Hao and Yang, Qiuxia and Wang, Ruxin},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14688}
}

@article{10.1111:cgf.14690,
  journal = {Computer Graphics Forum},
  title = {{Semi-MoreGAN: Semi-supervised Generative Adversarial Network for Mixture of Rain Removal}},
  author = {Shen, Yiyang and Wang, Yongzhen and Wei, Mingqiang and Chen, Honghua and Xie, Haoran and Cheng, Gary and Wang, Fu Lee},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14690}
}

@article{10.1111:cgf.14689,
  journal = {Computer Graphics Forum},
  title = {{Generative Deformable Radiance Fields for Disentangled Image Synthesis of Topology-Varying Objects}},
  author = {Wang, Ziyu and Deng, Yu and Yang, Jiaolong and Yu, Jingyi and Tong, Xin},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14689}
}

@article{10.1111:cgf.14691,
  journal = {Computer Graphics Forum},
  title = {{Depth-Aware Shadow Removal}},
  author = {Fu, Yanping and Gai, Zhenyu and Zhao, Haifeng and Zhang, Shaojie and Shan, Ying and Wu, Yang and Tang, Jin},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14691}
}

@article{10.1111:cgf.14692,
  journal = {Computer Graphics Forum},
  title = {{TogetherNet: Bridging Image Restoration and Object Detection Together via Dynamic Enhancement Learning}},
  author = {Wang, Yongzhen and Yan, Xuefeng and Zhang, Kaiwen and Gong, Lina and Xie, Haoran and Wang, Fu Lee and Wei, Mingqiang},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14692}
}

@article{10.1111:cgf.14693,
  journal = {Computer Graphics Forum},
  title = {{Color-mapped Noise Vector Fields for Generating Procedural Micro-patterns}},
  author = {Grenier, Charline and Sauvage, Basile and Dischler, Jean-Michel and Thery, Sylvain},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14693}
}

@article{10.1111:cgf.14696,
  journal = {Computer Graphics Forum},
  title = {{Efficient Texture Parameterization Driven by Perceptual-Loss-on-Screen}},
  author = {Sun, Haoran and Wang, Shiyi and Wu, Wenhai and Jin, Yao and Bao, Hujun and Huang, Jin},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14696}
}

@article{10.1111:cgf.14694,
  journal = {Computer Graphics Forum},
  title = {{Pixel Art Adaptation for Handicraft Fabrication}},
  author = {Igarashi, Yuki and Igarashi, Takeo},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14694}
}

@article{10.1111:cgf.14695,
  journal = {Computer Graphics Forum},
  title = {{Shape-Guided Mixed Metro Map Layout}},
  author = {Batik, Tobias and Terziadis, Soeren and Wang, Yu-Shuen and Nöllenburg, Martin and Wu, Hsiang-Yun},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14695}
}

@article{10.1111:cgf.14697,
  journal = {Computer Graphics Forum},
  title = {{MoMaS: Mold Manifold Simulation for Real-time Procedural Texturing}},
  author = {Maggioli, Filippo and Marin, Riccardo and Melzi, Simone and Rodolà, Emanuele},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14697}
}

@article{10.1111:cgf.14698,
  journal = {Computer Graphics Forum},
  title = {{Large-Scale Worst-Case Topology Optimization}},
  author = {Zhang, Di and Zhai, Xiaoya and Fu, Xiao-Ming and Wang, Heming and Liu, Ligang},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14698}
}

@article{10.1111:cgf.14699,
  journal = {Computer Graphics Forum},
  title = {{Spatio-temporal Keyframe Control of Traffic Simulation using Coarse-to-Fine Optimization}},
  author = {Han, Yi and Wang, He and Jin, Xiaogang},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14699}
}

@article{10.1111:cgf.14700,
  journal = {Computer Graphics Forum},
  title = {{NSTO: Neural Synthesizing Topology Optimization for Modulated Structure Generation}},
  author = {Zhong, Shengze and Punpongsanon, Parinya and Iwai, Daisuke and Sato, Kosuke},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14700}
}

@article{10.1111:cgf.14701,
  journal = {Computer Graphics Forum},
  title = {{Efficient and Stable Simulation of Inextensible Cosserat Rods by a Compact Representation}},
  author = {Zhao, Chongyao and Lin, Jinkeng and Wang, Tianyu and Bao, Hujun and Huang, Jin},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14701}
}

@article{10.1111:cgf.14702,
  journal = {Computer Graphics Forum},
  title = {{Learning 3D Shape Aesthetics Globally and Locally}},
  author = {Chen, Minchan and Lau, Manfred},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14702}
}

@article{10.1111:cgf.14703,
  journal = {Computer Graphics Forum},
  title = {{Eye-Tracking-Based Prediction of User Experience in VR Locomotion Using Machine Learning}},
  author = {Gao, Hong and Kasneci, Enkelejda},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14703}
}

@article{10.1111:cgf.14704,
  journal = {Computer Graphics Forum},
  title = {{Implicit Neural Deformation for Sparse-View Face Reconstruction}},
  author = {Li, Moran and Huang, Haibin and Zheng, Yi and Li, Mengtian and Sang, Nong and Ma, Chongyang},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14704}
}

@article{10.1111:cgf.14705,
  journal = {Computer Graphics Forum},
  title = {{Learning Dynamic 3D Geometry and Texture for Video Face Swapping}},
  author = {Otto, Christopher and Naruniec, Jacek and Bradley, Derek and Weber, Romann and Helminger, Leonhard and Etterlin, Thomas and Mignone, Graziana and Chandran, Prashanth and Zoss, Gaspard and Schroers, Christopher and Gross, Markus and Gotardo, Paulo},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14705}
}

@article{10.1111:cgf.14706,
  journal = {Computer Graphics Forum},
  title = {{BareSkinNet: De-makeup and De-lighting via 3D Face Reconstruction}},
  author = {Yang, Xingchao and Taketomi, Takafumi},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14706}
}

@article{10.1111:cgf.14707,
  journal = {Computer Graphics Forum},
  title = {{ShadowPatch: Shadow Based Segmentation for Reliable Depth Discontinuities in Photometric Stereo}},
  author = {Heep, Moritz and Zell, Eduard},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14707}
}


Recent Submissions

  • Out-of-core Extraction of Curve Skeletons for Large Volumetric Models
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Chu, Yiyao; Wang, Wencheng; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Existing methods for skeleton extraction have limitations in terms of the amount of memory available, as the model must fit in random access memory. This makes it challenging to handle out-of-core models. Although applying out-of-core simplification methods can make a model fit in memory, doing so distorts the model surface, causing the skeleton to be off-centered or changing its topological structure. In this paper, we propose an efficient out-of-core method for extracting skeletons from large volumetric models. The method takes a volumetric model as input and first computes an out-of-core distance transform. With the distance transform, we generate a medial mesh to capture the prominent features for skeleton extraction, which significantly reduces the data size and facilitates the processing of large models. At last, we contract the medial mesh in an out-of-core fashion to generate the skeleton. Experimental results show that our method can efficiently extract high-quality curve skeletons from large volumetric models with small memory usage.
  • Pacific Graphics 2022 - CGF 41-7: Frontmatter
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
  • Point-augmented Bi-cubic Subdivision Surfaces
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Karciauskas, Kestutis; Peters, Jorg; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Point-Augmented Subdivision (PAS) replaces complex geometry-dependent guided subdivision, known to yield high-quality surfaces, by explicit subdivision formulas that yield similarly good limit surfaces and are easy to implement using any subdivision infrastructure: map the control net d, augmented by a fixed central limit point C, to a finer net (d̃; C) = M(d; C), where the subdivision matrix M is assembled from the provided stencil tables. Point-augmented bi-cubic subdivision improves the state of the art so that bi-cubic subdivision surfaces can be used in high-end geometric design: the highlight line distribution for challenging configurations lacks the shape artifacts usually associated with explicit iterative generalized subdivision operators near extraordinary points. Five explicit formulas define point-augmented bi-cubic subdivision in addition to uniform B-spline knot insertion. Point-augmented bi-cubic subdivision comes in two flavors, generating either a sequence of C^2-joined surface rings (PAS2) or C^1-joined rings (PAS1) that have fewer pieces.
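
    The refinement step above is just a linear map applied to the augmented control net. A minimal sketch of that step in Python, assuming a generic subdivision matrix M (in the paper, M is assembled from the published stencil tables; the function name and shapes here are illustrative):

        import numpy as np

        def pas_refine(d, C, M):
            # d: (n, 3) control net; C: (3,) fixed central limit point;
            # M: (m + 1, n + 1) subdivision matrix (a stand-in for the
            # matrix assembled from the paper's stencil tables).
            augmented = np.vstack([d, C[None, :]])  # the augmented net (d; C)
            refined = M @ augmented                 # one refinement: (d~; C)
            # We assume M's last row simply reproduces C, since the central
            # limit point is held fixed across refinement levels.
            return refined[:-1], refined[-1]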
  • SIGDT: 2D Curve Reconstruction
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Marin, Diana; Ohrhallinger, Stefan; Wimmer, Michael; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Determining connectivity between points and reconstructing their shape boundaries are long-standing problems in computer graphics. One possible approach to solving these problems is to use a proximity graph. We propose a new proximity graph computed by intersecting the rarely used proximity-based spheres-of-influence graph (SIG) with the Delaunay triangulation (DT). We prove that the resulting graph, which we name SIGDT, contains the piece-wise linear reconstruction for a set of unstructured points in the plane under a sampling condition that supersedes current bounds and captures the properties of practical point sets well. As an application, we apply a dual of the boundary adjustment steps from the CONNECT2D algorithm to remove redundant edges. We show that the resulting algorithm, SIG-CONNECT2D, yields the best reconstruction accuracy compared to state-of-the-art algorithms from a recent comprehensive benchmark, and the method offers the potential for further improvements, e.g., for surface reconstruction.
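
    Both ingredient graphs are standard, so the SIGDT construction itself is compact. A sketch under the usual SIG definition (an edge exists iff the nearest-neighbor balls of its endpoints intersect), keeping only Delaunay edges; the boundary-adjustment stage of SIG-CONNECT2D is not shown:

        import numpy as np
        from scipy.spatial import Delaunay, cKDTree

        def sigdt_edges(points):
            # points: (n, 2) planar samples. Returns the edges present in both
            # the spheres-of-influence graph (SIG) and the Delaunay
            # triangulation (DT).
            dist, _ = cKDTree(points).query(points, k=2)
            r = dist[:, 1]              # nearest-neighbor radius per point

            dt_edges = set()
            for simplex in Delaunay(points).simplices:
                for a in range(3):
                    i, j = sorted((simplex[a], simplex[(a + 1) % 3]))
                    dt_edges.add((i, j))

            # SIG test: keep an edge iff the two influence spheres intersect.
            return [(i, j) for i, j in dt_edges
                    if np.linalg.norm(points[i] - points[j]) <= r[i] + r[j]]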
  • MeshFormer: High-resolution Mesh Segmentation with Graph Transformer
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Li, Yuan; He, Xiangyang; Jiang, Yankai; Liu, Huan; Tao, Yubo; Hai, Lin; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Graph transformers have achieved remarkable success in graph-based segmentation tasks. Inspired by this success, we propose a novel method named MeshFormer for applying the graph transformer to the semantic segmentation of high-resolution meshes. The main challenges are the large data size, the massive model size, and the insufficient extraction of high-resolution semantic meanings. The large data or model size necessitates unacceptably extensive computational resources, and the insufficient semantic meanings lead to inaccurate segmentation results. MeshFormer addresses these three challenges with three components. First, a boundary-preserving simplification is introduced to reduce the data size while maintaining the critical high-resolution information in segmentation boundaries. Second, a Ricci flow-based clustering algorithm is presented for constructing hierarchical structures of meshes, replacing many convolution layers for global support with only a few convolutions in the hierarchical structures. In this way, the model size can be reduced to an acceptable range. Third, we design a graph transformer with cross-resolution convolutions, which extracts richer high-resolution semantic meanings and improves segmentation results over previous methods. Experiments show that MeshFormer achieves gains from 1.0% to 5.8% on artificial and real-world datasets.
  • WTFM Layer: An Effective Map Extractor for Unsupervised Shape Correspondence
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Liu, Shengjun; Xu, Haojun; Yan, Dong-Ming; Hu, Ling; Liu, Xinru; Li, Qinsong; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    We propose a novel unsupervised learning approach for computing correspondences between non-rigid 3D shapes. The core idea is that we integrate a novel structural constraint into the deep functional map pipeline, a recently dominant learning framework for shape correspondence, via a powerful spectral manifold wavelet transform (SMWT). As the SMWT is an isometry-invariant operator and can analyze features from multiple frequency bands, we use the multiscale SMWT results of the learned features as function preservation constraints to optimize the functional map, assuming that each frequency-band component of the descriptors should be correspondingly preserved by the functional map. Such a strategy allows extracting significantly more deep feature information than existing approaches, which only use the learned descriptors to estimate the functional map, and our formulation strongly ensures the isometric properties of the underlying map. We also prove that our computation of the functional map amounts to filtering processes that involve only matrix multiplications. Then, we leverage the alignment errors of intrinsic embeddings between shapes as a loss function and solve it in an unsupervised way using the Sinkhorn algorithm. Finally, we utilize DiffusionNet as a feature extractor to ensure that discretization-resistant and directional shape features are produced. Experiments on multiple challenging datasets prove that our method achieves state-of-the-art correspondence quality. Furthermore, our method yields significant improvements in robustness to shape discretization and generalization across different datasets. The source code and trained models will be available at https://github.com/HJ-Xu/WTFM-Layer.
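
    In the spectral view, a wavelet transform of this kind is a set of band-pass filters applied in the Laplacian eigenbasis. A minimal sketch with a generic band-pass kernel (the paper's exact SMWT kernel and scales may differ; the function name is ours):

        import numpy as np

        def smwt_filter(evals, evecs, F, scales):
            # evals: (k,) Laplacian eigenvalues; evecs: (n, k) eigenvectors;
            # F: (n, c) learned per-vertex descriptors; scales: wavelet scales.
            coeffs = evecs.T @ F                      # project into the spectrum
            bands = []
            for t in scales:
                g = (t * evals) * np.exp(-t * evals)  # generic band-pass kernel
                bands.append(evecs @ (g[:, None] * coeffs))
            return bands                              # one filtered copy per band

    Each band is obtained purely by matrix multiplications, which is the property the abstract points out.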
  • MINERVAS: Massive INterior EnviRonments VirtuAl Synthesis
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Ren, Haocheng; Zhang, Hao; Zheng, Jia; Zheng, Jiaxiang; Tang, Rui; Huo, Yuchi; Bao, Hujun; Wang, Rui; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    With the rapid development of data-driven techniques, data has played an essential role in various computer vision tasks. Many realistic and synthetic datasets have been proposed to address different problems. However, several challenges remain unresolved: (1) creating a dataset is usually a tedious process involving manual annotation, (2) most datasets are designed for only a single specific task, (3) modifying or randomizing a 3D scene is difficult, and (4) releasing commercial 3D data may raise copyright issues. This paper presents MINERVAS, a Massive INterior EnviRonments VirtuAl Synthesis system, to facilitate 3D scene modification and 2D image synthesis for various vision tasks. In particular, we design a programmable pipeline with a Domain-Specific Language, allowing users to select scenes from a commercial indoor scene database, synthesize scenes for different tasks with customized rules, and render various types of imagery data, such as color images, geometric structures, and semantic labels. Our system eases the difficulty of customizing massive scenes for different tasks and relieves users from manipulating fine-grained scene configurations by providing user-controllable randomness using multilevel samplers. Most importantly, it empowers users to access commercial scene databases with millions of indoor scenes while protecting the copyright of core data assets, e.g., 3D CAD models. We demonstrate the validity and flexibility of our system by using our synthesized data to improve the performance on different kinds of computer vision tasks. The project page is at https://coohom.github.io/MINERVAS.
  • Exploring Contextual Relationships in 3D Cloud Points by Semantic Knowledge Mining
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Chen, Lianggangxu; Lu, Jiale; Cai, Yiqing; Wang, Changbo; He, Gaoqi; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    3D scene graph generation (SGG) aims to predict the classes of objects and predicates simultaneously in one 3D point cloud scene with instance segmentation. Since the underlying semantics of 3D point clouds is spatial information, recent approaches to the 3D SGG task usually have difficulty understanding global contextual semantic relationships and neglect the intrinsic 3D visual structures. To build a global scope of semantic relationships, we first propose two types of Semantic Clue (SC), at the entity level and the path level, respectively. SCs can be extracted from the training set and modeled as the co-occurrence probability between entities. Then a novel Semantic Clue aware Graph Convolution Network (SC-GCN) is designed to explicitly model each SC, whose message is passed in its specific neighbor pattern. To construct the interactions between the 3D visual and semantic modalities, a visual-language transformer (VLT) module is proposed to jointly learn the correlation between 3D visual features and class label embeddings. Systematic experiments on the 3D semantic scene graph (3DSSG) dataset show that our full method achieves state-of-the-art performance.
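
    The entity-level clue is essentially a co-occurrence statistic over the training scenes. One plausible way to tabulate it (the function and input format are illustrative, not the paper's code):

        from collections import Counter
        from itertools import combinations

        def entity_cooccurrence(scenes):
            # scenes: iterable of per-scene object-class label lists.
            pair_counts = Counter()
            n = 0
            for labels in scenes:
                n += 1
                pair_counts.update(combinations(sorted(set(labels)), 2))
            # P(a, b): fraction of scenes in which classes a and b co-occur.
            return {pair: c / n for pair, c in pair_counts.items()}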
  • UTOPIC: Uncertainty-aware Overlap Prediction Network for Partial Point Cloud Registration
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Chen, Zhilei; Chen, Honghua; Gong, Lina; Yan, Xuefeng; Wang, Jun; Guo, Yanwen; Qin, Jing; Wei, Mingqiang; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    High-confidence overlap prediction and accurate correspondences are critical for cutting-edge models to align paired point clouds in a partial-to-partial manner. However, there is inherent uncertainty between the overlapping and non-overlapping regions, which has long been neglected and significantly affects registration performance. Beyond the current wisdom, we propose a novel uncertainty-aware overlap prediction network, dubbed UTOPIC, to tackle the ambiguous overlap prediction problem; to our knowledge, this is the first work to explicitly introduce overlap uncertainty to point cloud registration. Moreover, we induce the feature extractor to implicitly perceive shape knowledge through a completion decoder, and present a geometric relation embedding for the Transformer to obtain transformation-invariant, geometry-aware feature representations. With the merits of more reliable overlap scores and more precise dense correspondences, UTOPIC can achieve stable and accurate registration results, even for inputs with limited overlapping areas. Extensive quantitative and qualitative experiments on synthetic and real benchmarks demonstrate the superiority of our approach over state-of-the-art methods.
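
    As a concrete reading of "overlap uncertainty": if a network emits a per-point overlap probability, points near 0.5 are the ambiguous ones. A sketch using binary entropy as the uncertainty measure (an illustration of the idea, not UTOPIC's exact estimator):

        import numpy as np

        def overlap_uncertainty(p):
            # p: (n,) predicted per-point overlap probabilities in [0, 1].
            p = np.clip(p, 1e-7, 1.0 - 1e-7)   # guard the log at 0 and 1
            # Binary entropy peaks at p = 0.5, i.e. maximal ambiguity
            # between the overlapping and non-overlapping regions.
            return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))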
  • MODNet: Multi-offset Point Cloud Denoising Network Customized for Multi-scale Patches
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Huang, Anyi; Xie, Qian; Wang, Zhoutao; Lu, Dening; Wei, Mingqiang; Wang, Jun; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    The intricacy of 3D surfaces often causes cutting-edge point cloud denoising (PCD) models to produce surface degradation, including remnant noise and wrongly removed geometric details. Although using multi-scale patches to encode the geometry of a point has become common wisdom in PCD, we find that simple aggregation of the extracted multi-scale features cannot adaptively utilize the appropriate scale information according to the geometry around noisy points. This leads to surface degradation, especially for points close to edges and points on complex curved surfaces. We raise an intriguing question: if multi-scale geometric perception information is employed to guide the network in utilizing multi-scale information, can the severe surface degradation problem be eliminated? To answer it, we propose a Multi-offset Denoising Network (MODNet) customized for multi-scale patches. First, we extract the low-level features of patches at three scales with patch feature encoders. Second, a multi-scale perception module is designed to embed multi-scale geometric information for each scale feature and regress multi-scale weights to guide a multi-offset denoising displacement. Third, a multi-offset decoder regresses three scale offsets, which are guided by the multi-scale weights to predict the final displacement by weighting them adaptively. Experiments demonstrate that our method achieves new state-of-the-art performance on both synthetic and real-scanned datasets. Our code is publicly available at https://github.com/hay-001/MODNet.
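
    The final step described above is a weighted combination of per-scale displacements. A minimal sketch of that fusion, assuming softmax-normalized weights over three scales (shapes and normalization are illustrative):

        import numpy as np

        def fuse_offsets(offsets, logits):
            # offsets: (s, n, 3) one denoising displacement per scale s;
            # logits:  (s, n) unnormalized per-scale weights.
            w = np.exp(logits - logits.max(axis=0, keepdims=True))
            w = w / w.sum(axis=0, keepdims=True)         # softmax over scales
            return (w[..., None] * offsets).sum(axis=0)  # (n, 3) final offsets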
  • Local Offset Point Cloud Transformer Based Implicit Surface Reconstruction
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Yang, Yan Xin; Zhang, San Guo; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Implicit neural representations, such as an MLP, can recover the topology of a watertight object well. However, an MLP fails to recover the geometric details of watertight objects and complicated topology because it processes the point cloud in a point-wise manner. In this paper, we propose a point cloud transformer called the local offset point cloud transformer (LOPCT) as a feature fusion module. Before using an MLP to learn the implicit function, the input point cloud is first fed into the local offset transformer, which adaptively learns the dependencies within the local point cloud and obtains enhanced features for each point. The feature-enhanced point cloud is then fed into the MLP to recover the geometric details and sharp features of watertight objects and complex topology. Extensive reconstruction experiments on watertight objects and complex topology demonstrate that our method achieves comparable or better results than others in terms of recovering sharp features and geometric details. In addition, experiments on watertight objects demonstrate the robustness of our method in terms of average results.
  • Resolution-switchable 3D Semantic Scene Completion
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Luo, Shoutong; Sun, Zhengxing; Sun, Yunhan; Wang, Yi; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Semantic scene completion (SSC) aims to recover the complete geometric structure, as well as the semantic segmentation results, from partial observations. Previous works could only perform this task at a fixed resolution. To handle this problem, we propose a new method that can generate results at different resolutions without redesigning and retraining. The basic idea is to decouple the direct connection between resolution and network structure. To achieve this, we convert the feature volume generated by SSC encoders into a resolution-adaptive feature and decode this feature per point. We also design a resolution-adapted point sampling strategy for testing and a category-based point sampling strategy for training to further handle this problem. The encoder of our method can be replaced by existing SSC encoders. We can achieve better results at other resolutions while maintaining the same accuracy as the original-resolution results. Code and data are available at https://github.com/lstcutong/ReS-SSC.
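
    Decoding per point is what decouples the output resolution from the encoder grid: any query resolution just produces a different set of continuous sample locations. A sketch of the interpolation step, assuming plain trilinear sampling of the encoder's feature volume (the paper's decoder may sample differently):

        import numpy as np

        def trilinear_sample(volume, pts):
            # volume: (D, H, W, C) feature volume; pts: (n, 3) query points
            # in voxel coordinates. Returns (n, C) interpolated features.
            D, H, W, _ = volume.shape
            pts = np.clip(pts, 0.0, np.array([D, H, W]) - 1.000001)
            p0 = np.floor(pts).astype(int)
            f = pts - p0                              # fractional offsets
            out = 0.0
            for dz in (0, 1):
                for dy in (0, 1):
                    for dx in (0, 1):
                        w = (f[:, 0] if dz else 1 - f[:, 0]) \
                          * (f[:, 1] if dy else 1 - f[:, 1]) \
                          * (f[:, 2] if dx else 1 - f[:, 2])
                        out = out + w[:, None] * volume[p0[:, 0] + dz,
                                                        p0[:, 1] + dy,
                                                        p0[:, 2] + dx]
            return out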
  • DiffusionPointLabel: Annotated Point Cloud Generation with Diffusion Model
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Li, Tingting; Fu, Yunfei; Han, Xiaoguang; Liang, Hui; Zhang, Jian Jun; Chang, Jian; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Point cloud generation aims to synthesize point clouds that do not exist in the supervised dataset. Generating a point cloud with certain semantic labels remains an under-explored problem. This paper proposes a formulation called DiffusionPointLabel, which completes point-label pair generation based on a DDPM (Denoising Diffusion Probabilistic Model) generative model. Specifically, we use a point cloud diffusion generative model and aggregate the intermediate features of the generator. On top of this, we propose a Feature Interpreter that transforms intermediate features into semantic labels. Furthermore, we employ an uncertainty measure to filter out unqualified point-label pairs for a better quality of the generated point cloud dataset. Coupling these two designs enables us to automatically generate annotated point clouds, especially when supervised point-label pairs are scarce. Our method extends the application of point cloud generation models and surpasses state-of-the-art models.
  • USTNet: Unsupervised Shape-to-Shape Translation via Disentangled Representations
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Wang, Haoran; Li, Jiaxin; Telea, Alexandru; Kosinka, Jirí; Wu, Zizhao; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    We propose USTNet, a novel deep learning approach designed for learning shape-to-shape translation from unpaired domains in an unsupervised manner. The core of our approach lies in disentangled representation learning that factors out the discriminative features of 3D shapes into content and style codes. Given input shapes from multiple domains, USTNet disentangles their representation into style codes that contain distinctive traits across domains and content codes that contain domain-invariant traits. By fusing the style and content codes of the target and source shapes, our method enables us to synthesize new shapes that resemble the target style and retain the content features of source shapes. Based on the shared style space, our method facilitates shape interpolation by manipulating the style attributes from different domains. Furthermore, by extending the basic building blocks of our network from two-class to multi-class classification, we adapt USTNet to tackle multi-domain shape-to-shape translation. Experimental results show that our approach can generate realistic and natural translated shapes and that our method leads to improved quantitative evaluation metrics compared to 3DSNet. Code is available at https://Haoran226.github.io/USTNet.
  • SPCNet: Stepwise Point Cloud Completion Network
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Hu, Fei; Chen, Honghua; Lu, Xuequan; Zhu, Zhe; Wang, Jun; Wang, Weiming; Wang, Fu Lee; Wei, Mingqiang; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    How would you repair a physical object with large missing regions? You might first recover its global yet coarse shape and then stepwise increase its local details. We are motivated to imitate this physical repair procedure to address the point cloud completion task. We propose a novel stepwise point cloud completion network (SPCNet) for various 3D models with large missing regions. SPCNet has a hierarchical bottom-to-up network architecture. It fulfills shape completion in an iterative manner, which 1) first infers the global feature of the coarse result; 2) then infers the local feature with the aid of the global feature; and 3) finally infers the detailed result with the help of the local feature and the coarse result. Beyond the wisdom of simulating the physical repair, we newly design a cycle loss to enhance the generalization and robustness of SPCNet. Extensive experiments clearly show the superiority of our SPCNet over state-of-the-art methods on 3D point clouds with large missing regions. Code is available at https://github.com/1127368546/SPCNet.
  • StylePortraitVideo: Editing Portrait Videos with Expression Optimization
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Seo, Kwanggyoon; Oh, Seoung Wug; Lu, Jingwan; Lee, Joon-Young; Kim, Seonghyeon; Noh, Junyong; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    High-quality portrait image editing has been made easier by recent advances in GANs (e.g., StyleGAN) and GAN inversion methods that project images onto a pre-trained GAN's latent space. However, when extending existing image editing methods to videos, it is hard to produce temporally coherent and natural-looking results. We identify two challenges: reproducing diverse video frames and preserving natural motion after editing. In this work, we propose solutions for both. First, we propose a video adaptation method that enables the generator to reconstruct the original input identity, unusual poses, and expressions in the video. Second, we propose an expression dynamics optimization that tweaks the latent codes to maintain the meaningful motion of the original video. Based on these methods, we build a StyleGAN-based high-quality portrait video editing system that can edit in-the-wild videos in a temporally coherent way at up to 4K resolution.
  • Real-Time Video Deblurring via Lightweight Motion Compensation
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Son, Hyeongseok; Lee, Junyong; Cho, Sunghyun; Lee, Seungyong; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    While motion compensation greatly improves video deblurring quality, separately performing motion compensation and video deblurring demands huge computational overhead. This paper proposes a real-time video deblurring framework consisting of a lightweight multi-task unit that supports both video deblurring and motion compensation in an efficient way. The multi-task unit is specifically designed to handle large portions of the two tasks using a single shared network and consists of a multi-task detail network and simple networks for deblurring and motion compensation. The multi-task unit minimizes the cost of incorporating motion compensation into video deblurring and enables real-time deblurring. Moreover, by stacking multiple multi-task units, our framework provides flexible control over the trade-off between cost and deblurring quality. We experimentally validate the state-of-the-art deblurring quality of our approach, which runs at a much faster speed than previous methods, and show practical real-time performance (30.99dB@30fps measured on the DVD dataset).
  • A Drone Video Clip Dataset and its Applications in Automated Cinematography
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Ashtari, Amirsaman; Jung, Raehyuk; Li, Mingxiao; Noh, Junyong; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Drones have become popular video capturing tools. Drone videos in the wild are first captured and then edited by humans to contain aesthetically pleasing camera motions and scenes. Edited drone videos therefore carry extremely useful information for cinematography and for applications such as camera path planning to capture aesthetically pleasing shots. To design intelligent camera path planners, learning drone camera motions from these edited videos is essential. However, this first requires filtering drone clips, and extracting their camera motions, out of edited videos that commonly contain both drone and non-drone content. Moreover, existing video search engines return the whole edited video as a semantic search result and cannot return only the drone clips inside an edited video. To address this problem, we propose the first approach that can automatically retrieve drone clips from an unlabeled video collection using high-level search queries, such as "drone clips captured outdoors in daytime in rural places". The retrieved clips also contain camera motions, camera views, and a 3D reconstruction of the scene that can help develop intelligent camera path planners. To train our approach, we needed numerous examples of edited drone videos. To this end, we introduce the first large-scale dataset composed of edited drone videos. This dataset is also used for training and validating our drone video filtering algorithm. Both quantitative and qualitative evaluations confirm the validity of our method.
  • Occluder Generation for Buildings in Digital Games
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Wu, Kui; He, Xu; Pan, Zherong; Gao, Xifeng; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Occlusion culling has become a prevalent method in modern game engines. It can significantly reduce the rendering cost by using an approximate coarse mesh (occluder) to cull hidden objects. An ideal occluder should use as few faces as possible to represent the high-resolution input mesh with high culling accuracy. We address the open problem of automatic occluder generation for 3D building models with complex topology and interior structures. Our method first generates two coarse sets of faces via patch-based and voxel-based mesh simplification techniques. A metric-guided selection algorithm then chooses the best subset of faces to form the occluder, achieving a high occlusion rate and accuracy. In an evaluation over 77 building models, our method compares favorably against the state of the art in terms of occlusion accuracy, occlusion rate, and face count.
  • Item
    Fine-Grained Memory Profiling of GPGPU Kernels
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Buelow, Max von; Guthe, Stefan; Fellner, Dieter W.; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Memory performance is a crucial bottleneck in many GPGPU applications, making optimizations for hardware and software mandatory. While hardware vendors already use highly efficient caching architectures, software engineers usually have to organize their data accordingly to use these caches efficiently, which requires deep knowledge of the actual hardware. In this paper we present a novel technique for fine-grained memory profiling that simulates the whole pipeline of memory flow and accumulates profiling values separately for each allocation, so that the user can trace them back to the corresponding region of the GPU program. Our memory simulator outperforms state-of-the-art memory models of NVIDIA architectures by a factor of 2.4 for the L1 cache and 1.3 for the L2 cache in terms of accuracy. Additionally, we find fine-grained memory profiling to be a useful tool for memory optimizations, which we demonstrate on ray tracing and machine learning applications.
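    The core idea of attributing cache behavior to allocations can be sketched with a toy simulator; the trace format, cache geometry, and statistics below are assumptions for illustration, not the paper's simulator.
    ```python
    # Toy set-associative LRU cache that attributes hits/misses to the
    # allocation each address falls into.
    from collections import OrderedDict

    def profile(trace, allocations, line=128, ways=4, sets=64):
        cache = [OrderedDict() for _ in range(sets)]          # per-set LRU of tags
        stats = {name: [0, 0] for name, _, _ in allocations}  # [hits, misses]
        def owner(addr):
            for name, base, size in allocations:
                if base <= addr < base + size:
                    return name
        for addr in trace:
            s, tag = (addr // line) % sets, addr // (line * sets)
            hm = stats[owner(addr)]
            if tag in cache[s]:
                cache[s].move_to_end(tag)                     # refresh LRU order
                hm[0] += 1
            else:
                hm[1] += 1
                cache[s][tag] = True
                if len(cache[s]) > ways:
                    cache[s].popitem(last=False)              # evict LRU line
        return stats

    allocs = [("vertices", 0, 4096), ("indices", 4096, 4096)]
    print(profile([0, 64, 0, 4096, 8000, 64], allocs))
    ```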
  • Item
    Efficient Direct Isosurface Rasterization of Scalar Volumes
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Kreskowski, Adrian; Rendle, Gareth; Froehlich, Bernd; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    In this paper we propose a novel and efficient rasterization-based approach for direct rendering of isosurfaces. Our method exploits the capabilities of task and mesh shader pipelines to identify subvolumes containing potentially visible isosurface geometry, and to efficiently extract primitives which are consumed on the fly by the rasterizer. As a result, our approach requires little preprocessing and negligible additional memory. Direct isosurface rasterization is competitive in terms of rendering performance when compared with ray-marching-based approaches, and significantly outperforms them at higher resolutions in most situations. Since our approach is entirely rasterization-based, it affords straightforward integration into existing rendering pipelines, while allowing the use of modern graphics hardware features, such as multi-view stereo for efficient rendering of stereoscopic image pairs in geometry-bound applications. Direct isosurface rasterization is suitable for applications where isosurface geometry is highly variable, such as interactive analysis scenarios for static and dynamic data sets that require frequent isovalue adjustment.
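    The subvolume test a task shader would perform can be illustrated on the CPU: a brick can only contain isosurface geometry if the isovalue lies within its scalar min/max range. A minimal NumPy sketch, assuming a simple uniform brick decomposition:
    ```python
    import numpy as np

    def active_bricks(volume, iso, brick=8):
        # reshape into (zb, B, yb, B, xb, B) bricks and reduce per brick
        zs, ys, xs = (s // brick for s in volume.shape)
        v = volume[:zs*brick, :ys*brick, :xs*brick].reshape(
            zs, brick, ys, brick, xs, brick)
        vmin = v.min(axis=(1, 3, 5))
        vmax = v.max(axis=(1, 3, 5))
        # a brick may contain the isosurface only if vmin <= iso <= vmax
        return np.argwhere((vmin <= iso) & (iso <= vmax))

    vol = np.fromfunction(lambda z, y, x: z + y + x, (32, 32, 32))
    print(len(active_bricks(vol, iso=40.0)))   # number of surviving bricks
    ```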
  • Item
    Classifier Guided Temporal Supersampling for Real-time Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Guo, Yu-Xiao; Chen, Guojun; Dong, Yue; Tong, Xin; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    We present a learning-based temporal supersampling algorithm for real-time rendering. Different from existing learning-based approaches that adopt end-to-end training of a 'black-box' neural network, we design a 'white-box' solution that first classifies the pixels into different categories and then generates the supersampling result based on the classification. Our key observation is that the core problem in temporal supersampling for rendering is to distinguish pixels affected by occlusion, aliasing, or shading changes. Samples from these pixels exhibit similar temporal radiance changes but require different composition strategies to produce the correct supersampling result. Our method therefore first classifies the pixels into several classes, and then blends the current frame with the warped last frame via a learned weight map to obtain the supersampling result. We design compact neural networks for each step and develop dedicated loss functions for pixels belonging to different classes. Compared to existing learning-based methods, our classifier-based supersampling scheme incurs lower computational and memory cost for real-time supersampling and generates visually compelling temporal supersampling results with fewer flickering artifacts. We evaluate the performance and generality of our method on several rendered game sequences; our method can upsample rendered frames from 1080P to 2160P in just 13.39ms on a single Nvidia 3090 GPU.
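    The composition step described above reduces to a per-pixel lerp between the upsampled current frame and the motion-warped history, steered by the learned weight map. A minimal sketch (array names are assumptions):
    ```python
    import numpy as np

    def compose(current_up, prev_warped, weight):
        # weight ~ 1 trusts the current frame (e.g. at disocclusions);
        # weight ~ 0 reuses history for stable shading
        return weight * current_up + (1.0 - weight) * prev_warped

    h, w = 4, 4
    current_up  = np.random.rand(h, w, 3)     # upsampled current frame
    prev_warped = np.random.rand(h, w, 3)     # history warped by motion vectors
    weight      = np.full((h, w, 1), 0.2)     # from the learned weight network
    frame = compose(current_up, prev_warped, weight)
    ```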
  • Item
    Specular Manifold Bisection Sampling for Caustics Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Jhang, Jia-Wun; Chang, Chun-Fa; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    We propose Specular Manifold Bisection Sampling (SMBS), an improved version of Specular Manifold Sampling (SMS) [ZGJ20]. SMBS is inspired by the small and large mutations in Metropolis Light Transport (MLT) [VG97]. While the Jacobian matrix of the original SMS method performs well in local convergence (the small mutation), it might fail to find a valid manifold path when the ray deviates too much from the light or bounces off a complex surface. Our proposed SMBS method adds a large mutation step to avoid such problematic convergence to a local minimum. The results show that SMBS finds valid manifold paths in fewer iterations and also finds more valid manifold paths. In scenes with complex reflective or refractive surfaces, our method achieves roughly a two-fold or greater improvement in manifold-walk success rate (SR) and root-mean-square error (RMSE).
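    The interplay of small and large mutations is loosely analogous to safeguarding Newton's method with bisection in 1D root finding: take the fast local step when it stays in a valid bracket, otherwise fall back to a globally safe step. The sketch below shows only this analogy, not the SMBS algorithm itself.
    ```python
    def newton_bisect(f, df, lo, hi, tol=1e-10, iters=100):
        # assumes f(lo) and f(hi) have opposite signs
        for _ in range(iters):
            x = 0.5 * (lo + hi)
            d = df(x)
            if d != 0:
                x_newton = x - f(x) / d        # "small mutation": fast local step
                if lo < x_newton < hi:
                    x = x_newton
            # otherwise keep the midpoint: the globally safe "large" step
            if abs(f(x)) < tol:
                return x
            if f(lo) * f(x) <= 0:              # maintain the sign-change bracket
                hi = x
            else:
                lo = x
        return 0.5 * (lo + hi)

    print(newton_bisect(lambda x: x**3 - 2, lambda x: 3 * x**2, 0.0, 2.0))
    ```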
  • Item
    Multirate Shading with Piecewise Interpolatory Approximation
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Hu, Yiwei; Yuan, Yazhen; Wang, Rui; Yang, Zhuo; Bao, Hujun; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Evaluating shading functions on geometry surfaces dominates the rendering computation. A high-quality but time-consuming estimate is usually achieved with a dense sampling rate for pixels or sub-pixels. In this paper, we leverage sparsely sampled points on vertices of dynamically-generated subdivision surfaces to approximate the ground-truth shading signal by piecewise linear reconstruction. To control the introduced interpolation error at runtime, we analytically derive an L∞ error bound and compute the optimal subdivision surfaces based on a user-specified error threshold. We apply our analysis to multiple shading functions including Lambertian, Blinn-Phong, and microfacet BRDFs, and also extend it to handle textures, yielding easy-to-compute formulas. To validate our derivation, we design a forward multirate shading algorithm powered by the hardware tessellator that moves shading computation from pixels to the vertices of subdivision triangles on the fly. We show our approach significantly reduces the sampling rates on various test cases, reaching a speedup ratio of 134% ∼ 283% compared to dense per-pixel shading on current graphics hardware.
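    As a concrete illustration of the kind of bound such an analysis rests on, the textbook L∞ estimate for piecewise linear interpolation (illustrative; the paper derives bounds tailored to each shading model) reads:
    ```latex
    % For a C^2 shading signal f linearly interpolated over an element of
    % diameter h, with interpolation operator I_h:
    \| f - I_h f \|_{L^\infty} \le C\, h^2 \max_x \| \nabla^2 f(x) \|,
    \qquad
    h \le \sqrt{\frac{\varepsilon}{C \max_x \|\nabla^2 f(x)\|}}
    \;\Rightarrow\; \| f - I_h f \|_{L^\infty} \le \varepsilon .
    ```
    Inverting a bound in this spirit is what lets a user-specified threshold ε drive the subdivision level.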
  • Item
    Real-time Deep Radiance Reconstruction from Imperfect Caches
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Huang, Tao; Song, Yadong; Guo, Jie; Tao, Chengzhi; Zong, Zijing; Fu, Xihao; Li, Hongshan; Guo, Yanwen; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Real-time global illumination is a highly desirable yet challenging task in computer graphics. Existing methods that solve this problem well are mostly based on some kind of precomputed data (caches), and the final results depend significantly on the quality of these caches. In this paper, we propose a learning-based pipeline that can reproduce a wide range of complex light transport phenomena, including high-frequency glossy interreflection, at any viewpoint in real time (> 90 frames per second), using information from imperfect caches stored at the barycentre of every triangle in a 3D scene. These caches are generated at a precomputation stage by a physically-based offline renderer at a low sampling rate (e.g., 32 samples per pixel) and a low image resolution (e.g., 64×16). At runtime, a deep radiance reconstruction method based on a dedicated neural network is invoked to reconstruct a high-quality radiance map of full global illumination at any viewpoint from these imperfect caches, without introducing noise or aliasing artifacts. To further improve the reconstruction accuracy, a new feature fusion strategy is designed in the network to better exploit useful contents from cheap G-buffers generated at runtime. The proposed framework ensures high-quality rendering of moderate-sized scenes with full global illumination effects, at the cost of reasonable precomputation time. We demonstrate the effectiveness and efficiency of the proposed pipeline by comparing it with alternative strategies, including real-time path tracing and precomputed radiance transfer.
  • Item
    Real-Time Rendering of Eclipses without Incorporation of Atmospheric Effects
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Schneegans, Simon; Gilg, Jonas; Ahlers, Volker; Gerndt, Andreas; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    In this paper, we present a novel approach for real-time rendering of soft eclipse shadows cast by spherical, atmosphereless bodies. While this problem may seem simple at first, it is complicated by several factors. First, the extreme scale differences and huge mutual distances of the involved celestial bodies cause rendering artifacts in practice. Second, the surface of the Sun does not emit light evenly in all directions (an effect known as limb darkening), which makes it impossible to model the Sun as a uniform spherical light source. Finally, our intended applications include real-time rendering of solar eclipses in virtual reality, which requires very high frame rates. As a solution to these problems, we precompute the amount of shadowing into an eclipse shadow map, which is parametrized so that it is independent of the position and size of the occluder. Hence, a single shadow map can be used for all spherical occluders in the Solar System. We assess the errors introduced by various simplifications and compare multiple approaches in terms of performance and precision. Last but not least, we compare our approaches to the state of the art and to reference images. The implementation has been published under the MIT license.
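    Ignoring limb darkening for a moment, the shadowing amount reduces to the fraction of the solar disc covered by the occluder disc, which the standard circle-circle intersection formula gives in closed form. A simplified sketch under that uniform-disc assumption (angular radii and separation in radians):
    ```python
    import math

    def shadow_fraction(r_sun, r_occ, d):
        if d >= r_sun + r_occ:                 # no overlap: full sunlight
            return 0.0
        if d <= abs(r_sun - r_occ):            # one disc fully inside the other
            return min(1.0, (r_occ / r_sun) ** 2)
        # distance from the Sun's centre to the chord between intersections
        a = (d*d + r_sun*r_sun - r_occ*r_occ) / (2 * d)
        # lens (intersection) area as the sum of two circular segments
        area = (r_sun*r_sun * math.acos(a / r_sun)
                + r_occ*r_occ * math.acos((d - a) / r_occ)
                - a * math.sqrt(r_sun*r_sun - a*a)
                - (d - a) * math.sqrt(max(r_occ*r_occ - (d - a)**2, 0.0)))
        return area / (math.pi * r_sun * r_sun)

    print(shadow_fraction(r_sun=1.0, r_occ=1.0, d=0.5))  # deep partial eclipse
    ```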
  • Item
    A Wide Spectral Range Sky Radiance Model
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Vévoda, Petr; Bashford-Rogers, Tom; Kolářová, Monika; Wilkie, Alexander; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Pre-computed models of sky radiance are a tool to rapidly determine incident solar irradiance in applications as diverse as movie VFX, lighting simulation for architecture, experimental biology, and flight simulators. Several such models exist, but most provide data only for the visible range and, in some cases, for the near-UV. For accurate simulations of photovoltaic plant yield and the thermal properties of buildings, however, a pre-computed reference sky model is needed that covers the entire spectral range of terrestrial solar irradiance, a range considerably larger than what extant models provide. We deliver this, and for a ground-based observer provide the three components of sky dome radiance, atmospheric transmittance, and polarisation. We also discuss the additional aspects that need to be taken into consideration when including the near-infrared in such a model. Additionally, we provide a simple standalone C++ implementation as well as an implementation with a GUI.
  • Item
    Targeting Shape and Material in Lighting Design
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Usta, Baran; Pont, Sylvia; Eisemann, Elmar; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Product lighting design is a laborious and time-consuming task. As product illustrations are increasingly rendered, the lighting challenge has transferred to the virtual realm. Our approach targets lighting design in the context of a scene with fixed objects, materials, and camera parameters, illuminated by environmental lighting. Our solution offers control over the depiction of material characteristics and shape details by optimizing the illuminating environment map. To that end, we introduce a metric that assesses the shape and material cues in terms of the designed appearance. We formalize the process and support steering the outcome using additional design constraints. We illustrate our solution with several challenging examples.
  • Item
    Ref-ZSSR: Zero-Shot Single Image Superresolution with Reference Image
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Han, Xianjun; Wang, Xue; Wang, Huabin; Li, Xuejun; Yang, Hongyu; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Single image superresolution (SISR) has achieved substantial progress based on deep learning. Many SISR methods acquire pairs of low-resolution (LR) images from their corresponding high-resolution (HR) counterparts. Being supervised, this kind of method also demands large-scale training data. However, these paired images and a large amount of training data are difficult to obtain. Recently, several internal learning-based methods have been introduced to address this issue. Although the need for large quantities of paired training data is thereby removed, the ability to improve the image resolution is limited if only the information of the LR image itself is used. Therefore, we further expand this kind of approach by using similar HR reference images as prior knowledge to assist the single input image. In this paper, we propose zero-shot single image superresolution with a reference image (Ref-ZSSR). First, we use an unconditional generative model to learn the internal distribution of the HR reference image. Second, a dual-path architecture that contains a downsampler and an upsampler is introduced to learn the mapping between the input image and its downscaled image. Finally, we combine the reference image learning module and the dual-path architecture module to train a new generative model that can generate a superresolution (SR) image with the details of the HR reference image. Such a design encourages a simple and accurate way to transfer relevant textures from the reference high-definition (HD) image to the LR image. Compared with using only the image itself, the HD features of the reference image improve the SR performance. In experiments, we show that the proposed method outperforms previous image-specific networks and internal learning-based methods.
  • Item
    Contrastive Semantic-Guided Image Smoothing Network
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Wang, Jie; Wang, Yongzhen; Feng, Yidan; Gong, Lina; Yan, Xuefeng; Xie, Haoran; Wang, Fu Lee; Wei, Mingqiang; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Image smoothing is a fundamental low-level vision task that aims to preserve the salient structures of an image while removing insignificant details. Deep learning has been explored for image smoothing to deal with the complex entanglement of semantic structures and trivial details. However, current methods neglect two important facts in smoothing: 1) naive pixel-level regression supervised by a limited amount of high-quality smoothing ground truth can lead to domain shift and cause generalization problems on real-world images; 2) texture appearance is closely related to object semantics, so image smoothing requires awareness of semantic differences to apply adaptive smoothing strengths. To address these issues, we propose a novel Contrastive Semantic-Guided Image Smoothing Network (CSGIS-Net) that combines both contrastive and semantic priors to facilitate robust image smoothing. The supervision signal is augmented by leveraging undesired smoothing effects as negative teachers and by incorporating segmentation tasks to encourage semantic distinctiveness. To realize the proposed network, we also enrich the original VOC dataset with texture enhancement and smoothing labels, namely VOC-smooth, which is the first dataset to bridge image smoothing and semantic segmentation. Extensive experiments demonstrate that the proposed CSGIS-Net outperforms state-of-the-art algorithms by a large margin. Code and dataset are available at https://github.com/wangjie6866/CSGIS-Net.
  • Item
    Learning Multi-Scale Deep Image Prior for High-Quality Unsupervised Image Denoising
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Jiang, Hao; Zhang, Qing; Nie, Yongwei; Zhu, Lei; Zheng, Wei-Shi; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Recent methods for image denoising have achieved remarkable progress, benefiting mostly from supervised learning on massive noisy/clean image pairs and unsupervised learning on external noisy images. However, due to the domain gap between the training and testing images, these methods typically have limited applicability on unseen images. Although several attempts have been made to avoid the domain gap issue by learning denoising from the single noisy image itself, they are less effective in handling real-world noise because they assume the noise corruptions are independent and zero-mean. In this paper, we go a step further beyond prior work by presenting a novel unsupervised image denoising framework trained from a single noisy image without making any explicit assumptions on the noise statistics. Our approach is built upon the deep image prior (DIP), which enables diverse image restoration tasks. As is, however, the denoising performance of DIP deteriorates significantly on nonzero-mean noise and is sensitive to the number of iterations. To overcome this problem, we propose to utilize a multi-scale deep image prior by imposing DIP across different image scales under a scale-consistency constraint. Experiments on synthetic and real datasets demonstrate that our method performs favorably against state-of-the-art methods for image denoising.
  • Item
    Effective Eyebrow Matting with Domain Adaptation
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Wang, Luyuan; Zhang, Hanyuan; Xiao, Qinjie; Xu, Hao; Shen, Chunhua; Jin, Xiaogang; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    We present the first synthetic eyebrow matting dataset and a domain-adaptation eyebrow matting network that learns a domain-robust feature representation from synthetic eyebrow matting data and unlabeled in-the-wild images with adversarial learning. Different from existing matting methods that may suffer from the lack of ground-truth matting datasets, which are typically labor-intensive to annotate or, even worse, impossible to obtain, we train the matting network in a semi-supervised manner using synthetic matting datasets instead of ground-truth matting data while still achieving high-quality results. Specifically, we first generate a large-scale synthetic eyebrow matting dataset by rendering avatars, and collect a real-world eyebrow image dataset while maximizing data diversity as much as possible. Then, we use the synthetic eyebrow dataset to train a multi-task network, which consists of a regression task to estimate the eyebrow alpha mattes and an adversarial task to adapt the learned features from synthetic data to real data. As a result, our method can successfully train an eyebrow matting network using synthetic data without the need to label any real data. Our method can accurately extract eyebrow alpha mattes from in-the-wild images without any additional prior and achieves state-of-the-art eyebrow matting performance. Extensive experiments demonstrate the superior performance of our method with both qualitative and quantitative results.
  • Item
    Fine-Grained Scene Graph Generation with Overlap Region and Geometrical Center
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Zhao, Yong Qiang; Jin, Zhi; Zhao, Hai Yan; Zhang, Feng; Tao, Zheng Wei; Dou, Cheng Feng; Xu, Xin Hai; Liu, Dong Hong; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Scene graph generation refers to the task of identifying the objects in an image and, more specifically, the relationships between them. Existing scene graph generation methods generally use the bounding-box region features of objects to identify the relationships between objects. However, we observe that the overlap region features of two objects may play an important role in fine-grained relationship identification; in fact, some fine-grained relationships can only be obtained from the overlap region features of two objects. Therefore, we propose the Multi-Branch Feature Combination (MFC) module and the Overlap Region Transformer (ORT) module to comprehensively obtain the visual features contained in the overlap regions of two objects. Concretely, the MFC module uses deconvolution and multi-branch dilated convolution to obtain high-resolution, multi-receptive-field features in the overlap regions. The ORT module uses a vision transformer to obtain the self-attention of the overlap regions. The joint use of these two modules makes the local connectivity properties of convolution and the global connectivity properties of attention complement each other. We also design a Geometrical Center Augmented (GCA) module to obtain the relative position information of the geometric centers of two objects, to avoid the problem that relying only on the extent of the overlap region cannot accurately capture the relationship between two objects. Experiments show that our model ORGC (Overlap Region and Geometrical Center), the combination of the MFC, ORT, and GCA modules, enhances the performance of fine-grained relation identification. On the Visual Genome dataset, our model outperforms the current state-of-the-art model by 4.4% on the R@50 evaluation metric, reaching a state-of-the-art result of 33.88.
  • Item
    SO(3)-Pose: SO(3)-Equivariance Learning for 6D Object Pose Estimation
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Pan, Haoran; Zhou, Jun; Liu, Yuanpeng; Lu, Xuequan; Wang, Weiming; Yan, Xuefeng; Wei, Mingqiang; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    6D pose estimation of rigid objects from RGB-D images is crucial for object grasping and manipulation in robotics. Although the RGB channels and the depth (D) channel are often complementary, providing appearance and geometry information respectively, it is still non-trivial to fully benefit from the two cross-modal data sources. We start from a simple yet new observation: when an object rotates, its semantic label is invariant to the pose while its keypoint offset direction varies with the pose. To this end, we present SO(3)-Pose, a new representation learning network that explores SO(3)-equivariant and SO(3)-invariant features from the depth channel for pose estimation. The SO(3)-invariant features facilitate learning more distinctive representations for segmenting objects with similar appearance from the RGB channels. The SO(3)-equivariant features communicate with the RGB features to deduce the (missing) geometry for detecting keypoints of objects with reflective surfaces from the depth channel. Unlike most existing pose estimation methods, our SO(3)-Pose not only implements information communication between the RGB and depth channels, but also naturally absorbs SO(3)-equivariance geometry knowledge from depth images, leading to better appearance and geometry representation learning. Comprehensive experiments show that our method achieves state-of-the-art performance on three benchmarks. Code is available at https://github.com/phaoran9999/SO3-Pose.
  • Item
    Joint Hand and Object Pose Estimation from a Single RGB Image using High-level 2D Constraints
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Song, Hao-Xuan; Mu, Tai-Jiang; Martin, Ralph R.; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Joint pose estimation of human hands and objects from a single RGB image is an important topic for AR/VR, robot manipulation, and other applications. It is common practice to determine both poses directly from the image; some recent methods attempt to improve the initial poses using a variety of contact-based approaches. However, few methods take the real physical constraints conveyed by the image into consideration, leading to less realistic results than the initial estimates. To overcome this problem, we make use of a set of high-level 2D features which can be directly extracted from the image in a new pipeline that combines contact-based approaches and these constraints during optimization. Our pipeline achieves better results than direct regression or contact-based optimization: they are closer to the ground truth and provide high-quality contact.
  • Item
    User-Controllable Latent Transformer for StyleGAN Image Layout Editing
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Endo, Yuki; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Latent space exploration is a technique that discovers interpretable latent directions and manipulates latent codes to edit various attributes in images generated by generative adversarial networks (GANs). However, in previous work, spatial control is limited to simple transformations (e.g., translation and rotation), and it is laborious to identify appropriate latent directions and adjust their parameters. In this paper, we tackle the problem of editing the StyleGAN image layout by annotating the image directly. To do so, we propose an interactive framework for manipulating latent codes in accordance with user inputs. In our framework, the user annotates a StyleGAN image with the locations they want to move or keep fixed, and specifies a movement direction by mouse dragging. From these user inputs and the initial latent codes, our latent transformer, based on a transformer encoder-decoder architecture, estimates the output latent codes, which are fed to the StyleGAN generator to obtain a result image. To train our latent transformer, we utilize synthetic data and pseudo-user inputs generated by off-the-shelf StyleGAN and optical flow models, without manual supervision. Quantitative and qualitative evaluations demonstrate the effectiveness of our method over existing methods.
  • Item
    EL-GAN: Edge-Enhanced Generative Adversarial Network for Layout-to-Image Generation
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Gao, Lin; Wu, Lei; Meng, Xiangxu; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Although some progress has been made in layout-to-image generation of complex scenes with multiple objects, object-level generation still suffers from distortion and poor recognizability. We argue that this is caused by the lack of feature encodings for edge information during image generation. To address these limitations, we propose a novel edge-enhanced Generative Adversarial Network for layout-to-image generation (termed EL-GAN). The feature encodings of edge information are learned from the multi-level features output by the generator and are iteratively optimized along the generator's pipeline. Two new components are included at each generator level to enable multi-scale learning. The first is the edge generation module (EGM), which is responsible for converting the multi-level features output by the generator into images of different scales and extracting their edge maps. The second is the edge fusion module (EFM), which integrates the feature encodings refined from the edge maps into the subsequent image generation process by modulating the parameters in the normalization layers. Meanwhile, the discriminator is fed with frequency-sensitive image features, which greatly enhances the generation quality of the image's high-frequency edge contours and low-frequency regions. Extensive experiments show that EL-GAN outperforms the state-of-the-art methods on the COCO-Stuff and Visual Genome datasets. Our source code is available at https://github.com/Azure616/EL-GAN.
  • Item
    Abstract Painting Synthesis via Decremental Optimization
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Yan, Ming; Pu, Yuanyuan; Zhao, Pengzheng; Xu, Dan; Wu, Hao; Yang, Qiuxia; Wang, Ruxin; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Existing stroke-based painting synthesis methods usually fail to achieve good results with limited strokes because these methods use semantically irrelevant metrics to calculate the similarity between the painting and photo domains, making it hard to see meaningful semantic information in the painting. This paper proposes a painting synthesis method that uses a CLIP (Contrastive Language-Image Pre-training) model to build a semantically-aware metric so that cross-domain semantic similarity is explicitly involved. To ensure the convergence of the objective function, we design a new strategy called decremental optimization. Specifically, we define a painting as a set of strokes and use a neural renderer to obtain a rasterized painting by optimizing the stroke control parameters through a CLIP-based loss. The optimization process is initialized with an excessive number of brush strokes, and the number of strokes is then gradually reduced to generate paintings of varying levels of abstraction. Experiments show that our method obtains vivid paintings and outperforms competing stroke-based painting synthesis methods when the number of strokes is limited.
  • Item
    Semi-MoreGAN: Semi-supervised Generative Adversarial Network for Mixture of Rain Removal
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Shen, Yiyang; Wang, Yongzhen; Wei, Mingqiang; Chen, Honghua; Xie, Haoran; Cheng, Gary; Wang, Fu Lee; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Real-world rain is a mixture of rain streaks and rainy haze. However, current efforts formulate image rain-streak removal and rainy-haze removal as separate models, worsening the loss of image details. This paper attempts to solve the mixture-of-rain removal problem in a single model by estimating the scene depths of images. To this end, we propose a novel SEMI-supervised Mixture Of rain REmoval Generative Adversarial Network (Semi-MoreGAN). Unlike most existing methods, Semi-MoreGAN is a joint learning paradigm of mixture-of-rain removal and depth estimation, and it effectively integrates image features with depth information for better rain removal. Furthermore, it leverages unpaired real-world rainy and clean images to bridge the gap between synthetic and real-world rain. Extensive experiments show clear improvements of our approach over twenty representative state-of-the-art methods on both synthetic and real-world rainy images. Source code is available at https://github.com/syy-whu/Semi-MoreGAN.
  • Item
    Generative Deformable Radiance Fields for Disentangled Image Synthesis of Topology-Varying Objects
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Wang, Ziyu; Deng, Yu; Yang, Jiaolong; Yu, Jingyi; Tong, Xin; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    3D-aware generative models have demonstrated superb performance in generating 3D neural radiance fields (NeRF) from a collection of monocular 2D images, even for topology-varying object categories. However, these methods still lack the capability to separately control the shape and appearance of the objects in the generated radiance fields. In this paper, we propose a generative model for synthesizing radiance fields of topology-varying objects with disentangled shape and appearance variations. Our method generates deformable radiance fields, which build dense correspondences between the density fields of the objects and encode their appearances in a shared template field. Our disentanglement is achieved in an unsupervised manner without introducing extra labels into previous 3D-aware GAN training. We also develop an effective image inversion scheme for reconstructing the radiance field of an object in a real monocular image and manipulating its shape and appearance. Experiments show that our method can successfully learn the generative model from unstructured monocular images and disentangle the shape and appearance well for objects (e.g., chairs) with large topological variance. The model trained on synthetic data can faithfully reconstruct the real object in a given single image and achieve high-quality texture and shape editing results.
  • Item
    Depth-Aware Shadow Removal
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Fu, Yanping; Gai, Zhenyu; Zhao, Haifeng; Zhang, Shaojie; Shan, Ying; Wu, Yang; Tang, Jin; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Shadow removal from a single image is an ill-posed problem because shadow generation is affected by complex interactions of geometry, albedo, and illumination. Most recent deep learning-based methods try to directly estimate the mapping between non-shadow and shadow image pairs to predict the shadow-free image. However, they are not very effective for shadow images with complex shadows or messy backgrounds. In this paper, we propose a novel end-to-end depth-aware shadow removal method that does not use depth images; instead, it estimates depth information from RGB images and leverages the depth feature as guidance to enhance shadow removal and refinement. The proposed framework consists of three components: depth prediction, shadow removal, and boundary refinement. First, the depth prediction module is used to predict the corresponding depth map of the input shadow image. Then, we propose a new generative adversarial network (GAN) method integrated with depth information to remove shadows in the RGB image. Finally, we propose an effective boundary refinement framework that uses depth cues to alleviate the artifacts around boundaries after shadow removal. We conduct experiments on several public datasets and real-world shadow images. The experimental results demonstrate the efficiency of the proposed method and its superior performance against state-of-the-art methods.
  • Item
    TogetherNet: Bridging Image Restoration and Object Detection Together via Dynamic Enhancement Learning
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Wang, Yongzhen; Yan, Xuefeng; Zhang, Kaiwen; Gong, Lina; Xie, Haoran; Wang, Fu Lee; Wei, Mingqiang; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Adverse weather conditions such as haze, rain, and snow often impair the quality of captured images, causing detection networks trained on normal images to generalize poorly in these scenarios. In this paper, we raise an intriguing question: can the combination of image restoration and object detection boost the performance of cutting-edge detectors in adverse weather conditions? To answer it, we propose an effective yet unified detection paradigm that bridges these two subtasks together via dynamic enhancement learning to discern objects in adverse weather conditions, called TogetherNet. Different from existing efforts that intuitively apply image dehazing/deraining as a pre-processing step, TogetherNet treats restoration and detection as a multi-task joint learning problem. Following the joint learning scheme, clean features produced by the restoration network can be shared to learn better object detection in the detection network, thus helping TogetherNet enhance the detection capacity in adverse weather conditions. Besides the joint learning architecture, we design a new Dynamic Transformer Feature Enhancement module to improve the feature extraction and representation capabilities of TogetherNet. Extensive experiments on both synthetic and real-world datasets demonstrate that TogetherNet outperforms state-of-the-art detection approaches by a large margin both quantitatively and qualitatively. Source code is available at https://github.com/yz-wang/TogetherNet.
  • Item
    Color-mapped Noise Vector Fields for Generating Procedural Micro-patterns
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Grenier, Charline; Sauvage, Basile; Dischler, Jean-Michel; Thery, Sylvain; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Stochastic micro-patterns successfully enhance the realism of virtual scenes. Procedural models using noise combined with transfer functions are extremely efficient. However, most patterns produced today employ 1D transfer functions, which assign color, transparency, or other material attributes based solely on the single scalar quantity of noise. Multi-dimensional transfer functions have received widespread attention in other fields, such as scientific volume rendering, but their potential has not yet been well explored for modeling micro-patterns in the field of procedural texturing. We propose a new procedural model for stochastic patterns, defined as the composition of a bi-dimensional transfer function (a.k.a. color-map) with a stochastic vector field. Our model is versatile, as it encompasses several existing procedural noises, including Gaussian noise and phasor noise. It also generates a much larger gamut of patterns, including locally structured patterns which are notoriously difficult to reproduce. We leverage the Gaussian assumption and a tiling-and-blending algorithm to provide real-time generation and filtering. A key contribution is a real-time approximation of the second-order statistics over an arbitrary pixel footprint, which additionally enables the filtering of procedural normal maps. We exhibit a wide variety of results, including Gaussian patterns, profiled waves, and concentric and non-concentric patterns.
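    The model's structure, composing a 2D color-map with a stochastic vector field, can be sketched in a few lines: two band-limited Gaussian noise fields serve as the vector field, and their values index a 2D lookup table. Everything below (field sizes, filter bandwidths, the random color-map) is an illustrative assumption, not the paper's construction.
    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)
    size, table = 256, 64

    def gaussian_field(sigma):
        # band-limited Gaussian noise, normalized and mapped into [0, 1)
        f = gaussian_filter(rng.standard_normal((size, size)), sigma)
        f = (f - f.mean()) / f.std()
        return np.clip(0.5 + f / 6.0, 0.0, 0.999)

    u, v = gaussian_field(4.0), gaussian_field(12.0)   # the vector field (u, v)
    cmap2d = rng.random((table, table, 3))             # any 2D transfer function
    pattern = cmap2d[(u * table).astype(int), (v * table).astype(int)]
    print(pattern.shape)   # (256, 256, 3)
    ```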
  • Item
    Efficient Texture Parameterization Driven by Perceptual-Loss-on-Screen
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Sun, Haoran; Wang, Shiyi; Wu, Wenhai; Jin, Yao; Bao, Hujun; Huang, Jin; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Texture mapping is a ubiquitous technique to enrich the visual appearance of a mesh: the desired signal (e.g. diffuse color) on the mesh is represented in a texture image, discretized into pixels, through a bijective parameterization. To achieve high visual quality, a large number of pixels is generally required, which imposes a heavy burden on storage, memory, and transmission. We propose to use a perceptual model and a rendering procedure to measure the loss caused by the discretization, and then optimize the parameterization to improve efficiency, i.e. to use fewer pixels under a comparable perceptual loss. The general perceptual model and rendering procedure can be very complicated, and the anisotropy rooted in the square shape of pixels makes the problem even harder to solve. We adopt a two-stage strategy and use Bayesian optimization in the triangle-wise stage. With our carefully designed weighting scheme, the mesh-wise optimization can take the triangle-wise perceptual loss into consideration under a global conforming requirement. Compared with many parameterizations that are manually designed, driven by interpolation error, or driven by isotropic energy, ours can use significantly fewer pixels at comparable perceptual loss, or vice versa.
  • Item
    Pixel Art Adaptation for Handicraft Fabrication
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Igarashi, Yuki; Igarashi, Takeo; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Knitting and weaving patterns can be visually represented as pixel art. In hand knitting and weaving, human error (shifting, duplicating, or skipping pixels) can occur during manual fabrication. It is too costly to change already-fabricated pixels, so experts often adapt pixels that have not yet been fabricated to make the errors less visible. This paper proposes an automatic adaptation process to minimize visual artifacts. The system presents multiple adaptation possibilities to the user, who can choose a proposed adaptation or untie and re-fabricate their work. In typical handicraft fabrication, the design is complete before the start of fabrication and remains fixed during fabrication; our system instead keeps updating the design during fabrication to tolerate human errors in the process. We implemented the proposed algorithm in a system that visualizes the knitting, cross-stitching, and bead-weaving processes.
  • Item
    Shape-Guided Mixed Metro Map Layout
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Batik, Tobias; Terziadis, Soeren; Wang, Yu-Shuen; Nöllenburg, Martin; Wu, Hsiang-Yun; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Metro or transit maps are schematic representations of transit networks that facilitate effective route-finding. These maps are often advertised on a web page or pamphlet highlighting routes from source to destination stations. To visually support such route-finding, designers often distort the layout by embedding symbolic shapes (e.g., circular routes) in order to guide readers' attention (e.g., the Moscow map and the Japan railway map). However, manually producing such maps is labor-intensive, and the effect of the shapes remains unclear. In this paper, we propose an approach to generate such mixed metro maps that takes user-defined shapes as input. In this mixed design, lines that are used to approximate the shapes are arranged symbolically, while the remaining lines follow classical layout conventions. A three-step algorithm, including (1) detecting and selecting routes for shape approximation, (2) shape and layout deformation, and (3) aligning lines on a grid, is integrated to guarantee good visual quality. Our contribution lies in the definition of the mixed metro map problem and the formulation of design criteria so that the problem can be resolved systematically using the optimization paradigm. Finally, we evaluate the performance of our approach and perform a user study to test whether the embedded shapes are recognizable and whether they reduce map quality.
  • Item
    MoMaS: Mold Manifold Simulation for Real-time Procedural Texturing
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Maggioli, Filippo; Marin, Riccardo; Melzi, Simone; Rodolà, Emanuele; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    The slime mold algorithm has recently been under the spotlight thanks to its compelling properties studied across many disciplines, such as biology, computation theory, and artificial intelligence. However, existing implementations act only on planar surfaces, and no adaptation to arbitrary surfaces is available. Inspired by this gap, we propose a novel characterization of the mold algorithm that works on arbitrary curved surfaces. Our algorithm is easily parallelizable on GPUs and can model the evolution of millions of agents in real time over surface meshes with several thousand triangles, while keeping the simplicity of the slime paradigm. We perform a comprehensive set of experiments, providing insights on stability, behavior, and sensitivity to various design choices. We characterize a broad collection of behaviors with a limited set of controllable and interpretable parameters, enabling a novel family of heterogeneous, high-quality procedural textures. The appearance and complexity of these patterns are well-suited to diverse materials and purposes, and we add another layer of generalization by allowing different mold species to compete and interact in parallel.
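    For reference, the planar baseline that such a method lifts to curved surfaces is the classic sense-rotate-move-deposit agent loop over a trail field. A minimal sketch with illustrative parameters (sensor angles, turn rate, evaporation), not the paper's surface formulation:
    ```python
    import numpy as np

    N, G = 10000, 256
    rng = np.random.default_rng(1)
    pos = rng.random((N, 2)) * G           # agent positions on a toroidal grid
    ang = rng.random(N) * 2 * np.pi        # agent headings
    trail = np.zeros((G, G))

    def sense(offset, dist=4.0):
        # sample the trail field ahead of each agent at an angular offset
        a = ang + offset
        p = (pos + dist * np.stack([np.cos(a), np.sin(a)], axis=1)) % G
        return trail[p[:, 0].astype(int), p[:, 1].astype(int)]

    for _ in range(100):
        left, fwd, right = sense(0.4), sense(0.0), sense(-0.4)
        ang += 0.3 * ((right > fwd) & (right > left))   # turn toward strongest trail
        ang -= 0.3 * ((left > fwd) & (left >= right))
        pos = (pos + np.stack([np.cos(ang), np.sin(ang)], axis=1)) % G
        ix, iy = pos[:, 0].astype(int), pos[:, 1].astype(int)
        np.add.at(trail, (ix, iy), 1.0)                 # deposit pheromone
        trail *= 0.95                                   # evaporation
    ```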
  • Item
    Large-Scale Worst-Case Topology Optimization
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Zhang, Di; Zhai, Xiaoya; Fu, Xiao-Ming; Wang, Heming; Liu, Ligang; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    We propose a novel topology optimization method to efficiently minimize the maximum compliance of a high-resolution model bearing uncertain external loads. Central to this approach is a modified power method that can quickly compute the maximum eigenvalue to evaluate the worst-case compliance, making our method suitable for large-scale topology optimization. After obtaining the worst-case compliance, we use the adjoint variable method to perform the sensitivity analysis for updating the density variables. By iteratively computing the worst-case compliance, performing the sensitivity analysis, and updating the density variables, our algorithm obtains optimized models with high efficiency. The capability and feasibility of our approach are demonstrated on various large-scale models. Typically, for a model of size 512×170×170 with 69,934 loading nodes, our method took about 50 minutes on a desktop computer with an NVIDIA GTX 1080Ti graphics card with 11 GB of memory.
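    In its unmodified textbook form, the power method this approach builds on repeatedly applies the matrix and renormalizes, with the Rayleigh quotient converging to the largest eigenvalue. A sketch of that basic iteration (the paper's modified variant is tailored to worst-case compliance):
    ```python
    import numpy as np

    def power_method(A, iters=200, tol=1e-10):
        x = np.random.default_rng(0).standard_normal(A.shape[0])
        lam = 0.0
        for _ in range(iters):
            y = A @ x                      # one matrix-vector product per step
            lam_new = x @ y                # Rayleigh quotient (A symmetric)
            x = y / np.linalg.norm(y)
            if abs(lam_new - lam) < tol * max(abs(lam_new), 1.0):
                break
            lam = lam_new
        return lam_new, x                  # max eigenvalue and eigenvector

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    print(power_method(A)[0])              # approx. 4.618 (largest eigenvalue)
    ```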
  • Item
    Spatio-temporal Keyframe Control of Traffic Simulation using Coarse-to-Fine Optimization
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Han, Yi; Wang, He; Jin, Xiaogang; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    We present a novel traffic trajectory editing method which uses spatio-temporal keyframes to control vehicles during the simulation and generate desired traffic trajectories. Taking self-motivation, path following, and collision avoidance into account, the proposed force-based traffic simulation framework updates vehicles' motions in both Frenet and Cartesian coordinates. From user-specified way-points, lane-level navigation can be generated by reference path planning. Given a keyframe, a coarse-to-fine optimization is proposed to efficiently generate a plausible trajectory that satisfies the spatio-temporal constraints. First, a directed state-time graph constructed along the reference path is used to search for a coarse-grained trajectory by mapping the keyframe as the goal. Then, using the information extracted from the coarse trajectory as initialization, adjoint-based optimization is applied to generate a finer trajectory with smooth motions based on our force-based simulation. We validate our method with extensive experiments.
  • Item
    NSTO: Neural Synthesizing Topology Optimization for Modulated Structure Generation
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Zhong, Shengze; Punpongsanon, Parinya; Iwai, Daisuke; Sato, Kosuke; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Nature evolves structures like honeycombs that achieve optimized performance with limited material. Such efficient structures can be created artificially by combining structural topology optimization with additive manufacturing. However, the extensive computational cost of topology optimization leads to low mesh resolution, long solving times, and rough boundaries, which fail to meet the demands of growing personal fabrication and the capability of modern printers. We therefore propose neural synthesizing topology optimization, which leverages a self-supervised coordinate-based network to optimize structures with significantly shorter computation time, where the network encodes the structural material layout as an implicit function of coordinates. A continuous solution space is further generated from optimization tasks under varying boundary conditions or constraints, allowing users to instantly infer novel solutions. We demonstrate the system's efficacy for a broad range of usage scenarios through numerical experiments and 3D printing.
  • Item
    Efficient and Stable Simulation of Inextensible Cosserat Rods by a Compact Representation
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Zhao, Chongyao; Lin, Jinkeng; Wang, Tianyu; Bao, Hujun; Huang, Jin; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Piecewise linear inextensible Cosserat rods are usually represented by the Cartesian coordinates of their vertices and quaternions on the segments. Such representations use excessive degrees of freedom (DOFs) and need many additional constraints, causing unnecessary numerical difficulties and computational burden for simulation. We propose a simple yet compact representation that exactly matches the intrinsic DOFs and naturally satisfies all such constraints. Specifically, viewing a rod as a chain of rigid segments, we encode its shape as the Cartesian coordinates of its root vertex and an axis-angle representation of the material frame on each segment. Under our representation, the Hessian of the implicit time-stepping has special non-zero patterns. Exploiting these specialties, we can solve the associated linear equations in nearly linear complexity. Furthermore, we carefully design a preconditioner, which is proved to be always symmetric positive-definite and accelerates the PCG solver by one to two orders of magnitude compared with the widely used block-diagonal one. Compared with other technical choices, including Super-Helices, a specially designed compact representation for inextensible Cosserat rods, our method achieves better performance and stability, and can simulate an inextensible Cosserat rod with hundreds of vertices and tens of collisions in real time under relatively large time steps.
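    One way to picture the compact representation: given the root vertex and one axis-angle vector per segment, vertex positions follow by rotating a reference direction (Rodrigues' formula) and chaining fixed-length segments, so inextensibility holds by construction. The reconstruction below is an assumed illustration, not the paper's exact convention.
    ```python
    import numpy as np

    def rodrigues(v):
        # rotation matrix for axis-angle vector v (angle = |v|, axis = v/|v|)
        th = np.linalg.norm(v)
        if th < 1e-12:
            return np.eye(3)
        k = v / th
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

    def rod_vertices(root, axis_angles, seg_len):
        verts = [np.asarray(root, float)]
        for aa in axis_angles:               # material frame per rigid segment
            R = rodrigues(aa)
            verts.append(verts[-1] + seg_len * (R @ np.array([0.0, 0.0, 1.0])))
        return np.array(verts)

    aa = np.tile([0.1, 0.0, 0.0], (5, 1))    # slight tilt about x per segment
    print(rod_vertices([0, 0, 0], aa, seg_len=1.0))
    ```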
  • Item
    Learning 3D Shape Aesthetics Globally and Locally
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Chen, Minchan; Lau, Manfred; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Previous work computes the visual aesthetics of 3D shapes ''globally'': shape aesthetics data are collected for whole 3D shapes and then used to compute the aesthetics of whole 3D shapes. In this paper, we introduce a novel method that takes such ''global'' shape aesthetics data and learns both a ''global'' shape aesthetics measure that computes aesthetics scores for whole 3D shapes, and a ''local'' shape aesthetics measure that computes the extent to which a local region on the 3D shape surface contributes to the whole shape's aesthetics. These aesthetics measures are learned, and hence do not consider existing handcrafted notions of what makes a 3D shape aesthetic. We take a dataset of global pairwise shape aesthetics, where humans compare pairs of shapes and say which shape from each pair is more aesthetic. Our solution proposes a point-based neural network that takes a 3D shape represented by surface patches as input and jointly outputs its global aesthetics score and a local aesthetics map. To build connections between global and local aesthetics, we embed the global and local features into the same latent space and then output scores with weight-shared aesthetics predictors. Furthermore, we design three loss functions to jointly supervise the training. We demonstrate the shape aesthetics results globally and locally to show that our framework makes good global aesthetics predictions while the predicted aesthetics maps are consistent with human perception. In addition, we present several applications enabled by our local aesthetics metric.
  • Item
    Eye-Tracking-Based Prediction of User Experience in VR Locomotion Using Machine Learning
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Gao, Hong; Kasneci, Enkelejda; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    VR locomotion is one of the most important design features of VR applications and is widely studied. When evaluating locomotion techniques, user experience is usually the first consideration, as it provides direct insights into the usability of the locomotion technique and users' thoughts about it. In the literature, user experience is typically measured with post-hoc questionnaires or surveys, while users' behavioral (i.e., eye-tracking) data during locomotion, which can reveal deeper subconscious thoughts of users, has rarely been considered and thus remains to be explored. To this end, we investigate the feasibility of classifying users experiencing VR locomotion into L-UE and H-UE (i.e., low- and high-user-experience groups) based on eye-tracking data alone. To collect data, a user study was conducted in which participants navigated a virtual environment using five locomotion techniques and their eye-tracking data was recorded. A standard questionnaire assessing the usability and participants' perception of the locomotion technique was used to establish the ground truth of the user experience. We trained our machine learning models on the eye-tracking features extracted from the time-series data using a sliding window approach. The best random forest model achieved an average accuracy of over 0.7 in 50 runs. Moreover, the SHapley Additive exPlanations (SHAP) approach uncovered the underlying relationships between eye-tracking features and user experience, and these findings were further supported by the statistical results. Our research provides a viable tool for assessing user experience with VR locomotion, which can further drive the improvement of locomotion techniques. Moreover, our research benefits not only VR locomotion, but also VR systems whose design needs to be improved to provide a good user experience.
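    The classification pipeline described above, sliding-window features over gaze time-series feeding a random forest, can be sketched with scikit-learn; the feature set, window sizes, and toy data below are assumptions for illustration.
    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    gaze = rng.standard_normal((20, 600))          # 20 sessions x 600 gaze samples
    labels = np.array([0] * 10 + [1] * 10)         # 0 = L-UE, 1 = H-UE (toy labels)

    def window_features(series, win=100, step=50):
        # simple per-window statistics concatenated into one feature vector
        feats = []
        for s in range(0, len(series) - win + 1, step):
            w = series[s:s + win]
            feats.append([w.mean(), w.std(), np.abs(np.diff(w)).mean()])
        return np.concatenate(feats)

    X = np.array([window_features(g) for g in gaze])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    print(cross_val_score(clf, X, labels, cv=5).mean())  # chance level on toy data
    ```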
  • Item
    Implicit Neural Deformation for Sparse-View Face Reconstruction
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Li, Moran; Huang, Haibin; Zheng, Yi; Li, Mengtian; Sang, Nong; Ma, Chongyang; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    In this work, we present a new method for 3D face reconstruction from sparse-view RGB images. Unlike previous methods which are built upon 3D morphable models (3DMMs) with limited details, we leverage an implicit representation to encode rich geometric features. Our overall pipeline consists of two major components, including a geometry network, which learns a deformable neural signed distance function (SDF) as the 3D face representation, and a rendering network, which learns to render on-surface points of the neural SDF to match the input images via self-supervised optimization. To handle in-the-wild sparse-view input of the same target with different expressions at test time, we propose residual latent code to effectively expand the shape space of the learned implicit face representation as well as a novel view-switch loss to enforce consistency among different views. Our experimental results on several benchmark datasets demonstrate that our approach outperforms alternative baselines and achieves superior face reconstruction results compared to state-of-the-art methods.
  • Item
    Learning Dynamic 3D Geometry and Texture for Video Face Swapping
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Otto, Christopher; Naruniec, Jacek; Helminger, Leonhard; Etterlin, Thomas; Mignone, Graziana; Chandran, Prashanth; Zoss, Gaspard; Schroers, Christopher; Gross, Markus; Gotardo, Paulo; Bradley, Derek; Weber, Romann; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Face swapping is the process of applying a source actor's appearance to a target actor's performance in a video. This is a challenging visual effect that has seen increasing demand in film and television production. Recent work has shown that data-driven methods based on deep learning can produce compelling effects at production quality in a fraction of the time required for a traditional 3D pipeline. However, the dominant approach operates only on 2D imagery, without reference to the underlying facial geometry or texture, resulting in poor generalization under novel viewpoints and little artistic control. Methods that do incorporate geometry rely on pre-learned facial priors that do not adapt well to particular geometric features of the source and target faces. We approach the problem of face swapping from the perspective of learning simultaneous convolutional facial autoencoders for the source and target identities, using a shared encoder network with identity-specific decoders. The key novelty in our approach is that each decoder first lifts the latent code into a 3D representation, comprising a dynamic face texture and a deformable 3D face shape, before projecting this 3D face back onto the input image using a differentiable renderer. The coupled autoencoders are trained only on videos of the source and target identities, without requiring 3D supervision. By leveraging the learned 3D geometry and texture, our method achieves face swapping with higher quality than when using off-the-shelf monocular 3D face reconstruction, and an overall lower FID score than state-of-the-art 2D methods. Furthermore, our 3D representation allows for efficient artistic control over the result, which can be hard to achieve with existing 2D approaches.
  • Item
    BareSkinNet: De-makeup and De-lighting via 3D Face Reconstruction
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Yang, Xingchao; Taketomi, Takafumi; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    We propose BareSkinNet, a novel method that simultaneously removes makeup and lighting influences from a face image. Our method leverages a 3D morphable model and does not require a reference clean face image or a specified light condition. By incorporating 3D face reconstruction, we can easily obtain 3D geometry and coarse 3D textures. Using this information, we can infer normalized 3D face texture maps (diffuse, normal, roughness, and specular) with an image-translation network. The reconstructed 3D face textures, free of undesirable information, significantly benefit subsequent processes such as re-lighting or re-makeup. In experiments, we show that BareSkinNet outperforms state-of-the-art makeup removal methods. In addition, our method is remarkably helpful in removing makeup to generate consistent high-fidelity texture maps, which makes it extendable to many realistic face generation applications. It can also automatically build graphics assets of paired before-and-after face makeup images with corresponding 3D data. This will assist artists in accelerating their work, such as 3D makeup avatar creation.
  • Item
    ShadowPatch: Shadow Based Segmentation for Reliable Depth Discontinuities in Photometric Stereo
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Heep, Moritz; Zell, Eduard; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Photometric stereo is a well-established method with outstanding traits for recovering surface details and material properties such as surface albedo or even specularity. However, while the surface is locally well-defined, computing absolute depth by integrating surface normals is notoriously difficult. Integration errors can be introduced and propagated by numerical inaccuracies from inter-reflection of light or non-Lambertian surfaces. In particular, ignoring depth discontinuities of overlapping or disconnected objects introduces strong distortion artefacts. During the acquisition process, the object is lit from different positions, and self-shadowing is generally considered an unavoidable drawback that complicates the numerical estimation of normals. However, we observe that shadow boundaries correlate strongly with depth discontinuities, and we exploit the visual structure introduced by self-shadowing to create a consistent image segmentation of continuous surfaces. To make depth estimation more robust, we deeply integrate photometric stereo with depth-from-stereo. The shadow-based segmentation of continuous surfaces allows us to reduce the computational cost of correspondence search in depth-from-stereo. To speed up computation further, we merge segments into larger meta-segments during an iterative depth optimization. The reconstruction error of our method is equal to or smaller than that of previous work, and our reconstructions are characterized by robust handling of depth discontinuities, without any smearing artifacts.