42-Issue 6


ORIGINAL ARTICLES
Texture Inpainting for Photogrammetric Models
Maggiordomo, A.; Cignoni, P.; Tarini, M.
Multi‐agent Path Planning with Heterogenous Interactions in Tight Spaces
Modi, V.; Chen, Y.; Madan, A.; Sueda, S.; Levin, D. I. W.
Line Drawing Vectorization via Coarse‐to‐Fine Curve Network Optimization
Bao, Bin; Fu, Hongbo
tachyon: Efficient Shared Memory Parallel Computation of Extremum Graphs
Ande, Abhijath; Subhash, Varshini; Natarajan, Vijay
Exploration of Player Behaviours from Broadcast Badminton Videos
Chen, Wei‐Ting; Wu, Hsiang‐Yun; Shih, Yun‐An; Wang, Chih‐Chuan; Wang, Yu‐Shuen
Break and Splice: A Statistical Method for Non‐Rigid Point Cloud Registration
Gao, Qinghong; Zhao, Yan; Xi, Long; Tang, Wen; Wan, Tao Ruan
Feature Representation for High‐resolution Clothed Human Reconstruction
Pu, Juncheng; Liu, Li; Fu, Xiaodong; Su, Zhuo; Liu, Lijun; Peng, Wei
3D Generative Model Latent Disentanglement via Local Eigenprojection
Foti, Simone; Koo, Bongjin; Stoyanov, Danail; Clarkson, Matthew J.
Immersive Free‐Viewpoint Panorama Rendering from Omnidirectional Stereo Video
Mühlhausen, Moritz; Kappel, Moritz; Kassubeck, Marc; Wöhler, Leslie; Grogorick, Steve; Castillo, Susana; Eisemann, Martin; Magnor, Marcus
Adversarial Interactive Cartoon Sketch Colourization with Texture Constraint and Auxiliary Auto‐Encoder
Liu, Xiaoyu; Zhu, Shaoqiang; Zeng, Yao; Zhang, Junsong
Efficient Hardware Acceleration of Robust Volumetric Light Transport Simulation
Moonen, Nol; Jalba, Andrei C.
Garment Model Extraction from Clothed Mannequin Scan
Gao, Qiqi; Taketomi, Takafumi
Visually Abstracting Event Sequences as Double Trees Enriched with Category‐Based Comparison
Krause, Cedric; Agarwal, Shivam; Burch, Michael; Beck, Fabian
A Survey of Personalized Interior Design
Wang, Y.T.; Liang, C.; Huai, N.; Chen, J.; Zhang, C.J.
It's about Time: Analytical Time Periodization
Andrienko, Natalia; Andrienko, Gennady
MesoGAN: Generative Neural Reflectance Shells
Diolatzis, Stavros; Novak, Jan; Rousselle, Fabrice; Granskog, Jonathan; Aittala, Miika; Ramamoorthi, Ravi; Drettakis, George
Model‐based Crowd Behaviours in Human‐solution Space
Xiang, Wei; Wang, He; Zhang, Yuqing; Yip, Milo K.; Jin, Xiaogang
Harmonized Portrait‐Background Image Composition
Wang, Yijiang; Li, Yuqi; Wang, Chong; Ye, Xulun
Recurrent Motion Refiner for Locomotion Stitching
Kim, Haemin; Cho, Kyungmin; Hong, Seokhyeon; Noh, Junyong
EvIcon: Designing High‐Usability Icon with Human‐in‐the‐loop Exploration and IconCLIP
Shen, I‐Chao; Cherng, Fu‐Yin; Igarashi, Takeo; Lin, Wen‐Chieh; Chen, Bing‐Yu
Episodes and Topics in Multivariate Temporal Data
Andrienko, Natalia; Andrienko, Gennady; Shirato, Gota
Distributed Poisson Surface Reconstruction
Kazhdan, M.; Hoppe, H.
A Semi‐Procedural Convolutional Material Prior
Zhou, Xilong; Hašan, Miloš; Deschaintre, Valentin; Guerrero, Paul; Sunkavalli, Kalyan; Kalantari, Nima Khademi
Numerical Coarsening with Neural Shape Functions
Ni, Ning; Xu, Qingyu; Li, Zhehao; Fu, Xiao‐Ming; Liu, Ligang
Two‐Step Training: Adjustable Sketch Colourization via Reference Image and Text Tag
Yan, Dingkun; Ito, Ryogo; Moriai, Ryo; Saito, Suguru
Reference‐based Screentone Transfer via Pattern Correspondence and Regularization
Li, Zhansheng; Zhao, Nanxuan; Wu, Zongwei; Dai, Yihua; Wang, Junle; Jing, Yanqing; He, Shengfeng
OaIF: Occlusion‐Aware Implicit Function for Clothed Human Re‐construction
Tan, Yudi; Guan, Boliang; Zhou, Fan; Su, Zhuo
ROI Scissor: Interactive Segmentation of Feature Region of Interest in a Triangular Mesh
Moon, Ji‐Hye; Ha, Yujin; Park, Sanghun; Kim, Myung‐Soo; Yoon, Seung‐Hyun
Accompany Children's Learning for You: An Intelligent Companion Learning System
Qian, Jiankai; Jiang, Xinbo; Ma, Jiayao; Li, Jiachen; Gao, Zhenzhen; Qin, Xueying
State of the Art of Molecular Visualization in Immersive Virtual Environments
Kuťák, David; Vázquez, Pere‐Pau; Isenberg, Tobias; Krone, Michael; Baaden, Marc; Byška, Jan; Kozlíková, Barbora; Miao, Haichao
Evonne: A Visual Tool for Explaining Reasoning with OWL Ontologies and Supporting Interactive Debugging
Méndez, J.; Alrabbaa, C.; Koopmann, P.; Langner, R.; Baader, F.; Dachselt, R.
Visual Parameter Space Exploration in Time and Space
Piccolotto, Nikolaus; Bögl, Markus; Miksch, Silvia
Faster Edge‐Path Bundling through Graph Spanners
Wallinger, Markus; Archambault, Daniel; Auber, David; Nöllenburg, Martin; Peltonen, Jaakko
Are We There Yet? A Roadmap of Network Visualization from Surveys to Task Taxonomies
Filipov, Velitchko; Arleo, Alessio; Miksch, Silvia
Multilevel Robustness for 2D Vector Field Feature Tracking, Selection and Comparison
Yan, Lin; Ullrich, Paul Aaron; Van Roekel, Luke P.; Wang, Bei; Guo, Hanqi
iFUNDit: Visual Profiling of Fund Investment Styles
Zhang, R.; Ku, B. K.; Wang, Y.; Yue, X.; Liu, S.; Li, K.; Qu, H.
A Characterization of Interactive Visual Data Stories With a Spatio‐Temporal Context
Mayer, Benedikt; Steinhauer, Nastasja; Preim, Bernhard; Meuschke, Monique
Smooth Transitions Between Parallel Coordinates and Scatter Plots via Polycurve Star Plots
Kiesel, Dora; Riehmann, Patrick; Froehlich, Bernd
Deep Learning for Scene Flow Estimation on Point Clouds: A Survey and Prospective Trends
Li, Zhiqi; Xiang, Nan; Chen, Honghua; Zhang, Jianjun; Yang, Xiaosong
Triangle Influence Supersets for Fast Distance Computation
Pujol, Eduard; Chica, Antonio
ARAP Revisited Discretizing the Elastic Energy using Intrinsic Voronoi Cells
Finnendahl, Ugo; Schwartz, Matthias; Alexa, Marc
CORRIGENDUM
Corrigendum to “Making Procedural Water Waves Boundary‐aware”, “Primal/Dual Descent Methods for Dynamics”, and “Detailed Rigid Body Simulation with Extended Position Based Dynamics”
Issue Information
Issue Information

BibTeX (42-Issue 6)
                
@article{10.1111:cgf.14735,
  journal = {Computer Graphics Forum},
  title = {{Texture Inpainting for Photogrammetric Models}},
  author = {Maggiordomo, A. and Cignoni, P. and Tarini, M.},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14735}
}

@article{10.1111:cgf.14737,
  journal = {Computer Graphics Forum},
  title = {{Multi‐agent Path Planning with Heterogenous Interactions in Tight Spaces}},
  author = {Modi, V. and Chen, Y. and Madan, A. and Sueda, S. and Levin, D. I. W.},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14737}
}

@article{10.1111:cgf.14784,
  journal = {Computer Graphics Forum},
  title = {{tachyon: Efficient Shared Memory Parallel Computation of Extremum Graphs}},
  author = {Ande, Abhijath and Subhash, Varshini and Natarajan, Vijay},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14784}
}

@article{10.1111:cgf.14787,
  journal = {Computer Graphics Forum},
  title = {{Line Drawing Vectorization via Coarse‐to‐Fine Curve Network Optimization}},
  author = {Bao, Bin and Fu, Hongbo},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14787}
}

@article{10.1111:cgf.14788,
  journal = {Computer Graphics Forum},
  title = {{Break and Splice: A Statistical Method for Non‐Rigid Point Cloud Registration}},
  author = {Gao, Qinghong and Zhao, Yan and Xi, Long and Tang, Wen and Wan, Tao Ruan},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14788}
}

@article{10.1111:cgf.14792,
  journal = {Computer Graphics Forum},
  title = {{Feature Representation for High‐resolution Clothed Human Reconstruction}},
  author = {Pu, Juncheng and Liu, Li and Fu, Xiaodong and Su, Zhuo and Liu, Lijun and Peng, Wei},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14792}
}

@article{10.1111:cgf.14793,
  journal = {Computer Graphics Forum},
  title = {{3D Generative Model Latent Disentanglement via Local Eigenprojection}},
  author = {Foti, Simone and Koo, Bongjin and Stoyanov, Danail and Clarkson, Matthew J.},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14793}
}

@article{10.1111:cgf.14786,
  journal = {Computer Graphics Forum},
  title = {{Exploration of Player Behaviours from Broadcast Badminton Videos}},
  author = {Chen, Wei‐Ting and Wu, Hsiang‐Yun and Shih, Yun‐An and Wang, Chih‐Chuan and Wang, Yu‐Shuen},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14786}
}

@article{10.1111:cgf.14805,
  journal = {Computer Graphics Forum},
  title = {{Visually Abstracting Event Sequences as Double Trees Enriched with Category‐Based Comparison}},
  author = {Krause, Cedric and Agarwal, Shivam and Burch, Michael and Beck, Fabian},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14805}
}

@article{10.1111:cgf.14797,
  journal = {Computer Graphics Forum},
  title = {{Adversarial Interactive Cartoon Sketch Colourization with Texture Constraint and Auxiliary Auto‐Encoder}},
  author = {Liu, Xiaoyu and Zhu, Shaoqiang and Zeng, Yao and Zhang, Junsong},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14797}
}

@article{10.1111:cgf.14796,
  journal = {Computer Graphics Forum},
  title = {{Immersive Free‐Viewpoint Panorama Rendering from Omnidirectional Stereo Video}},
  author = {Mühlhausen, Moritz and Kappel, Moritz and Kassubeck, Marc and Wöhler, Leslie and Grogorick, Steve and Castillo, Susana and Eisemann, Martin and Magnor, Marcus},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14796}
}

@article{10.1111:cgf.14802,
  journal = {Computer Graphics Forum},
  title = {{Efficient Hardware Acceleration of Robust Volumetric Light Transport Simulation}},
  author = {Moonen, Nol and Jalba, Andrei C.},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14802}
}

@article{10.1111:cgf.14804,
  journal = {Computer Graphics Forum},
  title = {{Garment Model Extraction from Clothed Mannequin Scan}},
  author = {Gao, Qiqi and Taketomi, Takafumi},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14804}
}

@article{10.1111:cgf.14844,
  journal = {Computer Graphics Forum},
  title = {{A Survey of Personalized Interior Design}},
  author = {Wang, Y.T. and Liang, C. and Huai, N. and Chen, J. and Zhang, C.J.},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14844}
}

@article{10.1111:cgf.14919,
  journal = {Computer Graphics Forum},
  title = {{Model‐based Crowd Behaviours in Human‐solution Space}},
  author = {Xiang, Wei and Wang, He and Zhang, Yuqing and Yip, Milo K. and Jin, Xiaogang},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14919}
}

@article{10.1111:cgf.14846,
  journal = {Computer Graphics Forum},
  title = {{MesoGAN: Generative Neural Reflectance Shells}},
  author = {Diolatzis, Stavros and Novak, Jan and Rousselle, Fabrice and Granskog, Jonathan and Aittala, Miika and Ramamoorthi, Ravi and Drettakis, George},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14846}
}

@article{10.1111:cgf.14845,
  journal = {Computer Graphics Forum},
  title = {{It's about Time: Analytical Time Periodization}},
  author = {Andrienko, Natalia and Andrienko, Gennady},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14845}
}

@article{10.1111:cgf.14924,
  journal = {Computer Graphics Forum},
  title = {{EvIcon: Designing High‐Usability Icon with Human‐in‐the‐loop Exploration and IconCLIP}},
  author = {Shen, I‐Chao and Cherng, Fu‐Yin and Igarashi, Takeo and Lin, Wen‐Chieh and Chen, Bing‐Yu},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14924}
}

@article{10.1111:cgf.14925,
  journal = {Computer Graphics Forum},
  title = {{Distributed Poisson Surface Reconstruction}},
  author = {Kazhdan, M. and Hoppe, H.},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14925}
}

@article{10.1111:cgf.14921,
  journal = {Computer Graphics Forum},
  title = {{Harmonized Portrait‐Background Image Composition}},
  author = {Wang, Yijiang and Li, Yuqi and Wang, Chong and Ye, Xulun},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14921}
}

@article{10.1111:cgf.14920,
  journal = {Computer Graphics Forum},
  title = {{Recurrent Motion Refiner for Locomotion Stitching}},
  author = {Kim, Haemin and Cho, Kyungmin and Hong, Seokhyeon and Noh, Junyong},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14920}
}

@article{10.1111:cgf.14926,
  journal = {Computer Graphics Forum},
  title = {{Episodes and Topics in Multivariate Temporal Data}},
  author = {Andrienko, Natalia and Andrienko, Gennady and Shirato, Gota},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14926}
}

@article{10.1111:cgf.14798,
  journal = {Computer Graphics Forum},
  title = {{OaIF: Occlusion‐Aware Implicit Function for Clothed Human Re‐construction}},
  author = {Tan, Yudi and Guan, Boliang and Zhou, Fan and Su, Zhuo},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14798}
}

@article{10.1111:cgf.14800,
  journal = {Computer Graphics Forum},
  title = {{Reference‐based Screentone Transfer via Pattern Correspondence and Regularization}},
  author = {Li, Zhansheng and Zhao, Nanxuan and Wu, Zongwei and Dai, Yihua and Wang, Junle and Jing, Yanqing and He, Shengfeng},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14800}
}

@article{10.1111:cgf.14736,
  journal = {Computer Graphics Forum},
  title = {{Numerical Coarsening with Neural Shape Functions}},
  author = {Ni, Ning and Xu, Qingyu and Li, Zhehao and Fu, Xiao‐Ming and Liu, Ligang},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14736}
}

@article{10.1111:cgf.14781,
  journal = {Computer Graphics Forum},
  title = {{A Semi‐Procedural Convolutional Material Prior}},
  author = {Zhou, Xilong and Hašan, Miloš and Deschaintre, Valentin and Guerrero, Paul and Sunkavalli, Kalyan and Kalantari, Nima Khademi},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14781}
}

@article{10.1111:cgf.14791,
  journal = {Computer Graphics Forum},
  title = {{Two‐Step Training: Adjustable Sketch Colourization via Reference Image and Text Tag}},
  author = {Yan, Dingkun and Ito, Ryogo and Moriai, Ryo and Saito, Suguru},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14791}
}

@article{10.1111:cgf.14803,
  journal = {Computer Graphics Forum},
  title = {{ROI Scissor: Interactive Segmentation of Feature Region of Interest in a Triangular Mesh}},
  author = {Moon, Ji‐Hye and Ha, Yujin and Park, Sanghun and Kim, Myung‐Soo and Yoon, Seung‐Hyun},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14803}
}

@article{10.1111:cgf.14862,
  journal = {Computer Graphics Forum},
  title = {{Accompany Children's Learning for You: An Intelligent Companion Learning System}},
  author = {Qian, Jiankai and Jiang, Xinbo and Ma, Jiayao and Li, Jiachen and Gao, Zhenzhen and Qin, Xueying},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14862}
}

@article{10.1111:cgf.14738,
  journal = {Computer Graphics Forum},
  title = {{State of the Art of Molecular Visualization in Immersive Virtual Environments}},
  author = {Kuťák, David and Vázquez, Pere‐Pau and Isenberg, Tobias and Krone, Michael and Baaden, Marc and Byška, Jan and Kozlíková, Barbora and Miao, Haichao},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14738}
}

@article{10.1111:cgf.14730,
  journal = {Computer Graphics Forum},
  title = {{Evonne: A Visual Tool for Explaining Reasoning with OWL Ontologies and Supporting Interactive Debugging}},
  author = {Méndez, J. and Alrabbaa, C. and Koopmann, P. and Langner, R. and Baader, F. and Dachselt, R.},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14730}
}

@article{10.1111:cgf.14785,
  journal = {Computer Graphics Forum},
  title = {{Visual Parameter Space Exploration in Time and Space}},
  author = {Piccolotto, Nikolaus and Bögl, Markus and Miksch, Silvia},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14785}
}

@article{10.1111:cgf.14789,
  journal = {Computer Graphics Forum},
  title = {{Faster Edge‐Path Bundling through Graph Spanners}},
  author = {Wallinger, Markus and Archambault, Daniel and Auber, David and Nöllenburg, Martin and Peltonen, Jaakko},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14789}
}

@article{10.1111:cgf.14861,
  journal = {Computer Graphics Forum},
  title = {{Triangle Influence Supersets for Fast Distance Computation}},
  author = {Pujol, Eduard and Chica, Antonio},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14861}
}

@article{10.1111:cgf.14922,
  journal = {Computer Graphics Forum},
  title = {{A Characterization of Interactive Visual Data Stories With a Spatio‐Temporal Context}},
  author = {Mayer, Benedikt and Steinhauer, Nastasja and Preim, Bernhard and Meuschke, Monique},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14922}
}

@article{10.1111:cgf.14795,
  journal = {Computer Graphics Forum},
  title = {{Deep Learning for Scene Flow Estimation on Point Clouds: A Survey and Prospective Trends}},
  author = {Li, Zhiqi and Xiang, Nan and Chen, Honghua and Zhang, Jianjun and Yang, Xiaosong},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14795}
}

@article{10.1111:cgf.14794,
  journal = {Computer Graphics Forum},
  title = {{Are We There Yet? A Roadmap of Network Visualization from Surveys to Task Taxonomies}},
  author = {Filipov, Velitchko and Arleo, Alessio and Miksch, Silvia},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14794}
}

@article{10.1111:cgf.14923,
  journal = {Computer Graphics Forum},
  title = {{Smooth Transitions Between Parallel Coordinates and Scatter Plots via Polycurve Star Plots}},
  author = {Kiesel, Dora and Riehmann, Patrick and Froehlich, Bernd},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14923}
}

@article{10.1111:cgf.14799,
  journal = {Computer Graphics Forum},
  title = {{Multilevel Robustness for 2D Vector Field Feature Tracking, Selection and Comparison}},
  author = {Yan, Lin and Ullrich, Paul Aaron and Van Roekel, Luke P. and Wang, Bei and Guo, Hanqi},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14799}
}

@article{10.1111:cgf.14806,
  journal = {Computer Graphics Forum},
  title = {{iFUNDit: Visual Profiling of Fund Investment Styles}},
  author = {Zhang, R. and Ku, B. K. and Wang, Y. and Yue, X. and Liu, S. and Li, K. and Qu, H.},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14806}
}

@article{10.1111:cgf.14790,
  journal = {Computer Graphics Forum},
  title = {{ARAP Revisited Discretizing the Elastic Energy using Intrinsic Voronoi Cells}},
  author = {Finnendahl, Ugo and Schwartz, Matthias and Alexa, Marc},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14790}
}

@article{10.1111:cgf.14801,
  journal = {Computer Graphics Forum},
  title = {{Corrigendum to “Making Procedural Water Waves Boundary‐aware”, “Primal/Dual Descent Methods for Dynamics”, and “Detailed Rigid Body Simulation with Extended Position Based Dynamics”}},
  author = {},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14801}
}

@article{10.1111:cgf.14570,
  journal = {Computer Graphics Forum},
  title = {{Issue Information}},
  author = {},
  year = {2023},
  publisher = {© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14570}
}


Recent Submissions

  • Item
    Texture Inpainting for Photogrammetric Models
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Maggiordomo, A.; Cignoni, P.; Tarini, M.; Hauser, Helwig and Alliez, Pierre
    We devise a technique designed to remove the texturing artefacts that are typical of 3D models representing real‐world objects, acquired by photogrammetric techniques. Our technique leverages the recent advancements in inpainting of natural colour images, adapting them to the specific context. A neural network, modified and trained for our purposes, replaces the texture areas containing the defects, substituting them with new plausible patches of texels, reconstructed from the surrounding surface texture. We train and apply the network model on locally reparametrized texture patches, so to provide an input that simplifies the learning process, because it avoids any texture seams, unused texture areas, background, depth jumps and so on. We automatically extract appropriate training data from real‐world datasets. We show two applications of the resulting method: one, as a fully automatic tool, addressing all problems that can be detected by analysing the UV‐map of the input model; and another, as an interactive semi‐automatic tool, presented to the user as a 3D ‘fixing’ brush that has the effect of removing artefacts from any zone the users paints on. We demonstrate our method on a variety of real‐world inputs and provide a reference usable implementation.
  • Item
    Multi‐agent Path Planning with Heterogenous Interactions in Tight Spaces
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Modi, V.; Chen, Y.; Madan, A.; Sueda, S.; Levin, D. I. W.; Hauser, Helwig and Alliez, Pierre
    By starting with the assumption that motion is fundamentally a decision making problem, we use the world‐line concept from Special Relativity as the inspiration for a novel multi‐agent path planning method. We have identified a particular set of problems that have so far been overlooked by previous works. We present our solution for the global path planning problem for each agent and ensure smooth local collision avoidance for each pair of agents in the scene. We accomplish this by modelling the collision‐free trajectories of the agents through 2D space and time as rods in 3D. We obtain smooth trajectories by solving a non‐linear optimization problem with a quasi‐Newton interior point solver, initializing the solver with a non‐intersecting configuration from a modified Dijkstra's algorithm. This space–time formulation allows us to simulate previously ignored phenomena such as highly heterogeneous interactions in very constrained environments. It also provides a solution for scenes with unnaturally symmetric agent alignments without the need for jittering agent positions or velocities.
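    As a toy illustration of the space–time ("world‐line") idea in this abstract, the following Python snippet (not the authors' solver) treats each agent's trajectory as a polyline in (x, y, t) and smooths straight‐line initial paths with a quasi‐Newton method plus a soft pairwise‐separation penalty; the agent positions, weights and resolution are made‐up values, and the paper's Dijkstra initialization and interior point solver are not reproduced.

      # Illustrative sketch only: space-time trajectories as polylines, smoothed with L-BFGS-B.
      import numpy as np
      from scipy.optimize import minimize

      T = 20                                          # time samples per agent (assumed)
      starts = np.array([[0.0, 0.0], [1.0, 0.0]])
      goals  = np.array([[1.0, 1.0], [0.0, 1.0]])     # agents swap corners, so paths must bend
      r_min  = 0.25                                   # desired minimum separation (assumed)

      def initial_paths():
          # Straight world-lines from start to goal, sampled at T time steps.
          ts = np.linspace(0.0, 1.0, T)[:, None]
          return np.stack([(1 - ts) * s + ts * g for s, g in zip(starts, goals)])  # (A, T, 2)

      def energy(flat):
          p = flat.reshape(len(starts), T, 2)
          smooth = np.sum(np.diff(p, axis=1) ** 2)            # short, smooth trajectories
          sep = 0.0
          for a in range(len(starts)):
              for b in range(a + 1, len(starts)):
                  d = np.linalg.norm(p[a] - p[b], axis=1)     # distance at equal times
                  sep += np.sum(np.maximum(0.0, r_min - d) ** 2)
          ends = np.sum((p[:, 0] - starts) ** 2) + np.sum((p[:, -1] - goals) ** 2)
          return smooth + 100.0 * sep + 1000.0 * ends         # weights are arbitrary

      res = minimize(energy, initial_paths().ravel(), method="L-BFGS-B")
      paths = res.x.reshape(len(starts), T, 2)                # collision-aware world-lines
      print("final energy:", res.fun)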
  • Item
    tachyon: Efficient Shared Memory Parallel Computation of Extremum Graphs
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Ande, Abhijath; Subhash, Varshini; Natarajan, Vijay; Hauser, Helwig and Alliez, Pierre
    The extremum graph is a succinct representation of the Morse decomposition of a scalar field. It has increasingly become a useful data structure that supports topological feature‐directed visualization of 2D/3D scalar fields, and enables dimensionality reduction together with exploratory analysis of high‐dimensional scalar fields. Current methods that employ the extremum graph compute it either using a simple sequential algorithm for computing the Morse decomposition or by computing the more detailed Morse–Smale complex. Both approaches are typically limited to two and three‐dimensional scalar fields. We describe a GPU–CPU hybrid parallel algorithm for computing the extremum graph of scalar fields in all dimensions. The proposed shared memory algorithm utilizes both fine‐grained parallelism and task parallelism to achieve efficiency. An open source software library, tachyon, that implements the algorithm exhibits superior performance and good scaling behaviour.
  • Item
    Line Drawing Vectorization via Coarse‐to‐Fine Curve Network Optimization
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Bao, Bin; Fu, Hongbo; Hauser, Helwig and Alliez, Pierre
    Vectorizing line drawings is a fundamental component of the workflow in various applications such as graphic design and computer animation. A practical vectorization tool is desired to produce high‐quality curves that are faithful to the original inputs and close to the connectivity of human drawings. The existing line vectorization approaches either suffer from low geometry accuracy or incorrect connectivity for noisy inputs or detailed complex drawings. We propose a novel line drawing vectorization framework based on coarse‐to‐fine curve network optimization. Our technique starts with an initial curve network generated by an existing tracing method. It then performs a global optimization which fits the curve network to image centrelines. Finally, our method performs a finer optimization in local junction regions to achieve better connectivity and curve geometry around junctions. We qualitatively and quantitatively evaluate our system on line drawings with varying image quality and shape complexity, and show that our technique outperforms existing works in terms of curve quality and computational time.
  • Item
    Break and Splice: A Statistical Method for Non‐Rigid Point Cloud Registration
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Gao, Qinghong; Zhao, Yan; Xi, Long; Tang, Wen; Wan, Tao Ruan; Hauser, Helwig and Alliez, Pierre
    3D object matching and registration on point clouds are widely used in computer vision. However, most existing point cloud registration methods have limitations in handling non‐rigid point sets or topology changes (e.g. connections and separations). As a result, critical characteristics such as large inter‐frame motions of the point clouds may not be accurately captured. This paper proposes a statistical algorithm for non‐rigid point set registration, addressing the challenge of handling topology changes without the need to estimate correspondence. The algorithm uses a novel break‐and‐splice framework to treat the non‐rigid registration challenges as a reproduction process and a Dirichlet Process Gaussian Mixture Model (DPGMM) to cluster a pair of point sets. Labels are assigned to the source point set with an iterative classification procedure, and the source is registered to the target with the same labels using the Bayesian Coherent Point Drift (BCPD) method. The results demonstrate that the proposed approach achieves lower registration errors and efficiently registers point sets undergoing topology changes and large inter‐frame motions. The proposed approach is evaluated on several data sets using various qualitative and quantitative metrics. The results demonstrate that the break‐and‐splice framework outperforms state‐of‐the‐art methods, achieving an average error reduction of about 60% and a registration time reduction of about 57.8%.
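    The clustering stage described above can be approximated with an off‐the‐shelf Dirichlet Process Gaussian Mixture, as in the following hedged sketch; the synthetic points and parameters are assumptions, and the per‐cluster BCPD registration itself is only indicated by a placeholder comment.

      # Illustrative sketch of DPGMM clustering of a source/target point-set pair.
      import numpy as np
      from sklearn.mixture import BayesianGaussianMixture

      rng = np.random.default_rng(0)
      source = np.concatenate([rng.normal([0, 0, 0], 0.1, (200, 3)),
                               rng.normal([1, 0, 0], 0.1, (200, 3))])
      target = source + np.array([0.05, 0.02, 0.0])      # small synthetic deformation

      # A Dirichlet-process prior with a generous component bound switches off unused clusters.
      dpgmm = BayesianGaussianMixture(
          n_components=10,
          weight_concentration_prior_type="dirichlet_process",
          covariance_type="full",
          random_state=0,
      ).fit(target)

      source_labels = dpgmm.predict(source)              # assign source points to target clusters
      target_labels = dpgmm.predict(target)

      for k in np.unique(target_labels):
          src_k = source[source_labels == k]
          tgt_k = target[target_labels == k]
          # ...each (src_k, tgt_k) pair would then be registered, e.g. with BCPD (not shown)...
          print(f"cluster {k}: {len(src_k)} source pts -> {len(tgt_k)} target pts")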
  • Item
    Feature Representation for High‐resolution Clothed Human Reconstruction
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Pu, Juncheng; Liu, Li; Fu, Xiaodong; Su, Zhuo; Liu, Lijun; Peng, Wei; Hauser, Helwig and Alliez, Pierre
    Detailed and accurate feature representation is essential for high‐resolution reconstruction of clothed humans. Herein we introduce a unified feature representation for clothed human reconstruction, which can adapt to changeable posture and various clothing details. The whole method can be divided into two parts: the human shape feature representation and the details feature representation. Specifically, we first combine the voxel feature learned from semantic voxel with the pixel feature from the input image as an implicit representation for human shape. Then, the details feature mixed with the clothed layer feature and the normal feature is used to guide the multi‐layer perceptron to capture geometric surface details. The key difference from existing methods is that we use the clothing semantics to infer clothed layer information, and further restore the layer details with geometric height. Qualitative and quantitative experimental results demonstrate that the proposed method outperforms existing methods in terms of handling limb swing and clothing details. Our method provides a new solution for clothed human reconstruction with high‐resolution details (style, wrinkles and clothed layers), and has good potential in three‐dimensional virtual try‐on and digital characters.
  • Item
    3D Generative Model Latent Disentanglement via Local Eigenprojection
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Foti, Simone; Koo, Bongjin; Stoyanov, Danail; Clarkson, Matthew J.; Hauser, Helwig and Alliez, Pierre
    Designing realistic digital humans is extremely complex. Most data‐driven generative models used to simplify the creation of their underlying geometric shape do not offer control over the generation of local shape attributes. In this paper, we overcome this limitation by introducing a novel loss function grounded in spectral geometry and applicable to different neural‐network‐based generative models of 3D head and body meshes. Encouraging the latent variables of mesh variational autoencoders (VAEs) or generative adversarial networks (GANs) to follow the local eigenprojections of identity attributes, we improve latent disentanglement and properly decouple the attribute creation. Experimental results show that our local eigenprojection disentangled (LED) models not only offer improved disentanglement with respect to the state‐of‐the‐art, but also maintain good generation capabilities with training times comparable to the vanilla implementations of the models. Our code and pre‐trained models are available at .
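    The following minimal sketch (an illustration only, not the authors' training code) shows one way to compute a "local eigenprojection": per‐vertex offsets of a small mesh region are projected onto the low‐frequency eigenvectors of that region's graph Laplacian; the tiny toy region and the offsets are assumed values.

      # Illustrative sketch: project local vertex offsets onto a region's Laplacian eigenbasis.
      import numpy as np

      adj = np.ones((4, 4)) - np.eye(4)          # toy region: 4 fully connected vertices
      L = np.diag(adj.sum(axis=1)) - adj         # graph Laplacian of the local region
      vals, vecs = np.linalg.eigh(L)             # eigenvalues ascending; first is the constant mode
      k = 2
      basis = vecs[:, 1:k + 1]                   # low-frequency, non-constant eigenvectors

      offsets = np.array([[0.01, 0.00, 0.00],    # per-vertex displacement of the region (assumed)
                          [0.02, 0.00, 0.00],
                          [0.00, 0.01, 0.00],
                          [0.00, 0.00, 0.02]])

      eigenproj = basis.T @ offsets              # local eigenprojection coefficients, shape (k, 3)
      print(eigenproj)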
  • Item
    Exploration of Player Behaviours from Broadcast Badminton Videos
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Chen, Wei‐Ting; Wu, Hsiang‐Yun; Shih, Yun‐An; Wang, Chih‐Chuan; Wang, Yu‐Shuen; Hauser, Helwig and Alliez, Pierre
    Understanding an opposing player's behaviours and weaknesses is often the key to winning a badminton game. This study presents a system to extract game data from broadcast badminton videos, and visualize the extracted data to help coaches and players develop effective tactics. Specifically, we apply state‐of‐the‐art machine learning methods to partition a broadcast video into segments, in which each video segment shows a badminton rally. Next, we detect players' feet in each video frame and transform the player positions into the court coordinate system. Finally, we detect hit frames in each rally, in which the shuttle starts moving in the opposite direction. By visualizing the extracted data, our system conveys when and where players hit the shuttle in historical games. Since players tend to smash or drop shuttles at a specific location, we provide users with interactive tools to filter data and focus on the distributions conditioned on player positions. This strategy also reduces visual clutter. In addition, our system plots the shuttle hitting distributions side‐by‐side, enabling visual comparison and analysis of player behaviours under different conditions. The results and the use cases demonstrate the feasibility of our system.
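    The "transform the player positions into the court coordinate system" step can be illustrated with a standard homography, as in the following sketch; the pixel corner coordinates and foot detections are made‐up values, and the paper's detection models are not reproduced.

      # Illustrative sketch: map detected foot pixels to metric court coordinates via a homography.
      import numpy as np
      import cv2

      # A badminton court is 13.4 m x 6.1 m; corners given in (x, y) metres.
      court = np.float32([[0, 0], [6.1, 0], [6.1, 13.4], [0, 13.4]])
      # The same four corners as they appear in one video frame (pixels, assumed values).
      pixels = np.float32([[412, 215], [868, 215], [1010, 655], [268, 655]])

      H = cv2.getPerspectiveTransform(pixels, court)        # 3x3 homography

      # Detected foot positions in this frame (pixels, assumed values), shaped (N, 1, 2) for OpenCV.
      feet_px = np.float32([[[540, 430]], [[760, 610]]])
      feet_court = cv2.perspectiveTransform(feet_px, H).reshape(-1, 2)
      print(feet_court)                                     # player positions in court metres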
  • Item
    Visually Abstracting Event Sequences as Double Trees Enriched with Category‐Based Comparison
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Krause, Cedric; Agarwal, Shivam; Burch, Michael; Beck, Fabian; Hauser, Helwig and Alliez, Pierre
    Event sequence visualization aids analysts in many domains to better understand and infer new insights from event data. Analysing behaviour before or after a certain event of interest is a common task in many scenarios. In this paper, we introduce, formally define, and position the double tree as a domain‐agnostic tree visualization approach for this task. The visualization shows the sequences that led to the event of interest as a tree on the left, and those that followed on the right. Moreover, our approach enables users to create selections based on event attributes to interactively compare the events and sequences along colour‐coded categories. We integrate the double tree and category‐based comparison into a user interface for event sequence analysis. In three application examples, we show a diverse set of scenarios, covering short and long time spans, non‐spatial and spatial events, human and artificial actors, to demonstrate the general applicability of the approach.
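    A minimal sketch of the double‐tree abstraction follows: events before the event of interest are inserted (reversed) into a left tree and events after it into a right tree, with counts on the nodes; the event log is a made‐up example, not data from the paper.

      # Illustrative sketch: build the left/right trees of a double tree from event sequences.
      from collections import defaultdict

      def make_node():
          return {"count": 0, "children": defaultdict(make_node)}

      def insert(tree, events):
          node = tree
          for e in events:
              node = node["children"][e]
              node["count"] += 1

      sequences = [
          ["login", "search", "add_to_cart", "checkout", "logout"],
          ["login", "browse", "add_to_cart", "checkout", "pay"],
          ["search", "add_to_cart", "remove", "logout"],
      ]
      anchor = "add_to_cart"                      # the event of interest

      left, right = make_node(), make_node()
      for seq in sequences:
          if anchor in seq:
              i = seq.index(anchor)
              insert(left, reversed(seq[:i]))     # what led to the anchor (read right-to-left)
              insert(right, seq[i + 1:])          # what followed the anchor

      def show(node, depth=0):
          for e, child in node["children"].items():
              print("  " * depth + f"{e} ({child['count']})")
              show(child, depth + 1)

      print("before ->")
      show(left)
      print("-> after")
      show(right)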
  • Item
    Adversarial Interactive Cartoon Sketch Colourization with Texture Constraint and Auxiliary Auto‐Encoder
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Liu, Xiaoyu; Zhu, Shaoqiang; Zeng, Yao; Zhang, Junsong; Hauser, Helwig and Alliez, Pierre
    Colouring cartoon sketches can help children develop their intellect and inspire their artistic creativity. Unlike photo colourization or anime line art colourization, cartoon sketch colourization is challenging due to the scarcity of texture information and the irregularity of the line structure, which is mainly reflected in the phenomenon of colour‐bleeding artifacts in generated images. We propose a colourization approach for cartoon sketches, which takes both sketches and colour hints as inputs to produce impressive images. To solve the problem of colour‐bleeding artifacts, we propose a multi‐discriminator colourization framework that introduces a texture discriminator in the conditional generative adversarial network (cGAN). Then we combined this framework with a pre‐trained auxiliary auto‐encoder, where an auxiliary feature loss is designed to further improve colour quality, and a condition input is introduced to increase the generalization ability over hand‐drawn sketches. We present both quantitative and qualitative evaluations, which prove the effectiveness of our proposed method. We test our method on sketches of varying complexity and structure, then build an interactive programme based on our model for user study. Experimental results demonstrate that the method generates natural and consistent colour images in real time from sketches drawn by non‐professionals.
  • Item
    Immersive Free‐Viewpoint Panorama Rendering from Omnidirectional Stereo Video
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Mühlhausen, Moritz; Kappel, Moritz; Kassubeck, Marc; Wöhler, Leslie; Grogorick, Steve; Castillo, Susana; Eisemann, Martin; Magnor, Marcus; Hauser, Helwig and Alliez, Pierre
    In this paper, we tackle the challenging problem of rendering real‐world 360° panorama videos that support full 6 degrees‐of‐freedom (DoF) head motion from a prerecorded omnidirectional stereo (ODS) video. In contrast to recent approaches that create novel views for individual panorama frames, we introduce a video‐specific temporally‐consistent multi‐sphere image (MSI) scene representation. Given a conventional ODS video, we first extract information by estimating framewise descriptive feature maps. Then, we optimize the global MSI model using theory from recent research on neural radiance fields. Instead of a continuous scene function, this multi‐sphere image (MSI) representation depicts colour and density information only for a discrete set of concentric spheres. To further improve the temporal consistency of our results, we apply an ancillary refinement step which optimizes the temporal coherency between successive video frames. Direct comparisons to recent baseline approaches show that our global MSI optimization yields superior performance in terms of visual quality. Our code and data will be made publicly available.
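    The multi‐sphere image idea can be illustrated by compositing per‐layer colour and alpha front to back along a viewing ray, as in the following sketch; the layer values are invented, and a real renderer would sample them per ray from the optimized concentric spheres.

      # Illustrative sketch: front-to-back "over" compositing of MSI layers along one ray.
      import numpy as np

      def composite_front_to_back(colors, alphas):
          """colors: (L, 3), alphas: (L,), ordered nearest sphere first."""
          out = np.zeros(3)
          transmittance = 1.0
          for c, a in zip(colors, alphas):
              out += transmittance * a * c        # contribution of this shell
              transmittance *= (1.0 - a)          # light that continues to farther shells
          return out, transmittance

      layer_colors = np.array([[0.9, 0.2, 0.2],   # nearest concentric sphere
                               [0.2, 0.8, 0.2],
                               [0.1, 0.1, 0.9]])  # farthest sphere
      layer_alphas = np.array([0.3, 0.5, 1.0])    # derived from density in a real MSI

      pixel, remaining = composite_front_to_back(layer_colors, layer_alphas)
      print(pixel, remaining)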
  • Item
    Efficient Hardware Acceleration of Robust Volumetric Light Transport Simulation
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Moonen, Nol; Jalba, Andrei C.; Hauser, Helwig and Alliez, Pierre
    Efficiently simulating the full range of light effects in arbitrary input scenes that contain participating media is a difficult task. Unified points, beams and paths (UPBP) is an algorithm capable of capturing a wide range of media effects, by combining bidirectional path tracing (BPT) and photon density estimation (PDE) with multiple importance sampling (MIS). A computationally expensive task of UPBP is the MIS weight computation, performed each time a light path is formed. We derive an efficient algorithm to compute the MIS weights for UPBP, which improves over previous work by eliminating the need to iterate over the path vertices. We achieve this by maintaining recursive quantities as subpaths are generated, from which the subpath weights can be computed. In this way, the full path weight can be computed by only using the data cached at the two vertices at the ends of the subpaths. Furthermore, a costly part of PDE is the search for nearby photon points and beams. Previous work has shown that a spatial data structure for photon mapping can be implemented using the hardware‐accelerated bounding volume hierarchy of NVIDIA's RTX GPUs. We show that the same technique can be applied to different types of volumetric PDE and compare the performance of these data structures with the state of the art. Finally, using our new algorithm and data structures we fully implement UPBP on the GPU, which, to the best of our knowledge, we are the first to do.
  • Item
    Garment Model Extraction from Clothed Mannequin Scan
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Gao, Qiqi; Taketomi, Takafumi; Hauser, Helwig and Alliez, Pierre
    Modelling garments with rich details requires enormous time and expertise from artists. Recent works reconstruct garments through segmentation of clothed human scans. However, existing methods rely on certain human body templates and do not perform as well on loose garments such as skirts. This paper presents a two‐stage pipeline for extracting high‐fidelity garments from static scan data of clothed mannequins. Our key contribution is a novel method for tracking both tight and loose boundaries between garments and mannequin skin. Our algorithm enables the modelling of off‐the‐shelf clothing with fine details. It is independent of human template models and requires only minimal mannequin priors. The effectiveness of our method is validated through quantitative and qualitative comparison with the baseline method. The results demonstrate that our method can accurately extract both tight and loose garments within a reasonable time.
  • Item
    A Survey of Personalized Interior Design
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Wang, Y.T.; Liang, C.; Huai, N.; Chen, J.; Zhang, C.J.; Hauser, Helwig and Alliez, Pierre
    Interior design is the core step of interior decoration, and it determines the overall layout and style of furniture. Traditional interior design is usually laborious and time‐consuming work carried out by professional designers and cannot always meet clients' personalized requirements. With the development of computer graphics, computer vision and machine learning, computer scientists have carried out much fruitful research work in computer‐aided personalized interior design (PID). In general, personalization research in interior design mainly focuses on furniture selection and floor plan preparation. In terms of the former, personalized furniture selection is achieved by selecting furniture that matches the resident's preference and style, while the latter allows the resident to personalize their floor plan design and planning. Finally, the automatic furniture layout task generates a stylistically matched and functionally complete furniture layout result based on the selected furniture and prepared floor plan. Therefore, the main challenge for PID is meeting residents' personalized requirements in terms of both furniture and floor plans. This paper answers the above question by reviewing recent progress in five separate but correlated areas, including furniture style analysis, furniture compatibility prediction, floor plan design, floor plan analysis and automatic furniture layout. For each topic, we review representative methods and compare and discuss their strengths and shortcomings. In addition, we collect and summarize public datasets related to PID and finally discuss its future research directions.
  • Item
    Model‐based Crowd Behaviours in Human‐solution Space
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Xiang, Wei; Wang, He; Zhang, Yuqing; Yip, Milo K.; Jin, Xiaogang; Hauser, Helwig and Alliez, Pierre
    Realistic crowd simulation has been pursued for decades, but it still necessitates tedious human labour and a lot of trial and error. The majority of currently used crowd modelling is either empirical (model‐based) or data‐driven (model‐free). Model‐based methods cannot fit observed data precisely, whereas model‐free methods are limited by the availability/quality of data and are uninterpretable. In this paper, we aim at taking advantage of both model‐based and data‐driven approaches. In order to accomplish this, we propose a new simulation framework built on a physics‐based model that is designed to be data‐friendly. Both the general prior knowledge about crowds encoded by the physics‐based model and the specific real‐world crowd data at hand jointly influence the system dynamics. With a multi‐granularity physics‐based model, the framework combines microscopic and macroscopic motion control. Each simulation step is formulated as an energy optimization problem, where the minimizer is the desired crowd behaviour. In contrast to traditional optimization‐based methods which seek the theoretical minimizer, we designed an acceleration‐aware data‐driven scheme to compute the minimizer from real‐world data in order to achieve higher realism by parameterizing both velocity and acceleration. Experiments demonstrate that our method can produce crowd animations that are more realistically behaved in a variety of scales and scenarios when compared to the earlier methods.
  • Item
    MesoGAN: Generative Neural Reflectance Shells
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Diolatzis, Stavros; Novak, Jan; Rousselle, Fabrice; Granskog, Jonathan; Aittala, Miika; Ramamoorthi, Ravi; Drettakis, George; Hauser, Helwig and Alliez, Pierre
    We introduce MesoGAN, a model for generative 3D neural textures. This new graphics primitive represents mesoscale appearance by combining the strengths of generative adversarial networks (StyleGAN) and volumetric neural field rendering. The primitive can be applied to surfaces as a neural reflectance shell; a thin volumetric layer above the surface with appearance parameters defined by a neural network. To construct the neural shell, we first generate a 2D feature texture using StyleGAN with carefully randomized Fourier features to support arbitrarily sized textures without repeating artefacts. We augment the 2D feature texture with a learned height feature, which aids the neural field renderer in producing volumetric parameters from the 2D texture. To facilitate filtering, and to enable end‐to‐end training within memory constraints of current hardware, we utilize a hierarchical texturing approach and train our model on multi‐scale synthetic datasets of 3D mesoscale structures. We propose one possible approach for conditioning MesoGAN on artistic parameters (e.g. fibre length, density of strands, lighting direction) and demonstrate and discuss integration into physically based renderers.
  • Item
    It's about Time: Analytical Time Periodization
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Andrienko, Natalia; Andrienko, Gennady; Hauser, Helwig and Alliez, Pierre
    This paper presents a novel approach to the problem of time periodization, which involves dividing the time span of a complex dynamic phenomenon into periods that enclose different relatively stable states or development trends. The challenge lies in finding such a division of the time that takes into account diverse behaviours of multiple components of the phenomenon while being simple and easy to interpret. Despite the importance of this problem, it has not received sufficient attention in the fields of visual analytics and data science. We use a real‐world example from aviation and an additional usage scenario on analysing mobility trends during the COVID‐19 pandemic to develop and test an analytical workflow that combines computational and interactive visual techniques. We highlight the differences between the two cases and show how they affect the use of different techniques. Through our investigation of possible variations in the time periodization problem, we discuss the potential of our approach to be used in various applications. Our contributions include defining and investigating an earlier neglected problem type, developing a practical and reproducible approach to solving problems of this type, and uncovering potential for formalization and development of computational methods.
  • Item
    EvIcon: Designing High‐Usability Icon with Human‐in‐the‐loop Exploration and IconCLIP
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Shen, I‐Chao; Cherng, Fu‐Yin; Igarashi, Takeo; Lin, Wen‐Chieh; Chen, Bing‐Yu; Hauser, Helwig and Alliez, Pierre
    Interface icons are prevalent in various digital applications. Due to limited time and budgets, many designers rely on informal evaluation, which often results in icons with poor usability. In this paper, we propose a unique human‐in‐the‐loop framework that allows our target users, that is, novice and professional user interface (UI) designers, to improve the usability of interface icons efficiently. We formulate several usability criteria into a perceptual usability function and enable users to iteratively revise an icon set with an interactive design tool, EvIcon. We take a large‐scale pre‐trained joint image‐text embedding (CLIP) and fine‐tune it to embed icon visuals with icon tags in the same embedding space (IconCLIP). During the revision process, our design tool provides two types of instant perceptual usability feedback. First, we provide perceptual usability feedback modelled by deep learning models trained on IconCLIP embeddings and crowdsourced perceptual ratings. Second, we use the embedding space of IconCLIP to assist users in improving icons' visual distinguishability among icons within the user‐prepared icon set. To provide the perceptual prediction, we compiled the first large‐scale dataset of perceptual usability ratings over 10,000 interface icons by conducting a crowdsourcing study. We demonstrated that our framework could benefit the interface icon revision process of UI designers with a wide range of professional experience. Moreover, the interface icons designed using our framework achieved better semantic distance and familiarity, verified by an additional online user study.
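    The distinguishability feedback described above boils down to comparing icons in a shared embedding space; the following sketch flags the most confusable pair via cosine similarity, with random vectors standing in for real IconCLIP embeddings.

      # Illustrative sketch: find the most confusable icon pair from (placeholder) embeddings.
      import numpy as np

      rng = np.random.default_rng(1)
      names = ["save", "download", "upload", "share"]
      emb = rng.normal(size=(len(names), 512))            # placeholder icon embeddings
      emb /= np.linalg.norm(emb, axis=1, keepdims=True)   # L2-normalize

      sim = emb @ emb.T                                   # cosine similarity matrix
      np.fill_diagonal(sim, -1.0)                         # ignore self-similarity

      i, j = np.unravel_index(np.argmax(sim), sim.shape)
      print(f"most confusable pair: {names[i]} / {names[j]} (cos = {sim[i, j]:.2f})")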
  • Item
    Distributed Poisson Surface Reconstruction
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Kazhdan, M.; Hoppe, H.; Hauser, Helwig and Alliez, Pierre
    Screened Poisson surface reconstruction robustly creates meshes from oriented point sets. For large datasets, the technique requires hours of computation and significant memory. We present a method to parallelize and distribute this computation over multiple commodity client nodes. The method partitions space on one axis into adaptively sized slabs containing balanced subsets of points. Because the Poisson formulation involves a global system, the challenge is to maintain seamless consistency at the slab boundaries and obtain a reconstruction that is indistinguishable from the serial result. To this end, we express the reconstructed indicator function as a sum of a low‐resolution term computed on a server and high‐resolution terms computed on distributed clients. Using a client–server architecture, we map the computation onto a sequence of serial server tasks and parallel client tasks, separated by synchronization barriers. This architecture also enables low‐memory evaluation on a single computer, albeit without speedup. We demonstrate a 700 million vertex reconstruction of the billion point David statue scan in less than 20 min on a 65‐node cluster with a maximum memory usage of 45 GB/node, or in 14 h on a single node.
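    The slab partition described above can be illustrated by cutting one axis at point‐count quantiles, which yields adaptively sized slabs with balanced subsets of points; the random point cloud and client count in the following sketch are assumptions, and the distributed solver itself is not shown.

      # Illustrative sketch: balanced, adaptively sized slabs along one axis of a point cloud.
      import numpy as np

      rng = np.random.default_rng(0)
      points = rng.normal(size=(100_000, 3)) * [5.0, 1.0, 1.0]   # elongated along x
      n_clients = 8

      x = points[:, 0]
      # Interior slab boundaries at equal point-count quantiles of the x coordinate.
      cuts = np.quantile(x, np.linspace(0, 1, n_clients + 1)[1:-1])
      slab_id = np.searchsorted(cuts, x)                         # 0..n_clients-1 per point

      for s in range(n_clients):
          slab = points[slab_id == s]
          print(f"slab {s}: {len(slab):6d} points, x in [{slab[:, 0].min():+.2f}, {slab[:, 0].max():+.2f}]")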
  • Item
    Harmonized Portrait‐Background Image Composition
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Wang, Yijiang; Li, Yuqi; Wang, Chong; Ye, Xulun; Hauser, Helwig and Alliez, Pierre
    Portrait‐background image composition is a widely used operation in selfie editing, video meeting, and other portrait applications. To guarantee the realism of the composited images, the appearance of the foreground portraits needs to be adjusted to fit the new background images. Existing image harmonization approaches are proposed to handle general foreground objects, thus lack the special ability to adjust portrait foregrounds. In this paper, we present a novel end‐to‐end network architecture to learn both the content features and style features for portrait‐background composition. The method adjusts the appearance of portraits to make them compatible with backgrounds, while the generation of the composited images satisfies the prior of a style‐based generator. We also propose a pipeline to generate high‐quality and high‐variety synthesized image datasets for training and evaluation. The proposed method outperforms other state‐of‐the‐art methods both on the synthesized dataset and the real composited images and shows robust performance in video applications.
  • Item
    Recurrent Motion Refiner for Locomotion Stitching
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Kim, Haemin; Cho, Kyungmin; Hong, Seokhyeon; Noh, Junyong; Hauser, Helwig and Alliez, Pierre
    Stitching different character motions is one of the most commonly used techniques as it allows the user to make new animations that fit one's purpose from pieces of motion. However, current motion stitching methods often produce unnatural motion with foot sliding artefacts, depending on the performance of the interpolation. In this paper, we propose a novel motion stitching technique based on a recurrent motion refiner (RMR) that connects discontinuous locomotions into a single natural locomotion. Our model receives different locomotions as input, in which the root of the last pose of the previous motion and that of the first pose of the next motion are aligned. During runtime, the model slides through the sequence, editing frames window by window to output a smoothly connected animation. Our model consists of a two‐layer recurrent network that comes between a simple encoder and decoder. To train this network, we created a sufficient amount of paired data with a newly designed data generation process. This process employs a K‐nearest neighbour search that explores a predefined motion database to create the corresponding input to the ground truth. Once trained, the suggested model can connect locomotion sequences of various lengths into a single natural locomotion.
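    The window‐by‐window refinement pass can be sketched as follows, with the learned recurrent refiner replaced by a placeholder moving‐average smoother; only the sliding‐window mechanics are illustrated, and the clip data, window size and stride are assumed values.

      # Illustrative sketch of sliding-window refinement across a stitched motion sequence.
      import numpy as np

      def refine_window(window):
          # Stand-in for the paper's encoder -> two-layer RNN -> decoder refiner.
          kernel = np.array([0.25, 0.5, 0.25])
          padded = np.pad(window, ((1, 1), (0, 0)), mode="edge")
          return np.stack([np.convolve(padded[:, d], kernel, mode="valid")
                           for d in range(window.shape[1])], axis=1)

      def stitch(prev_motion, next_motion, window=16, stride=8):
          # Concatenate the (already root-aligned) clips and refine across the seam.
          motion = np.concatenate([prev_motion, next_motion], axis=0)
          for start in range(0, len(motion) - window + 1, stride):
              motion[start:start + window] = refine_window(motion[start:start + window])
          return motion

      prev_clip = np.random.rand(40, 6)      # (frames, pose features) -- made-up locomotion
      next_clip = np.random.rand(40, 6)
      result = stitch(prev_clip, next_clip)
      print(result.shape)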
  • Item
    Episodes and Topics in Multivariate Temporal Data
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Andrienko, Natalia; Andrienko, Gennady; Shirato, Gota; Hauser, Helwig and Alliez, Pierre
    The term ‘episode’ refers to a time interval in the development of a dynamic process or behaviour of an entity. Episode‐based data consist of a set of episodes that are described using time series of multiple attribute values. Our research problem involves analysing episode‐based data in order to understand the distribution of multi‐attribute dynamic characteristics across a set of episodes. To solve this problem, we applied an existing theoretical model and developed a general approach that involves incrementally increasing data abstraction. We instantiated this general approach in an analysis procedure in which the value variation of each attribute within an episode is represented by a combination of symbols treated as a ‘word’. The variation of multiple attributes is thus represented by a combination of ‘words’ treated as a ‘text’. In this way, the set of episodes is transformed into a collection of text documents. Topic modelling techniques applied to this collection find groups of related (i.e. repeatedly co‐occurring) ‘words’, which are called ‘topics’. Given that the ‘words’ encode variation patterns of individual attributes, the ‘topics’ represent patterns of joint variation of multiple attributes. In the following steps, analysts interpret the topics and examine their distribution across all episodes using interactive visualizations. We test the effectiveness of the procedure by applying it to two types of episode‐based data with distinct properties and introduce a range of generic and data type‐specific visualization techniques that can support the interpretation and exploration of topic distribution.
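    The abstraction chain described above ('words' per attribute, 'texts' per episode, topics across episodes) can be sketched with a toy encoding rule and an off‐the‐shelf topic model; the episodes, the trend‐sign encoding and the number of topics below are illustrative assumptions, not the paper's actual procedure.

      # Illustrative sketch: attribute variations -> symbolic 'words' -> episode 'texts' -> topics.
      import numpy as np
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation

      def encode(attr, series):
          trend = series[-1] - series[0]
          label = "up" if trend > 0.1 else ("down" if trend < -0.1 else "flat")
          return f"{attr}_{label}"                    # one symbolic 'word' per attribute

      episodes = [
          {"speed": [1, 2, 3], "altitude": [5, 5, 5], "fuel": [9, 8, 7]},
          {"speed": [3, 2, 1], "altitude": [5, 6, 7], "fuel": [9, 9, 9]},
          {"speed": [1, 2, 3], "altitude": [5, 5, 5], "fuel": [9, 8, 6]},
      ]
      texts = [" ".join(encode(a, np.array(s, float)) for a, s in ep.items()) for ep in episodes]

      vec = CountVectorizer(token_pattern=r"\S+")
      counts = vec.fit_transform(texts)               # episode-by-'word' matrix
      lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

      vocab = vec.get_feature_names_out()
      for t, dist in enumerate(lda.components_):
          top = vocab[np.argsort(dist)[::-1][:3]]
          print(f"topic {t}: {', '.join(top)}")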
  • Item
    OaIF: Occlusion‐Aware Implicit Function for Clothed Human Re‐construction
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Tan, Yudi; Guan, Boliang; Zhou, Fan; Su, Zhuo; Hauser, Helwig and Alliez, Pierre
    Clothed human re‐construction from a monocular image is challenging due to occlusion, depth ambiguity and variations of body poses. Recently, shape representation based on an implicit function has proven better able to handle the complex topology of clothed humans than explicit representations such as meshes and voxels. This is mainly achieved by using pixel‐aligned features, which enable the implicit function to capture local details. However, such methods use an identical feature map for all sampled points to obtain local features, making the models occlusion‐agnostic in the encoding stage. The decoder, as the implicit function, only maps features and does not explicitly take occlusion into account. Thus, these methods fail to generalize well to poses with severe self‐occlusion. To address this, we present OaIF, which encodes local features conditioned on the visibility of SMPL vertices. OaIF projects SMPL vertices onto the image plane to obtain image features masked by visibility. Vertex features integrated with the geometric information of the mesh are then fed into a GAT network for joint encoding. We query hybrid features and occlusion factors for points through cross attention and learn occupancy fields for clothed humans. The experiments demonstrate that OaIF achieves more robust and accurate re‐construction than the state of the art on both public datasets and wild images.
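    A minimal sketch of the visibility-conditioning idea (not the authors' code): project SMPL vertices into the image and mark a vertex visible when it is not occluded according to a rendered depth map. The pinhole camera model, camera-space input and depth tolerance are assumptions.

```python
# Hedged sketch: per-vertex visibility from a depth test after pinhole projection.
import numpy as np

def project_and_mask(vertices, K, depth_map, tol=1e-2):
    """vertices: (N, 3) in camera space; K: (3, 3) intrinsics; depth_map: (H, W)."""
    uvw = (K @ vertices.T).T                           # homogeneous image coordinates
    uv = uvw[:, :2] / uvw[:, 2:3]                      # pixel coordinates
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, depth_map.shape[1] - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, depth_map.shape[0] - 1)
    visible = vertices[:, 2] <= depth_map[v, u] + tol  # not behind the visible surface
    return uv, visible

# Image features sampled at `uv` can then be masked by `visible` before the
# per-vertex features are passed on to a graph network.
```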
  • Item
    Reference‐based Screentone Transfer via Pattern Correspondence and Regularization
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Li, Zhansheng; Zhao, Nanxuan; Wu, Zongwei; Dai, Yihua; Wang, Junle; Jing, Yanqing; He, Shengfeng; Hauser, Helwig and Alliez, Pierre
    Adding screentone to initial line drawings is a crucial step in manga generation, but it is a tedious and labour‐intensive task. In this work, we propose a novel data‐driven method that transfers the screentone pattern from a reference manga image. This not only ensures quality but also adds controllability to the generated manga results. The reference‐based screentone translation task imposes several unique challenges. Since a manga image, as an abstract art form, often contains multiple screentone patterns interwoven with the line drawing, extracting a disentangled style code from the reference is difficult. Finding a correspondence for the mapping between the reference and an input line drawing without any screentone is also hard. Moreover, as screentone contains many subtle details, guaranteeing style consistency with the reference remains challenging. To suit our purpose and resolve the above difficulties, we propose a novel Reference‐based Screentone Transfer Network (RSTN). We encode the screentone style through a 1D stylegram. A patch correspondence loss is designed to build a similarity mapping function for guiding the translation. To mitigate generated artefacts, a pattern regularization loss is introduced at the patch level. Through extensive experiments and a user study, we demonstrate the effectiveness of the proposed model.
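    In the spirit of the patch correspondence loss mentioned above (the details below are assumptions, not the paper's definition), one can match feature "patches" of the generated result to their most similar reference patches by cosine similarity and penalize weak matches:

```python
# Hedged sketch of a patch correspondence term: each generated feature column is
# matched to its best reference column by cosine similarity.
import torch
import torch.nn.functional as F

def patch_correspondence_loss(gen_feat, ref_feat):
    """gen_feat, ref_feat: (B, C, H, W) feature maps; 'patches' are 1x1 feature columns."""
    g = F.normalize(gen_feat.flatten(2), dim=1)       # (B, C, HW), unit-norm channels
    r = F.normalize(ref_feat.flatten(2), dim=1)
    sim = torch.bmm(g.transpose(1, 2), r)             # (B, HW, HW) cosine similarities
    best = sim.max(dim=2).values                      # best reference match per patch
    return (1.0 - best).mean()                        # encourage close matches
```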
  • Item
    Numerical Coarsening with Neural Shape Functions
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Ni, Ning; Xu, Qingyu; Li, Zhehao; Fu, Xiao‐Ming; Liu, Ligang; Hauser, Helwig and Alliez, Pierre
    We propose to use nonlinear shape functions represented as neural networks in numerical coarsening to achieve generalization capability as well as good accuracy. To overcome the challenge of generalizing to different simulation scenarios, especially nonlinear materials under large deformations, our key idea is to replace the linear mapping between coarse and fine meshes adopted in previous works with a nonlinear one represented by neural networks. However, directly applying an end‐to‐end neural representation leads to poor performance due to an overly large parameter space and a failure to capture some intrinsic geometric properties of shape functions. Our solution is to embed geometric constraints as prior knowledge in learning, which greatly improves training efficiency and inference robustness. With the trained neural shape functions, we can easily adopt numerical coarsening in the simulation of various hyperelastic models without any additional preprocessing step. The experimental results demonstrate the efficiency and generalization capability of our method over previous works.
  • Item
    A Semi‐Procedural Convolutional Material Prior
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Zhou, Xilong; Hašan, Miloš; Deschaintre, Valentin; Guerrero, Paul; Sunkavalli, Kalyan; Kalantari, Nima Khademi; Hauser, Helwig and Alliez, Pierre
    Lightweight material capture methods require a material prior, defining the subspace of plausible textures within the large space of unconstrained texel grids. Previous work has either used deep neural networks (trained on large synthetic material datasets) or procedural node graphs (constructed by expert artists) as such priors. In this paper, we propose a semi‐procedural differentiable material prior that represents materials as a set of (typically procedural) grayscale noises and patterns that are processed by a sequence of lightweight learnable convolutional filter operations. We demonstrate that the restricted structure of this architecture acts as an inductive bias on the space of material appearances, allowing us to optimize the weights of the convolutions per‐material, with no need for pre‐training on a large dataset. Combined with a differentiable rendering step and a perceptual loss, we enable single‐image tileable material capture comparable with the state of the art. Our approach does not target the pixel‐perfect recovery of the material, but rather uses noises and patterns as input to match the target appearance. To achieve this, it does not require complex procedural graphs, and has a much lower complexity, computational cost and storage cost. We also enable control over the results, through changing the provided patterns and using guide maps to push the material properties towards a user‐driven objective.
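    A hedged sketch of the per-material optimization loop described above: learnable convolutions map procedural noises to material maps and are fitted to a single target without pre-training. A plain L1 image loss stands in for the differentiable rendering and perceptual loss, and the channel counts and filter sizes are assumptions.

```python
# Illustrative only: fit a small stack of convolutions, applied to procedural
# noise inputs, to a single target image (stand-in for render + perceptual loss).
import torch
import torch.nn as nn

noises = torch.rand(1, 4, 256, 256)          # grayscale noises and patterns (assumed)
target = torch.rand(1, 3, 256, 256)          # target appearance (stand-in data)

prior = nn.Sequential(                       # lightweight learnable convolutional filters
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
)

opt = torch.optim.Adam(prior.parameters(), lr=1e-3)
for step in range(200):                      # per-material optimization, no pre-training
    opt.zero_grad()
    maps = prior(noises)
    loss = (maps - target).abs().mean()      # L1 stand-in for the full objective
    loss.backward()
    opt.step()
```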
  • Item
    Two‐Step Training: Adjustable Sketch Colourization via Reference Image and Text Tag
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Yan, Dingkun; Ito, Ryogo; Moriai, Ryo; Saito, Suguru; Hauser, Helwig and Alliez, Pierre
    Automatic sketch colourization is a topic of high interest in the image‐generation field. However, due to the absence of texture in sketch images and the lack of training data, existing reference‐based methods are ineffective at generating visually pleasing results and cannot edit the colours using text tags. This paper therefore presents a conditional generative adversarial network (cGAN)‐based architecture with a pre‐trained convolutional neural network (CNN), reference‐based channel‐wise attention (RBCA) and a self‐adaptive multi‐layer perceptron (MLP) to tackle this problem. We propose two‐step training and spatial latent manipulation to achieve high‐quality and colour‐adjustable results using reference images and text tags. The superiority of our approach in reference‐based colourization is demonstrated through qualitative/quantitative comparisons and user studies with existing network‐based methods. We also validate the controllability of the proposed model and discuss the details of our latent manipulation on the basis of experimental results of multi‐label manipulation.
  • Item
    ROI Scissor: Interactive Segmentation of Feature Region of Interest in a Triangular Mesh
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Moon, Ji‐Hye; Ha, Yujin; Park, Sanghun; Kim, Myung‐Soo; Yoon, Seung‐Hyun; Hauser, Helwig and Alliez, Pierre
    We present a simple and effective method for the interactive segmentation of feature regions in a triangular mesh. From the user‐specified radius and click position, the candidate region that contains the desired feature region is defined as a geodesic disc on the mesh. A concavity‐aware harmonic field is then computed on the candidate region using appropriate boundary constraints. An initial isoline is chosen by evaluating uniformly sampled isolines of the harmonic field according to their gradient magnitude. A set of feature points on the initial isoline is selected, and the anisotropic geodesics passing through them are determined as the final segmentation boundary, which is smooth and locally shortest. The experimental results show several segmentations of various 3D models, demonstrating the effectiveness of the proposed method.
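    For illustration, the core of the harmonic-field step can be sketched as a Laplace solve with Dirichlet boundary values. The sketch below uses a uniform graph Laplacian rather than the paper's concavity-aware weights, so it is an assumption-laden stand-in rather than the actual method.

```python
# Hedged sketch: a harmonic field on a mesh region, with fixed boundary values,
# via a sparse Laplace solve (uniform weights instead of concavity-aware ones).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def harmonic_field(n_vertices, edges, boundary_values):
    """edges: iterable of (i, j) vertex pairs; boundary_values: dict vertex -> value."""
    L = sp.lil_matrix((n_vertices, n_vertices))
    for i, j in edges:                       # assemble a uniform graph Laplacian
        L[i, j] -= 1; L[j, i] -= 1
        L[i, i] += 1; L[j, j] += 1
    b = np.zeros(n_vertices)
    for v, val in boundary_values.items():   # impose Dirichlet boundary constraints
        L[v, :] = 0
        L[v, v] = 1
        b[v] = val
    return spla.spsolve(L.tocsr(), b)        # scalar field; isolines are its level sets
```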
  • Item
    Accompany Children's Learning for You: An Intelligent Companion Learning System
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Qian, Jiankai; Jiang, Xinbo; Ma, Jiayao; Li, Jiachen; Gao, Zhenzhen; Qin, Xueying; Hauser, Helwig and Alliez, Pierre
    Nowadays, parents attach importance to their children's primary education but often lack the time and the correct pedagogical principles to accompany their children's learning. In addition, existing learning systems cannot perceive children's emotional changes, and they may cause self‐control and cognitive problems in children because they rely on smart devices such as mobile phones and tablets. To tackle these issues, we propose an intelligent companion learning system, the IARE, to accompany children in learning English words. The IARE realizes the perception of and feedback on children's engagement through an intelligent agent (IA) module, and presents humanized interaction based on projective Augmented Reality (AR). Specifically, the IA perceives changes in the children's learning engagement and their spelling status in real time through our online lightweight temporal multiple instance attention module and a character recognition module, based on which it analyses the performance of the individual learning process and gives appropriate feedback and guidance. We allow children to interact with physical letters, thus avoiding excessive interference from electronic devices. To test the efficacy of our system, we conduct a pilot study with 14 children learning English. The results show that our system can significantly improve children's intrinsic motivation and self‐efficacy.
  • Item
    State of the Art of Molecular Visualization in Immersive Virtual Environments
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Kuťák, David; Vázquez, Pere‐Pau; Isenberg, Tobias; Krone, Michael; Baaden, Marc; Byška, Jan; Kozlíková, Barbora; Miao, Haichao; Hauser, Helwig and Alliez, Pierre
    Visualization plays a crucial role in molecular and structural biology. It has been successfully applied to a variety of tasks, including structural analysis and interactive drug design. While some of the challenges in this area can be overcome with more advanced visualization and interaction techniques, others are challenging primarily due to the limitations of the hardware devices used to interact with the visualized content. Consequently, visualization researchers are increasingly trying to take advantage of new technologies to facilitate the work of domain scientists. Some typical problems associated with classic 2D interfaces, such as regular desktop computers, are a lack of natural spatial understanding and interaction, and a limited field of view. These problems could be solved by immersive virtual environments and corresponding hardware, such as virtual reality head‐mounted displays. Thus, researchers are investigating the potential of immersive virtual environments in the field of molecular visualization. There is already a body of work ranging from educational approaches to protein visualization to applications for collaborative drug design. This review focuses on molecular visualization in immersive virtual environments as a whole, aiming to cover this area comprehensively. We divide the existing papers into different groups based on their application areas and the types of tasks performed. Furthermore, we also include a list of available software tools. We conclude the report with a discussion of potential future research on molecular visualization in immersive environments.
  • Item
    Evonne: A Visual Tool for Explaining Reasoning with OWL Ontologies and Supporting Interactive Debugging
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Méndez, J.; Alrabbaa, C.; Koopmann, P.; Langner, R.; Baader, F.; Dachselt, R.; Hauser, Helwig and Alliez, Pierre
    OWL is a powerful language for formalizing terminologies in an ontology. Its main strength lies in its foundation on description logics, allowing systems to automatically deduce implicit information through logical reasoning. However, since ontologies are often complex, understanding the outcome of the reasoning process is not always straightforward. Unlike existing tools for exploring ontologies, our visualization tool Evonne is tailored towards explaining logical consequences. In addition, it supports the debugging of unwanted consequences and allows for an interactive comparison of the impact of removing statements from the ontology. Our visual approach combines (1) specialized views for the explanation of logical consequences and the structure of the ontology, (2) multiple layout modes for iteratively exploring explanations, (3) detailed explanations of specific reasoning steps, (4) cross‐view highlighting and colour coding of the visualization components, (5) features for dealing with visual complexity and (6) comparison and exploration of possible fixes to the ontology. We evaluated Evonne in a qualitative study with 16 experts in logics, and their positive feedback confirms the value of our concepts for explaining reasoning and debugging ontologies.
  • Item
    Visual Parameter Space Exploration in Time and Space
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Piccolotto, Nikolaus; Bögl, Markus; Miksch, Silvia; Hauser, Helwig and Alliez, Pierre
    Computational models, such as simulations, are central to a wide range of fields in science and industry. Those models take input parameters and produce some output. To fully exploit their utility, relations between parameters and outputs must be understood. These include, for example, which parameter setting produces the best result (optimization) or which ranges of parameter settings produce a wide variety of results (sensitivity). Such tasks are often difficult to achieve for various reasons, for example, the size of the parameter space, and are therefore supported with visual analytics. In this paper, we survey visual parameter space exploration (VPSE) systems involving spatial and temporal data. We focus on interactive visualizations and user interfaces. Through thematic analysis of the surveyed papers, we identify common workflow steps and approaches to support them. We also identify topics for future work that will help enable VPSE on a greater variety of computational models.
  • Item
    Faster Edge‐Path Bundling through Graph Spanners
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Wallinger, Markus; Archambault, Daniel; Auber, David; Nöllenburg, Martin; Peltonen, Jaakko; Hauser, Helwig and Alliez, Pierre
    Edge‐Path bundling is a recent edge bundling approach that does not incur ambiguities caused by bundling disconnected edges together. Although the approach produces less ambiguous bundlings, it suffers from high computational cost. In this paper, we present a new Edge‐Path bundling approach that increases the computational speed of the algorithm without reducing the quality of the bundling. First, we demonstrate that biconnected components can be processed separately in an Edge‐Path bundling of a graph without changing the result. Then, we present a new edge bundling algorithm that is based on observing and exploiting a strong relationship between Edge‐Path bundling and graph spanners. Although the worst‐case complexity of the approach is the same as that of the original Edge‐Path bundling algorithm, we conduct experiments to demonstrate that the new approach is – times faster than Edge‐Path bundling depending on the dataset, which brings its practical running time more in line with traditional edge bundling algorithms.
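    The two ingredients can be illustrated with standard tools (this is a generic sketch, not the paper's pipeline): NetworkX's biconnected-component decomposition and a classic greedy t-spanner whose shortest paths approximate those of the full graph.

```python
# Hedged sketch: biconnected components and a classic greedy t-spanner.
import networkx as nx

def greedy_spanner(G, t=2.0, weight="weight"):
    """Greedy t-spanner: keep an edge only if the spanner cannot already connect
    its endpoints within t times the edge weight (edges processed by weight)."""
    S = nx.Graph()
    S.add_nodes_from(G.nodes)
    for u, v, w in sorted(G.edges(data=weight, default=1.0), key=lambda e: e[2]):
        try:
            d = nx.shortest_path_length(S, u, v, weight=weight)
        except nx.NetworkXNoPath:
            d = float("inf")
        if d > t * w:
            S.add_edge(u, v, weight=w)
    return S

G = nx.random_geometric_graph(100, 0.2, seed=1)
for u, v in G.edges:
    G.edges[u, v]["weight"] = 1.0
components = list(nx.biconnected_components(G))   # node sets, processable separately
spanner = greedy_spanner(G, t=2.0)                # sparser graph for path queries
```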
  • Item
    Triangle Influence Supersets for Fast Distance Computation
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Pujol, Eduard; Chica, Antonio; Hauser, Helwig and Alliez, Pierre
    We present an acceleration structure to efficiently query the Signed Distance Field (SDF) of volumes represented by triangle meshes. The method is based on a discretization of space. In each node, we store the triangles defining the SDF behaviour in that region. Consequently, we reduce the cost of the nearest triangle search, prioritizing query performance, while avoiding approximations of the field. We propose a method to conservatively compute the set of triangles influencing each node. Given a node, each triangle defines a region of space such that all points inside it are closer to a point in the node than the triangle is. This property is used to build the SDF acceleration structure. We do not need to explicitly compute these regions, which is crucial to the performance of our approach. We prove the correctness of the proposed method and compare it to similar approaches, confirming that our method produces faster query times than other exact methods.
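    A query against such a structure can be sketched as follows. Assumptions: a uniform grid keyed by integer cell indices, per-node candidate lists precomputed conservatively, a query point lying in a populated cell, and a hypothetical point_triangle_distance helper supplied by the caller for the exact point-to-triangle distance.

```python
# Hedged sketch of querying a per-node candidate structure; point_triangle_distance
# is a hypothetical helper, not part of any specific library.
import numpy as np

def query_unsigned_distance(p, grid, cell_size, origin, point_triangle_distance):
    """grid: dict mapping an integer cell index (i, j, k) to its candidate triangles."""
    cell = tuple(((np.asarray(p, dtype=float) - origin) // cell_size).astype(int))
    candidates = grid[cell]
    # Only the triangles stored for this node have to be tested; a conservative
    # preprocess guarantees the true nearest triangle is among them.
    return min(point_triangle_distance(p, tri) for tri in candidates)
```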
  • Item
    A Characterization of Interactive Visual Data Stories With a Spatio‐Temporal Context
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Mayer, Benedikt; Steinhauer, Nastasja; Preim, Bernhard; Meuschke, Monique; Hauser, Helwig and Alliez, Pierre
    Large‐scale issues with a spatial and temporal context, such as the COVID‐19 pandemic, the war against Ukraine, and climate change, have given visual storytelling with data a lot of attention in online journalism, confirming its high effectiveness and relevance for conveying stories. Thus, new ways have emerged that expand the space of visual storytelling techniques. However, interactive visual data stories with a spatio‐temporal context have not been extensively studied yet. In particular, quantitative information about the layout and media used, the visual storytelling techniques, and the visual encoding of space‐time is relevant for gaining a deeper understanding of how such stories are commonly built to convey complex information in a comprehensible way. Covering these three aspects, we propose a design space derived by merging and adjusting existing approaches, which we used to categorize 130 collected web‐based visual data stories with a spatio‐temporal context from 2018 to 2022. An analysis of the collected data reveals the power of large‐scale issues to shape the landscape of storytelling techniques and a trend towards a simplified consumability of stories. Taken together, our findings can serve story authors as inspiration regarding which storytelling techniques to include in their own spatio‐temporal data stories.
  • Item
    Deep Learning for Scene Flow Estimation on Point Clouds: A Survey and Prospective Trends
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Li, Zhiqi; Xiang, Nan; Chen, Honghua; Zhang, Jianjun; Yang, Xiaosong; Hauser, Helwig and Alliez, Pierre
    Aiming at obtaining structural information and the 3D motion of dynamic scenes, scene flow estimation has long been a topic of research interest in computer vision and computer graphics. It is also a fundamental task for various applications such as autonomous driving. Compared to previous methods that utilize image representations, much recent research builds upon the power of deep learning and focuses on point cloud representations to conduct 3D flow estimation. This paper comprehensively reviews the pioneering literature on scene flow estimation based on point clouds. It examines the learning paradigms in detail and presents insightful comparisons between state‐of‐the‐art methods that use deep learning for scene flow estimation. Furthermore, this paper investigates various higher‐level scene understanding tasks, including object tracking and motion segmentation, and concludes with an overview of foreseeable research trends for scene flow estimation.
  • Item
    Are We There Yet? A Roadmap of Network Visualization from Surveys to Task Taxonomies
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Filipov, Velitchko; Arleo, Alessio; Miksch, Silvia; Hauser, Helwig and Alliez, Pierre
    Networks are abstract and ubiquitous data structures, defined as a set of data points and relationships between them. Network visualization provides meaningful representations of these data, supporting researchers in understanding the connections, gathering insights, and detecting and identifying unexpected patterns. Research in this field is focusing on increasingly challenging problems, such as visualizing dynamic, complex, multivariate, and geospatial networked data. This ever‐growing, and widely varied, body of research has led to several surveys being published, each covering one or more disciplines of network visualization. Despite this effort, the variety and complexity of this research represent an obstacle when surveying the domain and building a comprehensive overview of the literature. Furthermore, there is a lack of clarification and uniformity between the terminology used in each of the surveys, which requires further effort when mapping and categorizing the plethora of different visualization techniques and approaches. In this paper, we aim to provide researchers and practitioners alike with a “roadmap” detailing the current research trends in the field of network visualization. We design our contribution as a meta‐survey in which we discuss, summarize, and categorize recent surveys and task taxonomies published in the context of network visualization. We identify more and less saturated disciplines of research and consolidate the terminology used in the surveyed literature. We also survey the available task taxonomies, providing a comprehensive analysis of their varying support for each network visualization discipline and establishing and discussing a classification of the individual tasks. With this combined analysis of surveys and task taxonomies, we provide an overarching structure of the field, from which we extrapolate the current state of research and promising directions for future work.
  • Item
    Smooth Transitions Between Parallel Coordinates and Scatter Plots via Polycurve Star Plots
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Kiesel, Dora; Riehmann, Patrick; Froehlich, Bernd; Hauser, Helwig and Alliez, Pierre
    This paper presents new techniques for seamlessly transitioning between parallel coordinate plots, star plots, and scatter plots. The star plot serves as a mediator visualization between parallel coordinate plots and scatter plots since it uses lines to represent data items as parallel coordinates do and can arrange axes orthogonally as used for scatter plots. The design of the transitions also motivated a new variant of the star plot, the polycurve star plot, that uses curved lines instead of straight ones and has advantages both in terms of space utilization and the detection of clusters. Furthermore, we developed a geometrically motivated method to embed scatter points from a scatter plot into star plots and parallel coordinate plots to track the transition of structural information such as clusters and correlations between the different plot types. The integration of our techniques into an interactive analysis tool for exploring multivariate data demonstrates the advantages and utility of our approach over a multi‐view approach for scatter plots and parallel coordinate plots, which we confirmed in a user study and concrete usage scenarios.
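    The geometric idea behind the transition can be sketched in a few lines. This is an illustration under assumptions (normalized values in [0, 1], axes interpolated linearly between a parallel layout and a radial star layout); the polycurve variant would additionally replace straight segments with curves.

```python
# Hedged sketch: interpolate axis positions between parallel coordinates (t = 0)
# and a star plot (t = 1), then place one data item as a polyline on those axes.
import numpy as np

def axis_anchors(d, t):
    """Return (origins, directions) of d axes; t = 0 parallel layout, t = 1 star layout."""
    angles = 2 * np.pi * np.arange(d) / d
    star = np.stack([np.cos(angles), np.sin(angles)], axis=1)        # radial unit axes
    parallel_origin = np.stack([np.arange(d, dtype=float), np.zeros(d)], axis=1)
    parallel_dir = np.tile([[0.0, 1.0]], (d, 1))                     # vertical axes
    origins = (1 - t) * parallel_origin                              # star axes share the centre
    dirs = (1 - t) * parallel_dir + t * star
    return origins, dirs

def plot_item(values, t):
    """values: normalized attribute values in [0, 1]; returns the polyline vertices."""
    origins, dirs = axis_anchors(len(values), t)
    return origins + dirs * np.asarray(values, dtype=float)[:, None]
```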
  • Item
    Multilevel Robustness for 2D Vector Field Feature Tracking, Selection and Comparison
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Yan, Lin; Ullrich, Paul Aaron; Van Roekel, Luke P.; Wang, Bei; Guo, Hanqi; Hauser, Helwig and Alliez, Pierre
    Critical point tracking is a core topic in scientific visualization for understanding the dynamic behaviour of time‐varying vector field data. The topological notion of robustness has been introduced recently to quantify the structural stability of critical points, that is, the robustness of a critical point is the minimum amount of perturbation to the vector field necessary to cancel it. A theoretical basis has been established previously that relates critical point tracking with the notion of robustness, in particular, critical points could be tracked based on their closeness in stability, measured by robustness, instead of just distance proximity within the domain. However, in practice, the computation of classic robustness may produce artifacts when a critical point is close to the boundary of the domain; thus, we do not have a complete picture of the vector field behaviour within its local neighbourhood. To alleviate these issues, we introduce a multilevel robustness framework for the study of 2D time‐varying vector fields. We compute the robustness of critical points across varying neighbourhoods to capture the multiscale nature of the data and to mitigate the boundary effect suffered by the classic robustness computation. We demonstrate via experiments that such a new notion of robustness can be combined seamlessly with existing feature tracking algorithms to improve the visual interpretability of vector fields in terms of feature tracking, selection and comparison for large‐scale scientific simulations. We observe, for the first time, that the minimum multilevel robustness is highly correlated with physical quantities used by domain scientists in studying a real‐world tropical cyclone dataset. Such an observation helps to increase the physical interpretability of robustness.
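    As background on the kind of data being tracked, here is a rough sketch (not the paper's method) of locating candidate critical points in a sampled 2D vector field: a grid cell can only contain a zero of the (bilinearly interpolated) field if both components change sign across its corners; robustness-style analysis then asks how small a perturbation would cancel such a point.

```python
# Hedged sketch: candidate critical-point cells via a sign-change test on the corners.
import numpy as np

def candidate_critical_cells(u, v):
    """u, v: (H, W) components of the vector field on a regular grid."""
    cells = []
    for i in range(u.shape[0] - 1):
        for j in range(u.shape[1] - 1):
            cu = u[i:i + 2, j:j + 2]
            cv = v[i:i + 2, j:j + 2]
            if cu.min() <= 0 <= cu.max() and cv.min() <= 0 <= cv.max():
                cells.append((i, j))    # necessary (not sufficient) condition for a zero
    return cells
```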
  • Item
    iFUNDit: Visual Profiling of Fund Investment Styles
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Zhang, R.; Ku, B. K.; Wang, Y.; Yue, X.; Liu, S.; Li, K.; Qu, H.; Hauser, Helwig and Alliez, Pierre
    Mutual funds are becoming increasingly popular with the emergence of Internet finance. Clear profiling of a fund's investment style is crucial for fund managers to evaluate their investment strategies, and for investors to understand their investments. However, it is challenging to profile a fund's investment style as it requires a comprehensive analysis of complex multi‐dimensional temporal data. In addition, different fund managers and investors have different focuses when analysing a fund's investment style. To address this issue, we propose iFUNDit, an interactive visual analytic system for fund investment style analysis. The system decomposes a fund's critical features into performance attributes and investment style factors, and visualizes them in a set of coupled views: a fund and manager view, to delineate the distribution of funds' and managers' critical attributes on the market; a cluster view, to show the similarity of investment styles between different funds; and a detail view, to analyse the evolution of fund investment style. The system provides a holistic overview of fund data and facilitates a streamlined analysis of investment style at both the fund and the manager level. The effectiveness and usability of the system are demonstrated through domain expert interviews and case studies using a real mutual fund dataset.
  • Item
    ARAP Revisited: Discretizing the Elastic Energy using Intrinsic Voronoi Cells
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Finnendahl, Ugo; Schwartz, Matthias; Alexa, Marc; Hauser, Helwig and Alliez, Pierre
    As‐rigid‐as‐possible (ARAP) surface modelling is widely used for interactive deformation of triangle meshes. We show that ARAP can be interpreted as minimizing a discretization of an elastic energy based on non‐conforming elements defined over dual orthogonal cells of the mesh. Using the Voronoi cells rather than an orthogonal dual of the extrinsic mesh guarantees that the energy is non‐negative over each cell. We represent the intrinsic Delaunay edges extrinsically as polylines over the mesh, encoded in barycentric coordinates relative to the mesh vertices. This modification of the original ARAP energy, which we term iARAP, remedies problems stemming from non‐Delaunay edges in the original approach. Unlike the spokes‐and‐rims version of the ARAP approach, it is less susceptible to the triangulation of the surface. We provide examples of deformations generated with iARAP and contrast them with other versions of ARAP. We also discuss the properties of the Laplace‐Beltrami operator implicitly introduced with the new discretization.
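    For reference, the classic ARAP energy that the paper re-discretizes, in standard notation (rest positions v_i, deformed positions v'_i, per-vertex rotation R_i, edge weights w_ij); the specific weights and cells used by iARAP differ from this generic form:

```latex
E(V') \;=\; \sum_{i} \sum_{j \in \mathcal{N}(i)} w_{ij} \,\bigl\| (v'_i - v'_j) - R_i\,(v_i - v_j) \bigr\|^2
```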
  • Item
    Corrigendum to “Making Procedural Water Waves Boundary‐aware”, “Primal/Dual Descent Methods for Dynamics”, and “Detailed Rigid Body Simulation with Extended Position Based Dynamics”
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Hauser, Helwig and Alliez, Pierre
  • Item
    Issue Information
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Hauser, Helwig and Alliez, Pierre