44-Issue 1


Lightweight Voronoi Sponza


Issue Information


Editorial

Alliez, Pierre
Wimmer, Michael
Westermann, Rüdiger
Original Article

A Generative Adversarial Network for Upsampling of Direct Volume Rendering Images

Jin, Ge
Jung, Younhyun
Fulham, Michael
Feng, Dagan
Kim, Jinman
Original Article

BI‐LAVA: Biocuration With Hierarchical Image Labelling Through Active Learning and Visual Analytics

Trelles, Juan
Wentzel, Andrew
Berrios, William
Shatkay, Hagit
Marai, G. Elisabeta
Original Article

Dynamic Voxel‐Based Global Illumination

Cosin Ayerbe, Alejandro
Poulin, Pierre
Patow, Gustavo
Original Article

Automatic Inbetweening for Stroke‐Based Painterly Animation

Barroso, Nicolas
Fondevilla, Amélie
Vanderhaeghe, David
Original Article

Efficient Environment Map Rendering Based on Decomposition

Wu, Yu‐Ting
Original Article

Deep‐Learning‐Based Facial Retargeting Using Local Patches

Choi, Yeonsoo
Lee, Inyup
Cha, Sihun
Kim, Seonghyeon
Jung, Sunjin
Noh, Junyong
Original Article

Stress‐Aligned Hexahedral Lattice Structures

Bukenberger, D. R.
Wang, J.
Wu, J.
Westermann, R.
Original Article

Detecting, Interpreting and Modifying the Heterogeneous Causal Network in Multi‐Source Event Sequences

Xu, Shaobin
Sun, Minghui
Original Article

Light Distribution Models for Tree Growth Simulation

Nauber, Tristan
Mäder, Patrick
Original Article

Mesh Simplification for Unfolding*

Bhargava, M.
Schreck, C.
Freire, M.
Hugron, P. A.
Lefebvre, S.
Sellán, S.
Bickel, B.
Original Article

Conditional Font Generation With Content Pre‐Train and Style Filter

Hong, Yang
Li, Yinfei
Qiao, Xiaojun
Zhang, Junsong
Original Article

ConAn: Measuring and Evaluating User Confidence in Visual Data Analysis Under Uncertainty

Musleh, M.
Ceneda, D.
Ehlers, H.
Raidou, R. G.
Original Article

MoNeRF: Deformable Neural Rendering for Talking Heads via Latent Motion Navigation

Li, X.
Ding, Y.
Li, R.
Tang, Z.
Li, K.
Original Article

HPSCAN: Human Perception‐Based Scattered Data Clustering

Hartwig, S.
Onzenoodt, C. v.
Engel, D.
Hermosilla, P.
Ropinski, T.
Original Article

A Hybrid Lagrangian–Eulerian Formulation of Thin‐Shell Fracture

Fan, L.
Chitalu, F. M.
Komura, T.
Original Article

A Scalable System for Visual Analysis of Ocean Data

Jain, Toshit
Singh, Upkar
Singh, Varun
Boda, Vijay Kumar
Hotz, Ingrid
Vadhiyar, Sathish S.
Vinayachandran, P. N.
Natarajan, Vijay
Original Article

Single‐Shot Example Terrain Sketching by Graph Neural Networks

Liu, Y.
Benes, B.
Original Article

Learning Climbing Controllers for Physics‐Based Characters

Kang, Kyungwon
Gu, Taehong
Kwon, Taesoo
Original Article

MetapathVis: Inspecting the Effect of Metapath in Heterogeneous Network Embedding via Visual Analytics

Li, Quan
Tian, Yun
Wang, Xiyuan
Xie, Laixin
Lin, Dandan
Yi, Lingling
Ma, Xiaojuan
Original Article

A Texture‐Free Practical Model for Realistic Surface‐Based Rendering of Woven Fabrics

Khattar, Apoorv
Zhu, Junqiu
Yan, Ling‐Qi
Montazeri, Zahra
Original Article

MANDALA—Visual Exploration of Anomalies in Industrial Multivariate Time Series Data

Suschnigg, J.
Mutlu, B.
Koutroulis, G.
Hussain, H.
Schreck, T.
Original Article

DeepFracture: A Generative Approach for Predicting Brittle Fractures with Neural Discrete Representation Learning

Huang, Yuhang
Kanai, Takashi
Original Article

Immersive and Interactive Learning With eDIVE: A Solution for Creating Collaborative VR Education Experiences

Brůža, Vojtěch
Šašinková, Alžběta
Šašinka, Čeněk
Stachoň, Zdeněk
Kozlíková, Barbora
Chmelík, Jiří
Original Article

GeoCode: Interpretable Shape Programs

Pearl, Ofek
Lang, Itai
Hu, Yuhua
Yeh, Raymond A.
Hanocka, Rana
Major Revision from Eurographics Conference

Generalized Lipschitz Tracing of Implicit Surfaces

Bán, Róbert
Valasek, Gábor
Major Revision from Eurographics Conference

A Particle‐Based Approach to Extract Dynamic 3D FTLE Ridge Geometry

Stelter, Daniel
Wilde, Thomas
Rössl, Christian
Theisel, Holger
Major Revision from Eurographics Conference

Continuous Toolpath Optimization for Simultaneous Four‐Axis Subtractive Manufacturing

Zhang, Zhenmin
Shi, Zihan
Zhong, Fanchao
Zhang, Kun
Zhang, Wenjing
Guo, Jianwei
Tu, Changhe
Zhao, Haisen
Major Revision from Eurographics Conference

Survey of Inter‐Prediction Methods for Time‐Varying Mesh Compression

Dvořák, Jan
Hácha, Filip
Arvanitis, Gerasimos
Podgorelec, David
Moustakas, Konstantinos
Váša, Libor
Major Revision from EuroVis Symposium

Natural Language Generation for Visualizations: State of the Art, Challenges and Future Directions

Hoque, E.
Islam, M. Saidul
Major Revision from EuroVis Symposium

The State of the Art in User‐Adaptive Visualizations

Yanez, Fernando
Conati, Cristina
Ottley, Alvitta
Nobre, Carolina
Major Revision from Pacific Graphics

THGS: Lifelike Talking Human Avatar Synthesis From Monocular Video Via 3D Gaussian Splatting

Chen, Chuang
Yu, Lingyun
Yang, Quanwei
Zheng, Aihua
Xie, Hongtao
Major Revision from EG Symposium on Rendering

Constrained Spectral Uplifting for HDR Environment Maps

Tódová, L.
Wilkie, A.

Erratum to “Rational Bézier Guarding”



BibTeX (44-Issue 1)
                
@article{10.1111:cgf.70003,
  journal = {Computer Graphics Forum},
  title = {{Lightweight Voronoi Sponza}},
  author = {},
  year = {2025},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70003}
}

@article{10.1111:cgf.15118,
  journal = {Computer Graphics Forum},
  title = {{Issue Information}},
  author = {},
  year = {2025},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15118}
}

@article{10.1111:cgf.70004,
  journal = {Computer Graphics Forum},
  title = {{Editorial}},
  author = {Alliez, Pierre and Wimmer, Michael and Westermann, Rüdiger},
  year = {2025},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70004}
}

@article{10.1111:cgf.15198,
  journal = {Computer Graphics Forum},
  title = {{A Generative Adversarial Network for Upsampling of Direct Volume Rendering Images}},
  author = {Jin, Ge and Jung, Younhyun and Fulham, Michael and Feng, Dagan and Kim, Jinman},
  year = {2024},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15198}
}

@article{10.1111:cgf.15261,
  journal = {Computer Graphics Forum},
  title = {{BI‐LAVA: Biocuration With Hierarchical Image Labelling Through Active Learning and Visual Analytics}},
  author = {Trelles, Juan and Wentzel, Andrew and Berrios, William and Shatkay, Hagit and Marai, G. Elisabeta},
  year = {2024},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15261}
}

@article{10.1111:cgf.15262,
  journal = {Computer Graphics Forum},
  title = {{Dynamic Voxel‐Based Global Illumination}},
  author = {Cosin Ayerbe, Alejandro and Poulin, Pierre and Patow, Gustavo},
  year = {2024},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15262}
}

@article{10.1111:cgf.15201,
  journal = {Computer Graphics Forum},
  title = {{Automatic Inbetweening for Stroke‐Based Painterly Animation}},
  author = {Barroso, Nicolas and Fondevilla, Amélie and Vanderhaeghe, David},
  year = {2024},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15201}
}

@article{10.1111:cgf.15264,
  journal = {Computer Graphics Forum},
  title = {{Efficient Environment Map Rendering Based on Decomposition}},
  author = {Wu, Yu‐Ting},
  year = {2024},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15264}
}

@article{10.1111:cgf.15263,
  journal = {Computer Graphics Forum},
  title = {{Deep‐Learning‐Based Facial Retargeting Using Local Patches}},
  author = {Choi, Yeonsoo and Lee, Inyup and Cha, Sihun and Kim, Seonghyeon and Jung, Sunjin and Noh, Junyong},
  year = {2024},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15263}
}

@article{10.1111:cgf.15265,
  journal = {Computer Graphics Forum},
  title = {{Stress‐Aligned Hexahedral Lattice Structures}},
  author = {Bukenberger, D. R. and Wang, J. and Wu, J. and Westermann, R.},
  year = {2024},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15265}
}

@article{10.1111:cgf.15267,
  journal = {Computer Graphics Forum},
  title = {{Detecting, Interpreting and Modifying the Heterogeneous Causal Network in Multi‐Source Event Sequences}},
  author = {Xu, Shaobin and Sun, Minghui},
  year = {2024},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15267}
}

@article{10.1111:cgf.15268,
  journal = {Computer Graphics Forum},
  title = {{Light Distribution Models for Tree Growth Simulation}},
  author = {Nauber, Tristan and Mäder, Patrick},
  year = {2024},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15268}
}

@article{10.1111:cgf.15269,
  journal = {Computer Graphics Forum},
  title = {{Mesh Simplification for Unfolding*}},
  author = {Bhargava, M. and Schreck, C. and Freire, M. and Hugron, P. A. and Lefebvre, S. and Sellán, S. and Bickel, B.},
  year = {2024},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15269}
}

@article{10.1111:cgf.15270,
  journal = {Computer Graphics Forum},
  title = {{Conditional Font Generation With Content Pre‐Train and Style Filter}},
  author = {Hong, Yang and Li, Yinfei and Qiao, Xiaojun and Zhang, Junsong},
  year = {2024},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15270}
}

@article{10.1111:cgf.15272,
  journal = {Computer Graphics Forum},
  title = {{ConAn: Measuring and Evaluating User Confidence in Visual Data Analysis Under Uncertainty}},
  author = {Musleh, M. and Ceneda, D. and Ehlers, H. and Raidou, R. G.},
  year = {2024},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15272}
}

@article{10.1111:cgf.15274,
  journal = {Computer Graphics Forum},
  title = {{MoNeRF: Deformable Neural Rendering for Talking Heads via Latent Motion Navigation}},
  author = {Li, X. and Ding, Y. and Li, R. and Tang, Z. and Li, K.},
  year = {2024},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15274}
}

@article{10.1111:cgf.15275,
  journal = {Computer Graphics Forum},
  title = {{HPSCAN: Human Perception‐Based Scattered Data Clustering}},
  author = {Hartwig, S. and Onzenoodt, C. v. and Engel, D. and Hermosilla, P. and Ropinski, T.},
  year = {2024},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15275}
}

@article{10.1111:cgf.15273,
  journal = {Computer Graphics Forum},
  title = {{A Hybrid Lagrangian–Eulerian Formulation of Thin‐Shell Fracture}},
  author = {Fan, L. and Chitalu, F. M. and Komura, T.},
  year = {2025},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15273}
}

@article{10.1111:cgf.15279,
  journal = {Computer Graphics Forum},
  title = {{A Scalable System for Visual Analysis of Ocean Data}},
  author = {Jain, Toshit and Singh, Upkar and Singh, Varun and Boda, Vijay Kumar and Hotz, Ingrid and Vadhiyar, Sathish S. and Vinayachandran, P. N. and Natarajan, Vijay},
  year = {2025},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15279}
}

@article{10.1111:cgf.15281,
  journal = {Computer Graphics Forum},
  title = {{Single‐Shot Example Terrain Sketching by Graph Neural Networks}},
  author = {Liu, Y. and Benes, B.},
  year = {2025},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15281}
}

@article{10.1111:cgf.15284,
  journal = {Computer Graphics Forum},
  title = {{Learning Climbing Controllers for Physics‐Based Characters}},
  author = {Kang, Kyungwon and Gu, Taehong and Kwon, Taesoo},
  year = {2025},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15284}
}

@article{10.1111:cgf.15285,
  journal = {Computer Graphics Forum},
  title = {{MetapathVis: Inspecting the Effect of Metapath in Heterogeneous Network Embedding via Visual Analytics}},
  author = {Li, Quan and Tian, Yun and Wang, Xiyuan and Xie, Laixin and Lin, Dandan and Yi, Lingling and Ma, Xiaojuan},
  year = {2025},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15285}
}

@article{10.1111:cgf.15283,
  journal = {Computer Graphics Forum},
  title = {{A Texture‐Free Practical Model for Realistic Surface‐Based Rendering of Woven Fabrics}},
  author = {Khattar, Apoorv and Zhu, Junqiu and Yan, Ling‐Qi and Montazeri, Zahra},
  year = {2025},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15283}
}

@article{10.1111:cgf.70000,
  journal = {Computer Graphics Forum},
  title = {{MANDALA—Visual Exploration of Anomalies in Industrial Multivariate Time Series Data}},
  author = {Suschnigg, J. and Mutlu, B. and Koutroulis, G. and Hussain, H. and Schreck, T.},
  year = {2025},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70000}
}

@article{10.1111:cgf.70002,
  journal = {Computer Graphics Forum},
  title = {{DeepFracture: A Generative Approach for Predicting Brittle Fractures with Neural Discrete Representation Learning}},
  author = {Huang, Yuhang and Kanai, Takashi},
  year = {2025},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70002}
}

@article{10.1111:cgf.70001,
  journal = {Computer Graphics Forum},
  title = {{Immersive and Interactive Learning With eDIVE: A Solution for Creating Collaborative VR Education Experiences}},
  author = {Brůža, Vojtěch and Šašinková, Alžběta and Šašinka, Čeněk and Stachoň, Zdeněk and Kozlíková, Barbora and Chmelík, Jiří},
  year = {2025},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70001}
}

@article{10.1111:cgf.15276,
  journal = {Computer Graphics Forum},
  title = {{GeoCode: Interpretable Shape Programs}},
  author = {Pearl, Ofek and Lang, Itai and Hu, Yuhua and Yeh, Raymond A. and Hanocka, Rana},
  year = {2025},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15276}
}

@article{10.1111:cgf.15202,
  journal = {Computer Graphics Forum},
  title = {{Generalized Lipschitz Tracing of Implicit Surfaces}},
  author = {Bán, Róbert and Valasek, Gábor},
  year = {2024},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15202}
}

@article{10.1111:cgf.15203,
  journal = {Computer Graphics Forum},
  title = {{A Particle‐Based Approach to Extract Dynamic 3D FTLE Ridge Geometry}},
  author = {Stelter, Daniel and Wilde, Thomas and Rössl, Christian and Theisel, Holger},
  year = {2024},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15203}
}

@article{10.1111:cgf.15204,
  journal = {Computer Graphics Forum},
  title = {{Continuous Toolpath Optimization for Simultaneous Four‐Axis Subtractive Manufacturing}},
  author = {Zhang, Zhenmin and Shi, Zihan and Zhong, Fanchao and Zhang, Kun and Zhang, Wenjing and Guo, Jianwei and Tu, Changhe and Zhao, Haisen},
  year = {2024},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15204}
}

@article{10.1111:cgf.15278,
  journal = {Computer Graphics Forum},
  title = {{Survey of Inter‐Prediction Methods for Time‐Varying Mesh Compression}},
  author = {Dvořák, Jan and Hácha, Filip and Arvanitis, Gerasimos and Podgorelec, David and Moustakas, Konstantinos and Váša, Libor},
  year = {2025},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15278}
}

@article{10.1111:cgf.15266,
  journal = {Computer Graphics Forum},
  title = {{Natural Language Generation for Visualizations: State of the Art, Challenges and Future Directions}},
  author = {Hoque, E. and Islam, M. Saidul},
  year = {2024},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15266}
}

@article{10.1111:cgf.15271,
  journal = {Computer Graphics Forum},
  title = {{The State of the Art in User‐Adaptive Visualizations}},
  author = {Yanez, Fernando and Conati, Cristina and Ottley, Alvitta and Nobre, Carolina},
  year = {2024},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15271}
}

@article{10.1111:cgf.15282,
  journal = {Computer Graphics Forum},
  title = {{THGS: Lifelike Talking Human Avatar Synthesis From Monocular Video Via 3D Gaussian Splatting}},
  author = {Chen, Chuang and Yu, Lingyun and Yang, Quanwei and Zheng, Aihua and Xie, Hongtao},
  year = {2025},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15282}
}

@article{10.1111:cgf.15280,
  journal = {Computer Graphics Forum},
  title = {{Constrained Spectral Uplifting for HDR Environment Maps}},
  author = {Tódová, L. and Wilkie, A.},
  year = {2025},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15280}
}

@article{10.1111:cgf.15277,
  journal = {Computer Graphics Forum},
  title = {{Erratum to “Rational Bézier Guarding”}},
  author = {},
  year = {2025},
  publisher = {Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15277}
}

Recent Submissions

Now showing 1 - 36 of 36
  • Item
    Lightweight Voronoi Sponza
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025)
  • Item
    Issue Information
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025)
  • Item
    Editorial
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025) Alliez, Pierre; Wimmer, Michael; Westermann, Rüdiger
  • Item
    A Generative Adversarial Network for Upsampling of Direct Volume Rendering Images
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Jin, Ge; Jung, Younhyun; Fulham, Michael; Feng, Dagan; Kim, Jinman
    Direct volume rendering (DVR) is an important tool for scientific and medical imaging visualization. Modern GPU acceleration has made DVR more accessible; however, the production of high‐quality rendered images with high frame rates is computationally expensive. We propose a deep learning method with a reduced computational demand. We leveraged a conditional generative adversarial network (cGAN) to upsample DVR images (a rendered scene), with a reduced sampling rate to obtain similar visual quality to that of a fully sampled method. Our dvrGAN is combined with a colour‐based loss function that is optimized for DVR images where different structures such as skin, bone, etc. are distinguished by assigning them distinct colours. The loss function highlights the structural differences between images, by examining pixel‐level colour, and thus helps identify, for instance, small bones in the limbs that may not be evident with reduced sampling rates. We evaluated our method in DVR of human computed tomography (CT) and CT angiography (CTA) volumes. Our method retained image quality and reduced computation time when compared to fully sampled methods and outperformed existing state‐of‐the‐art upsampling methods.
  • Item
    BI‐LAVA: Biocuration With Hierarchical Image Labelling Through Active Learning and Visual Analytics
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Trelles, Juan; Wentzel, Andrew; Berrios, William; Shatkay, Hagit; Marai, G. Elisabeta
    In the biomedical domain, taxonomies organize the acquisition modalities of scientific images in hierarchical structures. Such taxonomies leverage large sets of correct image labels and provide essential information about the importance of a scientific publication, which could then be used in biocuration tasks. However, the hierarchical nature of the labels, the overhead of processing images, the absence or incompleteness of labelled data and the expertise required to label this type of data impede the creation of useful datasets for biocuration. From a multi‐year collaboration with biocurators and text‐mining researchers, we derive an iterative visual analytics and active learning (AL) strategy to address these challenges. We implement this strategy in a system called BI‐LAVA—Biocuration with Hierarchical Image Labelling through Active Learning and Visual Analytics. BI‐LAVA leverages a small set of image labels, a hierarchical set of image classifiers and AL to help model builders deal with incomplete ground‐truth labels, target a hierarchical taxonomy of image modalities and classify a large pool of unlabelled images. BI‐LAVA's front end uses custom encodings to represent data distributions, taxonomies, image projections and neighbourhoods of image thumbnails, which help model builders explore an unfamiliar image dataset and taxonomy and correct and generate labels. An evaluation with machine learning practitioners shows that our mixed human–machine approach successfully supports domain experts in understanding the characteristics of classes within the taxonomy, as well as validating and improving data quality in labelled and unlabelled collections.
  • Item
    Dynamic Voxel‐Based Global Illumination
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Cosin Ayerbe, Alejandro; Poulin, Pierre; Patow, Gustavo
    Global illumination computation in real time has been an objective for Computer Graphics since its inception. Unfortunately, its implementation has challenged up to now the most advanced hardware and software solutions. We propose a real‐time voxel‐based global illumination solution for a single light bounce that handles static and dynamic objects with diffuse materials under a dynamic light source. The combination of ray tracing and voxelization on the GPU offers scalability and performance. Our divide‐and‐win approach, which ray traces separately static and dynamic objects, reduces the re‐computation load with updates of any number of dynamic objects. Our results demonstrate the effectiveness of our approach, allowing the real‐time display of global illumination effects, including colour bleeding and indirect shadows, for complex scenes containing millions of polygons.
  • Item
    Automatic Inbetweening for Stroke‐Based Painterly Animation
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Barroso, Nicolas; Fondevilla, Amélie; Vanderhaeghe, David
    Painterly 2D animation, like the paint‐on‐glass technique, is a tedious task performed by skilled artists, primarily using traditional manual methods. Although CG tools can simplify the creation process, previous works often focus on temporal coherence, which typically results in the loss of the handmade look and feel. In contrast to cartoon animation, where regions are typically filled with smooth gradients, stroke‐based stylized 2D animation requires careful consideration of how shapes are filled, as each stroke may be perceived individually. We propose a method to generate intermediate frames using example keyframes and a motion description. This method allows artists to create only one image for every five to 10 output images in the animation, while the automatically generated intermediate frames provide plausible inbetween frames.
  • Item
    Efficient Environment Map Rendering Based on Decomposition
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Wu, Yu‐Ting
    This paper presents an efficient environment map sampling algorithm designed to render high‐quality, low‐noise images with only a few light samples, making it ideal for real‐time applications. We observe that bright pixels in the environment map produce high‐frequency shading effects, such as sharp shadows and shading, while the rest influence the overall tone of the scene. Building on this insight, our approach differs from existing techniques by categorizing the pixels in an environment map into emissive and non‐emissive regions and developing specialized algorithms tailored to the distinct properties of each region. By decomposing the environment lighting, we ensure that light sources are deposited on bright pixels, leading to more accurate shadows and specular highlights. Additionally, this strategy allows us to exploit the smoothness in the low‐frequency component by rendering a smaller image with more lights, thereby enhancing shading accuracy. Extensive experiments demonstrate that our method significantly reduces shadow artefacts and image noise compared to previous techniques, while also achieving lower numerical errors across a range of illumination types, particularly under limited sample conditions.
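
As background for the decomposition strategy described above, the following is a minimal sketch of the classic luminance-weighted importance sampling of a latitude-longitude environment map that such methods build on. It illustrates the generic baseline only, not the paper's emissive/non-emissive decomposition; the function name, array layout and weighting are illustrative assumptions.

import numpy as np

def sample_env_map(env, n_samples, rng=None):
    """Draw light sample directions from an HxWx3 lat-long radiance map."""
    rng = np.random.default_rng() if rng is None else rng
    h, w, _ = env.shape
    # Per-pixel luminance, weighted by sin(theta) for the solid angle
    # subtended by each latitude-longitude pixel.
    lum = env @ np.array([0.2126, 0.7152, 0.0722])
    theta = (np.arange(h) + 0.5) / h * np.pi
    prob = lum * np.sin(theta)[:, None]
    prob = prob / prob.sum()
    # Sample pixel indices proportionally to the discrete distribution.
    flat = rng.choice(h * w, size=n_samples, p=prob.ravel())
    rows, cols = np.divmod(flat, w)
    # Convert pixel centres to directions on the unit sphere.
    th = (rows + 0.5) / h * np.pi
    ph = (cols + 0.5) / w * 2.0 * np.pi
    dirs = np.stack([np.sin(th) * np.cos(ph), np.cos(th), np.sin(th) * np.sin(ph)], axis=-1)
    return dirs, env[rows, cols], prob[rows, cols]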
  • Item
    Deep‐Learning‐Based Facial Retargeting Using Local Patches
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Choi, Yeonsoo; Lee, Inyup; Cha, Sihun; Kim, Seonghyeon; Jung, Sunjin; Noh, Junyong
    In the era of digital animation, the quest to produce lifelike facial animations for virtual characters has led to the development of various retargeting methods. While the retargeting of facial motion between models of similar shapes has been very successful, challenges arise when the retargeting is performed on stylized or exaggerated 3D characters that deviate significantly from human facial structures. In this scenario, it is important to consider the target character's facial structure and possible range of motion to preserve the semantics assumed by the original facial motions after the retargeting. To achieve this, we propose a local patch‐based retargeting method that transfers facial animations captured in a source performance video to a target stylized 3D character. Our method consists of three modules. The Automatic Patch Extraction Module extracts local patches from the source video frame. These patches are processed through the Reenactment Module to generate correspondingly re‐enacted target local patches. The Weight Estimation Module calculates the animation parameters for the target character at every frame for the creation of a complete facial animation sequence. Extensive experiments demonstrate that our method can successfully transfer the semantic meaning of source facial expressions to stylized characters with considerable variations in facial feature proportion.
  • Item
    Stress‐Aligned Hexahedral Lattice Structures
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Bukenberger, D. R.; Wang, J.; Wu, J.; Westermann, R.
    Maintaining the maximum stiffness of components with as little material as possible is an overarching objective in computational design and engineering. It is well‐established that in stiffness‐optimal designs, material is aligned with orthogonal principal stress directions. In the limit of material volume, this alignment forms micro‐structures resembling quads or hexahedra. Achieving a globally consistent layout of such orthogonal micro‐structures presents a significant challenge, particularly in three‐dimensional settings. In this paper, we propose a novel geometric algorithm for compiling stress‐aligned hexahedral lattice structures. Our method involves deforming an input mesh under load to align the resulting stress field along an orthogonal basis. The deformed object is filled with a hexahedral grid, and the deformation is reverted to recover the original shape. The resulting stress‐aligned mesh is used as basis for a final hollowing procedure, generating a volume‐reduced stiff infill composed of hexahedral micro‐structures. We perform quantitative comparisons with structural optimization and hexahedral meshing approaches and demonstrate the superior mechanical performance of our designs with finite element simulation experiments.
  • Item
    Detecting, Interpreting and Modifying the Heterogeneous Causal Network in Multi‐Source Event Sequences
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Xu, Shaobin; Sun, Minghui
    Uncovering causal relations from event sequences to guide decision‐making has become an essential task across various domains. Unfortunately, this task remains a challenge because real‐world event sequences are usually collected from multiple sources. Most existing works are specifically designed for homogeneous causal analysis between events from a single source, without considering cross‐source causality. In this work, we propose a heterogeneous causal analysis algorithm to detect the heterogeneous causal network between high‐level events in multi‐source event sequences while preserving the causal semantic relationships between diverse data sources. Additionally, the flexibility of our algorithm allows us to incorporate high‐level event similarity into the learning model and provides a fuzzy modification mechanism. Based on the algorithm, we further propose a visual analytics framework that supports interpreting the causal network at three granularities and offers a multi‐granularity modification mechanism to incorporate user feedback efficiently. We evaluate the accuracy of our algorithm through an experimental study, illustrate the usefulness of our system through a case study, and demonstrate the efficiency of our modification mechanisms through a user study.
  • Item
    Light Distribution Models for Tree Growth Simulation
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Nauber, Tristan; Mäder, Patrick
    The simulation and modelling of tree growth is a complex subject with a long history and an important area of research in both computer graphics and botany. For more than 50 years, new approaches to this topic have been presented frequently, including several aspects to increase realism. To further improve these achievements, we present a compact and robust functional‐structural plant model (FSPM) that is consistent with botanical rules. While we show several extensions to typical approaches, we focus mainly on the distribution of light as a resource in three‐dimensional space. We therefore present four different light distribution models based on ray tracing, space colonization, voxel‐based approaches and bounding volumes. By simulating individual light sources, we were able to create a more specified scene setup for plant simulation than has been presented in the past. By taking into account this more accurate distribution of light in the environment, the technique is capable of producing realistic and diverse tree models.
  • Item
    Mesh Simplification for Unfolding*
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Bhargava, M.; Schreck, C.; Freire, M.; Hugron, P. A.; Lefebvre, S.; Sellán, S.; Bickel, B.
    We present a computational approach for unfolding 3D shapes isometrically into the plane as a single patch without overlapping triangles. This is a hard, sometimes impossible, problem, which existing methods are forced to soften by allowing for map distortions or multiple patches. Instead, we propose a geometric relaxation of the problem: We modify the input shape until it admits an overlap‐free unfolding. We achieve this by locally displacing vertices and collapsing edges, guided by the unfolding process. We validate our algorithm quantitatively and qualitatively on a large dataset of complex shapes and show its proficiency by fabricating real shapes from paper.
  • Item
    Conditional Font Generation With Content Pre‐Train and Style Filter
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Hong, Yang; Li, Yinfei; Qiao, Xiaojun; Zhang, Junsong
    Automatic font generation aims to streamline the design process by creating new fonts with minimal style references. This technology significantly reduces the manual labour and costs associated with traditional font design. Image‐to‐image translation has been the dominant approach, transforming font images from a source style to a target style using a few reference images. However, this framework struggles to fully decouple content from style, particularly when dealing with significant style shifts. Despite these limitations, image‐to‐image translation remains prevalent due to two main challenges faced by conditional generative models: (1) inability to handle unseen characters and (2) difficulty in providing precise content representations equivalent to the source font. Our approach tackles these issues by leveraging recent advancements in Chinese character representation research to pre‐train a robust content representation model. This model not only handles unseen characters but also generalizes to non‐existent ones, a capability absent in traditional image‐to‐image translation. We further propose a Transformer‐based Style Filter that not only accurately captures stylistic features from reference images but also handles any combination of them, fostering greater convenience for practical automated font generation applications. Additionally, we incorporate content loss with commonly used pixel‐ and perceptual‐level losses to refine the generated results from a comprehensive perspective. Extensive experiments validate the effectiveness of our method, particularly its ability to handle unseen characters, demonstrating significant performance gains over existing state‐of‐the‐art methods.
  • Item
    ConAn: Measuring and Evaluating User Confidence in Visual Data Analysis Under Uncertainty
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Musleh, M.; Ceneda, D.; Ehlers, H.; Raidou, R. G.
    User confidence plays an important role in guided visual data analysis scenarios, especially when uncertainty is involved in the analytical process. However, measuring confidence in practical scenarios remains an open challenge, as previous work relies primarily on self‐reporting methods. In this work, we propose a quantitative approach to measure user confidence—as opposed to trust—in an analytical scenario. We do so by exploiting the respective user interaction provenance graph and examining the impact of guidance using a set of network metrics. We assess the usefulness of our proposed metrics through a user study that correlates results obtained from self‐reported confidence assessments and our metrics—both with and without guidance. The results suggest that our metrics improve the evaluation of user confidence compared to available approaches. In particular, we found a correlation between self‐reported confidence and some of the proposed provenance network metrics. The quantitative results, though, do not show a statistically significant impact of the guidance on user confidence. An additional descriptive analysis suggests that guidance could impact users' confidence and that the qualitative analysis of the provenance network topology can provide a comprehensive view of changes in user confidence. Our results indicate that our proposed metrics and the provenance network graph representation support the evaluation of user confidence and, subsequently, the effective development of guidance in VA.
  • Item
    MoNeRF: Deformable Neural Rendering for Talking Heads via Latent Motion Navigation
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Li, X.; Ding, Y.; Li, R.; Tang, Z.; Li, K.
    Novel view synthesis for talking heads presents significant challenges due to the complex and diverse motion transformations involved. Conventional methods often resort to reliance on structure priors, like facial templates, to warp observed images into a canonical space conducive to rendering. However, the incorporation of such priors introduces a trade‐off: while aiding in synthesis, they concurrently amplify model complexity, limiting generalizability to other deformable scenes. Departing from this paradigm, we introduce a pioneering solution: the motion‐conditioned neural radiance field, MoNeRF, designed to model talking heads through latent motion navigation. At the core of MoNeRF lies a novel approach utilizing a compact set of latent codes to represent orthogonal motion directions. This innovative strategy empowers MoNeRF to efficiently capture and depict intricate scene motion by linearly combining these latent codes. In an extended capability, MoNeRF facilitates motion control through latent code adjustments, supports view transfer based on reference videos, and seamlessly extends its applicability to model human bodies without necessitating structural modifications. Rigorous quantitative and qualitative experiments unequivocally demonstrate MoNeRF's superior performance compared to state‐of‐the‐art methods in talking head synthesis. We will release the source code upon publication.
  • Item
    HPSCAN: Human Perception‐Based Scattered Data Clustering
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Hartwig, S.; Onzenoodt, C. v.; Engel, D.; Hermosilla, P.; Ropinski, T.
    Cluster separation is a task typically tackled by widely used clustering techniques, such as k‐means or DBSCAN. However, these algorithms are based on non‐perceptual metrics, and our experiments demonstrate that their output does not reflect human cluster perception. To bridge the gap between human cluster perception and machine‐computed clusters, we propose HPSCAN, a learning strategy that operates directly on scattered data. To learn perceptual cluster separation on such data, we crowdsourced the labeling of bivariate (scatterplot) datasets to 384 human participants. We train our HPSCAN model on these human‐annotated data. Instead of rendering these data as scatterplot images, we used their x and y point coordinates as input to a modified PointNet++ architecture, enabling direct inference on point clouds. In this work, we provide details on how we collected our dataset, report statistics of the resulting annotations, and investigate the perceptual agreement of cluster separation for real‐world data. We also report the training and evaluation protocol for HPSCAN and introduce a novel metric that measures the accuracy between a clustering technique and a group of human annotators. We explore predicting point‐wise human agreement to detect ambiguities. Finally, we compare our approach to 10 established clustering techniques and demonstrate that HPSCAN is capable of generalizing to unseen and out‐of‐scope data.
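
To make the notion of agreement between an algorithmic clustering and a group of human annotators concrete, here is a small illustration that uses the off-the-shelf adjusted Rand index rather than the paper's own metric; the inputs, the choice of DBSCAN and the parameter values are illustrative assumptions.

import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import adjusted_rand_score

def mean_human_agreement(points, human_labels, eps=0.05, min_samples=5):
    """points: (n, 2) scatterplot coordinates; human_labels: list of length-n label arrays."""
    algo_labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    scores = [adjusted_rand_score(h, algo_labels) for h in human_labels]
    return float(np.mean(scores))   # 1.0 = perfect agreement, ~0.0 = chance level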
  • Item
    A Hybrid Lagrangian–Eulerian Formulation of Thin‐Shell Fracture
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025) Fan, L.; Chitalu, F. M.; Komura, T.
    The hybrid Lagrangian/Eulerian formulation of continuum shells is highly effective for producing challenging simulations of thin materials like cloth with bending resistance and frictional contact. However, existing formulations are restricted to materials that do not undergo tearing or fracture due to the difficulties associated with incorporating strong discontinuities of field quantities like velocity via basis enrichment while maintaining continuity or regularity. We propose an extension of this formulation to simulate dynamic tearing and fracturing of thin shells using Kirchhoff–Love continuum theory. Damage, which manifests as cracks or tears, is propagated by tracking the evolution of a time‐dependent phase‐field in the co‐dimensional manifold, where a moving least‐squares (MLS) approximation then captures the strong discontinuities of interpolated field quantities near the crack. Our approach is capable of simulating challenging scenarios of this tearing and fracture, all the while harnessing the existing benefits of the hybrid Lagrangian/Eulerian formulation to expand the domain of possible effects. The method is also amenable to user‐guided control, which serves to influence the propagation of cracks or tears such that they follow prescribed paths during simulation.
  • Item
    A Scalable System for Visual Analysis of Ocean Data
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025) Jain, Toshit; Singh, Upkar; Singh, Varun; Boda, Vijay Kumar; Hotz, Ingrid; Vadhiyar, Sathish S.; Vinayachandran, P. N.; Natarajan, Vijay
    Oceanographers rely on visual analysis to interpret model simulations, identify events and phenomena, and track dynamic ocean processes. The ever‐increasing resolution and complexity of ocean data, due to its dynamic nature and multivariate relationships, demand a scalable and adaptable visualization tool for interactive exploration. We introduce pyParaOcean, a scalable and interactive visualization system designed specifically for ocean data analysis. pyParaOcean offers specialized modules for common oceanographic analysis tasks, including eddy identification and salinity movement tracking. These modules seamlessly integrate with ParaView as filters, ensuring a user‐friendly and easy‐to‐use system while leveraging the parallelization capabilities of ParaView and a plethora of inbuilt general‐purpose visualization functionalities. The creation of an auxiliary dataset stored as a Cinema database helps address I/O and network bandwidth bottlenecks while supporting the generation of quick overview visualizations. We present a case study on the Bay of Bengal to demonstrate the utility of the system and scaling studies to evaluate its efficiency.
  • Item
    Single‐Shot Example Terrain Sketching by Graph Neural Networks
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025) Liu, Y.; Benes, B.
    Terrain generation is a challenging problem. Procedural modelling methods lack control, while machine learning methods often need large training datasets and struggle to preserve the topology information. We propose a method that generates a new terrain from a single image for training and a simple user sketch. Our single‐shot method preserves the sketch topology while generating diversified results. Our method is based on a graph neural network (GNN) and builds a detailed relation among the sketch‐extracted features, that is, ridges and valleys and their neighbouring area. By disentangling the influence from different sketches, our model generates visually realistic terrains following the user sketch while preserving the features from the real terrains. Experiments are conducted to show both qualitative and quantitative comparisons. The structural similarity index measure of our generated and real terrains is around 0.8 on average.
  • Item
    Learning Climbing Controllers for Physics‐Based Characters
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025) Kang, Kyungwon; Gu, Taehong; Kwon, Taesoo
    Despite the growing demand for capturing diverse motions, collecting climbing motion data remains challenging due to difficulties in tracking obscured markers and scanning climbing structures. Additionally, preparing varied routes further adds to the complexities of the data collection process. To address these challenges, this paper introduces a physics‐based climbing controller for synthesizing climbing motions. The proposed method consists of two learning stages. In the first stage, a hanging policy is trained to naturally grasp holds. This policy is then used to generate a dataset containing hold positions, postures, and grip states, forming favourable initial poses. In the second stage, a climbing policy is trained using this dataset to perform actual climbing movements. The episode begins in a state close to the reference climbing motion, enabling the exploration of more natural climbing style states. This policy enables the character to reach the target position while utilizing its limbs more evenly. The experiments demonstrate that the proposed method effectively identifies good climbing postures and enhances limb coordination across environments with varying slopes and hold patterns.
  • Item
    MetapathVis: Inspecting the Effect of Metapath in Heterogeneous Network Embedding via Visual Analytics
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025) Li, Quan; Tian, Yun; Wang, Xiyuan; Xie, Laixin; Lin, Dandan; Yi, Lingling; Ma, Xiaojuan
    In heterogeneous graphs (HGs), which offer richer network and semantic insights compared to homogeneous graphs, the metapath technique serves as an essential tool for data mining. This technique facilitates the specification of sequences of entity connections, elucidating the semantic composite relationships between various node types for a range of downstream tasks. Nevertheless, selecting the most appropriate metapath from a pool of candidates and assessing its impact present significant challenges. To address this issue, our study introduces MetapathVis, an interactive visual analytics system designed to assist machine learning (ML) practitioners in comprehensively understanding and comparing the effects of metapaths from multiple fine‐grained perspectives. MetapathVis allows for an in‐depth evaluation of various models generated with different metapaths, aligning HG network information at the individual level with model metrics. It also facilitates the tracking of aggregation processes associated with different metapaths. The effectiveness of our approach is validated through three case studies and a user study, with feedback from domain experts confirming that our system significantly aids ML practitioners in evaluating and comprehending the viability of different metapath designs.
  • Item
    A Texture‐Free Practical Model for Realistic Surface‐Based Rendering of Woven Fabrics
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025) Khattar, Apoorv; Zhu, Junqiu; Yan, Ling‐Qi; Montazeri, Zahra
    Rendering woven fabrics is challenging due to the complex micro geometry and anisotropic appearance. Conventional solutions either fully model every yarn/ply/fibre for high fidelity at a high computational cost, or ignore details, producing non‐realistic close‐up renderings. In this paper, we introduce a model that shares the advantages of both. Our model requires only binary patterns as input yet offers all the necessary micro‐level details by adding the yarn/ply/fibre implicitly. Moreover, we design a double‐layer representation to handle light transmission accurately and use a constant‐time approach to accurately and efficiently depict parallax and shadowing‐masking effects in tandem. We compare our model with curve‐based and surface‐based models on different patterns and under different lighting, and evaluate against photographs to ensure it captures the aforementioned realistic effects.
  • Item
    MANDALA—Visual Exploration of Anomalies in Industrial Multivariate Time Series Data
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025) Suschnigg, J.; Mutlu, B.; Koutroulis, G.; Hussain, H.; Schreck, T.
    The detection, description and understanding of anomalies in multivariate time series data are an important task in several industrial domains. Automated data analysis provides many tools and algorithms to detect anomalies, while visual interfaces enable domain experts to explore and analyze data interactively to gain insights using their expertise. Anomalies in multivariate time series can be diverse with respect to the dimensions, temporal occurrence and length within a dataset. Their detection and description depend on the analyst's domain, task and background knowledge. Therefore, anomaly analysis is often an underspecified problem. We propose a visual analytics tool called MANDALA (Multivariate Anomaly Detection and Exploration), which uses kernel density estimation to detect anomalies and provides users with visual means to explore and explain them. To assess our algorithm's effectiveness, we evaluate its ability to identify different types of anomalies using a synthetic dataset generated with the GutenTAG anomaly and time series generator. Our approach allows users to define normal data interactively first. Next, they can explore anomaly candidates, their related dimensions and their temporal scope. Our carefully designed visual analytics components include a tailored scatterplot matrix with semantic zooming features that visualize normal data through hexagonal binning plots and overlay candidate anomaly data as scatterplots. In addition, the system supports the analysis on a broader scope involving all dimensions simultaneously or on a smaller scope involving dimension pairs only. We define a taxonomy of important types of anomaly patterns, which can guide the interactive analysis process. The effectiveness of our system is demonstrated through a use case scenario on industrial data conducted with domain experts from the automotive domain and a user study utilizing a public dataset from the aviation domain.
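
The kernel density estimation step mentioned above can be illustrated with a minimal sketch in which observations whose estimated density falls below a quantile threshold become anomaly candidates. This shows the generic idea only, not MANDALA's actual pipeline; the variable names and the threshold value are assumptions.

import numpy as np
from scipy.stats import gaussian_kde

def kde_anomaly_candidates(samples, quantile=0.01):
    """samples: (n, d) array of multivariate time-series observations."""
    kde = gaussian_kde(samples.T)        # gaussian_kde expects shape (d, n)
    density = kde(samples.T)             # density estimate at every observation
    threshold = np.quantile(density, quantile)
    return np.where(density <= threshold)[0]   # indices of low-density candidates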
  • Item
    DeepFracture: A Generative Approach for Predicting Brittle Fractures with Neural Discrete Representation Learning
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025) Huang, Yuhang; Kanai, Takashi
    In the field of brittle fracture animation, generating realistic destruction animations using physics‐based simulation methods is computationally expensive. While techniques based on Voronoi diagrams or pre‐fractured patterns are effective for real‐time applications, they fail to incorporate collision conditions when determining fractured shapes during runtime. This paper introduces a novel learning‐based approach for predicting fractured shapes based on collision dynamics at runtime. Our approach seamlessly integrates realistic brittle fracture animations with rigid body simulations, utilising boundary element method (BEM) brittle fracture simulations to generate training data. To integrate collision scenarios and fractured shapes into a deep learning framework, we introduce generative geometric segmentation, distinct from both instance and semantic segmentation, to represent 3D fragment shapes. We propose an eight‐dimensional latent code to address the challenge of optimising multiple discrete fracture pattern targets that share similar continuous collision latent codes. This code will follow a discrete normal distribution corresponding to a specific fracture pattern within our latent impulse representation design. This adaptation enables the prediction of fractured shapes using neural discrete representation learning. Our experimental results show that our approach generates considerably more detailed brittle fractures than existing techniques, while the computational time is typically reduced compared to traditional simulation methods at comparable resolutions.
  • Item
    Immersive and Interactive Learning With eDIVE: A Solution for Creating Collaborative VR Education Experiences
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025) Brůža, Vojtěch; Šašinková, Alžběta; Šašinka, Čeněk; Stachoň, Zdeněk; Kozlíková, Barbora; Chmelík, Jiří
    Virtual reality (VR) technology has become increasingly popular in education as a tool for enhancing learning experiences and engagement. This paper addresses the lack of a suitable tool for creating multi‐user immersive educational content for virtual environments by introducing a novel solution called eDIVE. The solution is designed to facilitate the development of collaborative immersive educational VR experiences. Developed in close collaboration with psychologists and educators, it addresses specific functional needs identified by these professionals. eDIVE allows creators to extensively modify, expand or develop entirely new VR experiences. eDIVE ultimately makes collaborative VR education more accessible and inclusive for all stakeholders. Its utility is demonstrated through exemplary learning scenarios, developed in collaboration with experienced educators, and evaluated through real‐world user studies.
  • Item
    GeoCode: Interpretable Shape Programs
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025) Pearl, Ofek; Lang, Itai; Hu, Yuhua; Yeh, Raymond A.; Hanocka, Rana
    The task of crafting procedural programs capable of generating structurally valid 3D shapes easily and intuitively remains an elusive goal in computer vision and graphics. Within the graphics community, generating procedural 3D models has shifted to using node graph systems. They allow the artist to create complex shapes and animations through visual programming. As high‐level design tools, they have made procedural 3D modelling more accessible. However, crafting those node graphs demands expertise and training. We present GeoCode, a novel framework designed to extend an existing node graph system and significantly lower the bar for the creation of new procedural 3D shape programs. Our approach meticulously balances expressiveness and generalization for part‐based shapes. We propose a curated set of new geometric building blocks that are expressive and reusable across domains. We showcase three innovative and expressive programs developed through our technique and geometric building blocks. Our programs enforce intricate rules, empowering users to execute intuitive high‐level parameter edits that seamlessly propagate throughout the entire shape at a lower level while maintaining its validity. To evaluate the user‐friendliness of our geometric building blocks among non‐experts, we conduct a user study that demonstrates their ease of use and highlights their applicability across diverse domains. Empirical evidence shows the superior accuracy of GeoCode in inferring and recovering 3D shapes compared to an existing competitor. Furthermore, our method demonstrates superior expressiveness compared to alternatives that utilize coarse primitives. Notably, we illustrate the ability to execute controllable local and global shape manipulations. Our code, programs, datasets and Blender add‐on are available at .
  • Item
    Generalized Lipschitz Tracing of Implicit Surfaces
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Bán, Róbert; Valasek, Gábor
    We present a versatile and robust framework to render implicit surfaces defined by black‐box functions that only provide function value queries. We assume that the input function is locally Lipschitz continuous; however, we presume no prior knowledge of its Lipschitz constants. Our pre‐processing step generates a discrete acceleration structure, a Lipschitz field, that provides data to infer local and directional Lipschitz upper bounds. These bounds are used to compute safe step sizes along rays during rendering. The Lipschitz field is constructed by generating local polynomial approximations to the input function, then bounding the derivatives of the approximating polynomials. The accuracy of the approximation is controlled by the polynomial degree and the granularity of the spatial resolution used during fitting, which is independent of the resolution of the Lipschitz field. We demonstrate that our process can be implemented in a massively parallel way, enabling straightforward integration into interactive and real‐time modelling workflows. Since the construction only requires function value evaluations, the input surface may be represented either procedurally or as an arbitrarily filtered grid of function samples. We query the original implicit representation during ray tracing; as such, we preserve the geometric and topological details of the input as long as the Lipschitz field supplies conservative estimates. We demonstrate our method on both procedural and discrete implicit surfaces and compare its exact and approximate variants.
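    The safe-stepping rule described above can be sketched as follows: if L bounds the Lipschitz constant of f near the current point, then f cannot reach zero within a distance shorter than |f(p)|/L, so that distance is a safe step along the ray. The lipschitz_bound callback standing in for the paper's Lipschitz field is an assumed interface, not the authors' data structure.

```python
import numpy as np

def lipschitz_trace(f, origin, direction, lipschitz_bound,
                    t_max=100.0, eps=1e-4, max_steps=512):
    """Sphere-tracing-style march along the ray origin + t*direction.
    The step |f(p)| / L is safe because a function whose Lipschitz constant
    is bounded by L cannot reach zero within a shorter distance."""
    direction = direction / np.linalg.norm(direction)
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * direction
        value = f(p)
        if abs(value) < eps:
            return t                          # hit: surface reached within tolerance
        L = lipschitz_bound(p, direction)     # local/directional bound (assumed API)
        t += abs(value) / L
        if t > t_max:
            break
    return None                               # miss

# Example: unit sphere, whose signed distance has Lipschitz constant 1.
sphere = lambda p: np.linalg.norm(p) - 1.0
hit_t = lipschitz_trace(sphere, np.array([0.0, 0.0, -3.0]),
                        np.array([0.0, 0.0, 1.0]),
                        lipschitz_bound=lambda p, d: 1.0)
```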
  • Item
    A Particle‐Based Approach to Extract Dynamic 3D FTLE Ridge Geometry
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Stelter, Daniel; Wilde, Thomas; Rössl, Christian; Theisel, Holger
    Lagrangian coherent structures (LCS) are an important concept for the visualization of unsteady flows. They describe the boundaries of regions in which material transport stays mostly coherent over time, which helps in better understanding dynamical systems. One of the most common techniques for their computation is the extraction of ridges from the finite‐time Lyapunov exponent (FTLE) field. FTLE ridges are challenging to extract, both in terms of accuracy and performance, because they exhibit strong gradients of the underlying field, tend to come close to each other and are dynamic with respect to different time parameters. We present a new method for extracting FTLE ridges for series of integration times, which is able to show how coherent regions and their borders evolve over time. Our technique mainly builds on a particle system that samples the ridges uniformly. This system is highly optimized for the challenges of FTLE ridge extraction. Further, it is able to take advantage of the continuous evolution of the ridges, which makes their sampling for multiple integration times much faster. We test our method on multiple 3D datasets and compare it to the standard Marching Ridges technique. For the extraction examples, our method is 13 to over 300 times faster, suggesting a significant advantage.
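    For reference, the FTLE field whose ridges are extracted is commonly computed from the flow-map gradient as the logarithm of the square root of the largest eigenvalue of the right Cauchy-Green tensor, divided by the integration time. The sketch below is a generic finite-difference version of that definition; the flow_map callback and the step size are assumptions and are not part of the particle system proposed in the paper.

```python
import numpy as np

def ftle(flow_map, x, T, h=1e-3):
    """Finite-time Lyapunov exponent at a 3D point x for integration time T.
    flow_map(x, T) is assumed to return the advected position of x, e.g. from
    a numerical integrator of the velocity field."""
    # Finite-difference approximation of the 3x3 flow-map Jacobian.
    J = np.empty((3, 3))
    for j in range(3):
        dx = np.zeros(3)
        dx[j] = h
        J[:, j] = (flow_map(x + dx, T) - flow_map(x - dx, T)) / (2.0 * h)
    # Right Cauchy-Green tensor and its largest eigenvalue.
    C = J.T @ J
    lam_max = np.linalg.eigvalsh(C)[-1]
    return np.log(np.sqrt(lam_max)) / abs(T)
```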
  • Item
    Continuous Toolpath Optimization for Simultaneous Four‐Axis Subtractive Manufacturing
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Zhang, Zhenmin; Shi, Zihan; Zhong, Fanchao; Zhang, Kun; Zhang, Wenjing; Guo, Jianwei; Tu, Changhe; Zhao, Haisen
    Simultaneous four‐axis machining involves a cutter that moves in all degrees of freedom during carving. This strategy provides higher‐quality surface finishing compared to positional machining. However, it has not been well studied in research. In this study, we propose the first end‐to‐end computational framework to optimize the toolpath for fabricating complex models using simultaneous four‐axis subtractive manufacturing. In our technique, we first slice the input 3D model into uniformly distributed 2D layers. For each slicing layer, we perform an accessibility analysis for each intersected contour within this layer. Then, we proceed with over‐segmentation and a bottom‐up connecting process to generate a minimal number of fabricable segments. Finally, we propose post‐processing techniques to further optimize the tool direction and the transfer path between segments. Physical experiments on nine models demonstrate our significant improvements in both fabrication quality and efficiency, compared to the positional strategy and two simultaneous toolpaths generated by industry‐standard CAM systems.
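    The first stage of the pipeline, slicing the model into uniformly distributed 2D layers, can be illustrated with a generic plane-triangle intersection routine such as the sketch below; accessibility analysis, over-segmentation and segment connection are deliberately omitted, and all names are illustrative rather than taken from the paper.

```python
import numpy as np

def slice_mesh(vertices, triangles, layer_height):
    """Cut a triangle mesh by uniformly spaced horizontal planes and return,
    per layer, the unordered line segments of the intersection. Contour
    assembly and the later pipeline stages are not shown."""
    z_min, z_max = vertices[:, 2].min(), vertices[:, 2].max()
    layers = []
    z = z_min + 0.5 * layer_height
    while z < z_max:
        segments = []
        for tri in triangles:
            p = vertices[tri]                     # 3x3 array of triangle corners
            below = p[:, 2] < z
            if below.all() or (~below).all():
                continue                          # plane misses this triangle
            pts = []
            for i in range(3):
                a, b = p[i], p[(i + 1) % 3]
                if (a[2] < z) != (b[2] < z):      # edge crosses the slicing plane
                    t = (z - a[2]) / (b[2] - a[2])
                    pts.append(a + t * (b - a))
            if len(pts) == 2:
                segments.append((pts[0], pts[1]))
        layers.append(segments)
        z += layer_height
    return layers
```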
  • Item
    Survey of Inter‐Prediction Methods for Time‐Varying Mesh Compression
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025) Dvořák, Jan; Hácha, Filip; Arvanitis, Gerasimos; Podgorelec, David; Moustakas, Konstantinos; Váša, Libor
    Time‐varying meshes (TVMs), that is, mesh sequences with varying connectivity, are a greatly versatile representation of shapes evolving in time, as they allow a surface topology to change or details to appear or disappear at any time during the sequence. This, however, comes at the cost of large storage size. Since 2003, there have been attempts to compress such data efficiently. While the problem may seem trivial at first sight, considering the strong temporal coherence of shapes represented by the individual frames, it turns out that the varying connectivity, and the absence of implicit correspondence information that stems from it, make it rather difficult to exploit the redundancies present in the data. Therefore, efficient and general TVM compression is still considered an open problem. We describe and categorize existing approaches while pointing out the current challenges in the field and hint at some related techniques that might be helpful in addressing them. We also provide an overview of the reported performance of the discussed methods and a list of datasets that are publicly available for experiments. Finally, we discuss potential future trends in the field.
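    As a toy illustration of what inter-prediction means when connectivity changes between frames, the sketch below builds explicit correspondences by nearest-neighbour search and encodes only the residuals. This is a generic stand-in for the idea, not any specific method covered by the survey.

```python
import numpy as np
from scipy.spatial import cKDTree

def inter_predict(prev_vertices, curr_vertices):
    """Predict each vertex of the current frame by its nearest neighbour in
    the previous frame (an explicit substitute for the missing correspondence
    information) and return the residuals that would actually be encoded,
    typically after quantisation and entropy coding."""
    tree = cKDTree(prev_vertices)
    _, idx = tree.query(curr_vertices)
    prediction = prev_vertices[idx]
    residuals = curr_vertices - prediction
    return idx, residuals

# Example with a synthetic frame pair.
prev = np.random.default_rng(1).random((1000, 3))
curr = prev + 0.01                      # pretend the shape moved slightly
idx, res = inter_predict(prev, curr)
```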
  • Item
    Natural Language Generation for Visualizations: State of the Art, Challenges and Future Directions
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Hoque, E.; Islam, M. Saidul
    Natural language and visualization are two complementary modalities of human communication that play a crucial role in conveying information effectively. While visualizations help people discover trends, patterns and anomalies in data, natural language descriptions help explain these insights. Thus, combining text with visualizations is a prevalent technique for effectively delivering the core message of the data. Given the rise of natural language generation (NLG), there is a growing interest in automatically creating natural language descriptions for visualizations, which can be used as chart captions, for answering questions about charts or for telling data‐driven stories. In this survey, we systematically review the state of the art on NLG for visualizations and introduce a taxonomy of the problem. The NLG tasks fall within the domain of natural language interfaces (NLIs) for visualization, an area that has garnered significant attention from both the research community and industry. To narrow down the scope of the survey, we primarily concentrate on research works that focus on text generation for visualizations. To characterize the NLG problem and the design space of proposed solutions, we pose five Wh‐questions: why and how NLG tasks are performed for visualizations, what the task inputs and outputs are, and where and when the generated texts are integrated with visualizations. We categorize the solutions used in the surveyed papers based on these five Wh‐questions. Finally, we discuss the key challenges and potential avenues for future research in this domain.
  • Item
    The State of the Art in User‐Adaptive Visualizations
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Yanez, Fernando; Conati, Cristina; Ottley, Alvitta; Nobre, Carolina
    Research shows that user traits can modulate the use of visualization systems and have a measurable influence on users' accuracy, speed, and attention when performing visual analysis. This highlights the importance of user‐adaptive visualizations that can adapt themselves to the characteristics and preferences of the user. However, there are very few such visualization systems, as creating them requires broad knowledge from various sub‐domains of the visualization community. A user‐adaptive system must consider which user traits it adapts to, its adaptation logic and the types of interventions it supports. In this STAR, we survey a broad space of existing literature and consolidate it to structure the process of creating user‐adaptive visualizations into five components: capture Ⓐ input from the user and any relevant peripheral information; perform computational Ⓑ analysis on this input to construct a Ⓒ user model; and employ Ⓓ adaptation logic to identify when and how to introduce Ⓔ interventions. Our novel taxonomy provides a road map for work in this area, describing the rich space of current approaches and highlighting open areas for future work.
  • Item
    THGS: Lifelike Talking Human Avatar Synthesis From Monocular Video Via 3D Gaussian Splatting
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025) Chen, Chuang; Yu, Lingyun; Yang, Quanwei; Zheng, Aihua; Xie, Hongtao
    Despite the remarkable progress in 3D talking head generation, directly generating 3D talking human avatars still suffers from rigid facial expressions, distorted hand textures and out‐of‐sync lip movements. In this paper, we extend the speaker‐specific talking head generation task to 3D talking human avatars and propose a novel pipeline, THGS, that animates lifelike talking human avatars using 3D Gaussian Splatting (3DGS). Given speech audio, expression and body poses as input, THGS effectively overcomes the limitations of 3DGS human reconstruction methods in capturing expressive dynamics from a short monocular video. Firstly, we introduce a simple yet effective method for facial dynamics reconstruction, where subtle facial dynamics can be generated by linearly combining the static head model and expression blendshapes. Secondly, a module is proposed for lip‐synced mouth movement animation, building connections between speech audio and mouth Gaussian movements. Thirdly, we employ a strategy to optimize these parameters on the fly, which aligns hand movements and expressions better with the video input. Experimental results demonstrate that THGS can achieve high‐fidelity 3D talking human avatar animation at 150+ fps on a web‐based rendering system, meeting the requirements of real‐time applications. Our project page is at .
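    The facial-dynamics step described above, linearly combining a static head model with expression blendshapes, reduces to a weighted sum of per-vertex offsets. The sketch below shows that standard linear blendshape formulation; the array shapes, vertex count and weights are assumptions for illustration only.

```python
import numpy as np

def blend_head(base_vertices, blendshape_deltas, weights):
    """Linear blendshape model: the deformed head is the static base mesh plus
    a weighted sum of per-expression vertex offsets.
    Shapes: base (N, 3), deltas (K, N, 3), weights (K,)."""
    return base_vertices + np.tensordot(weights, blendshape_deltas, axes=1)

# Example with synthetic data (assumed vertex and blendshape counts).
rng = np.random.default_rng(0)
base = rng.random((5023, 3))
deltas = rng.random((10, 5023, 3)) * 0.01
head = blend_head(base, deltas, weights=np.full(10, 0.1))
```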
  • Item
    Constrained Spectral Uplifting for HDR Environment Maps
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025) Tódová, L.; Wilkie, A.
    Spectral representation of assets is an important precondition for achieving physical realism in rendering. However, defining assets by their spectral distribution is complicated and tedious. Therefore, it has become general practice to create RGB assets and convert them into their spectral counterparts prior to rendering. This process is called spectral uplifting. While a multitude of techniques focusing on reflectance uplifting exist, the current state of the art of uplifting emission for image‐based lighting consists of simply scaling reflectance uplifts. Although this is usable insofar as the obtained overall scene appearance is not unrealistic, the generated emission spectra are only metamers of the original illumination. This, in turn, can cause deviations from the expected appearance even if the rest of the scene corresponds to real‐world data. In a recent publication, we proposed a method capable of uplifting HDR environment maps based on spectral measurements of light sources similar to those present in the maps. To identify the illuminants, we employ an extensive set of emission measurements, and we combine the results with an existing reflectance uplifting method. In addition, we address the problem of environment map capture for the purposes of a spectral rendering pipeline, for which we propose a novel solution. We further extend this work with a detailed evaluation of the method, both in terms of improved colour error and performance.
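    To illustrate why naive uplifts are only metamers of the real illumination, the toy sketch below uplifts an RGB value by solving for basis-spectrum weights that reproduce the target tristimulus response: any spectrum satisfying that constraint matches the colour, yet may differ arbitrarily from the measured light-source spectrum. The response curves and basis functions are placeholders, not CIE data or the paper's measurements, and no non-negativity constraint is enforced.

```python
import numpy as np

wavelengths = np.linspace(400, 700, 31)               # nm, placeholder sampling
responses = np.stack([                                # placeholder RGB response curves
    np.exp(-((wavelengths - c) / 40.0) ** 2) for c in (600, 550, 450)])
basis = np.stack([                                    # placeholder smooth basis spectra
    np.ones_like(wavelengths),
    (wavelengths - 400) / 300.0,
    ((wavelengths - 400) / 300.0) ** 2])

def uplift(rgb):
    """Find basis weights w so that the spectrum basis.T @ w, integrated
    against the response curves, reproduces the target RGB exactly."""
    A = responses @ basis.T                           # 3x3 linear system
    w = np.linalg.solve(A, rgb)
    return basis.T @ w                                # spectrum sampled at `wavelengths`

spectrum = uplift(np.array([0.8, 0.6, 0.4]))
# The uplifted spectrum reproduces the colour, i.e. it is a metamer of the input.
assert np.allclose(responses @ spectrum, [0.8, 0.6, 0.4])
```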
  • Item
    Erratum to “Rational Bézier Guarding”
    (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025)