36-Issue 8

Permanent URI for this collection

Issue Information

Issue Information

Articles

Data‐Driven Shape Interpolation and Morphing Editing

Gao, Lin
Chen, Shu‐Yu
Lai, Yu‐Kun
Xia, Shihong
Articles

Stream Line–Based Pattern Search in Flows

Wang, Z.
Esturo, J. Martinez
Seidel, H.‐P.
Weinkauf, T.
Articles

DYVERSO: A Versatile Multi‐Phase Position‐Based Fluids Solution for VFX

Alduán, Iván
Tena, Angel
Otaduy, Miguel A.
Articles

Group Modeling: A Unified Velocity‐Based Approach

Ren, Z.
Charalambous, P.
Bruneau, J.
Peng, Q.
Pettré, J.
Articles

Virtual Inflation of the Cerebral Artery Wall for the Integrated Exploration of OCT and Histology Data

Glaßer, S.
Hoffmann, T.
Boese, A.
Voß, S.
Kalinski, T.
Skalej, M.
Preim, B.
Articles

Real‐Time Oil Painting on Mobile Hardware

Stuyck, Tuur
Da, Fang
Hadap, Sunil
Dutré, Philip
Articles

Integrated Structural–Architectural Design for Interactive Planning

Steiner, B.
Mousavian, E.
Saradj, F. Mehdizadeh
Wimmer, M.
Musialski, P.
Articles

Symmetry‐Aware Mesh Segmentation into Uniform Overlapping Patches

Dessein, A.
Smith, W. A. P.
Wilson, R. C.
Hancock, E. R.
Articles

EACS: Effective Avoidance Combination Strategy

Bruneau, J.
Pettré, J.
Articles

Point Cloud Denoising via Moving RPCA

Mattei, E.
Castrodad, A.
Articles

Extracting Sharp Features from RGB‐D Images

Cao, Y‐P.
Ju, T.
Xu, J.
Hu, S‐M.
Articles

Flow‐Based Temporal Selection for Interactive Volume Visualization

Frey, S.
Ertl, T.
Articles

Ray Accelerator: Efficient and Flexible Ray Tracing on a Heterogeneous Architecture

Barringer, R.
Andersson, M.
Akenine‐Möller, T.
Articles

Visualization of Biomolecular Structures: State of the Art Revisited

Kozlíková, B.
Krone, M.
Falk, M.
Lindow, N.
Baaden, M.
Baum, D.
Viola, I.
Parulek, J.
Hege, H.‐C.
Articles

Texton Noise

Galerne, B.
Leclaire, A.
Moisan, L.
Articles

A Bi‐Directional Procedural Model for Architectural Design

Hua, H.
Articles

Hierarchical Bucket Queuing for Fine‐Grained Priority Scheduling on the GPU

Kerbl, Bernhard
Kenzel, Michael
Schmalstieg, Dieter
Seidel, Hans‐Peter
Steinberger, Markus
Articles

Articulated‐Motion‐Aware Sparse Localized Decomposition

Wang, Yupan
Li, Guiqing
Zeng, Zhichao
He, Huayun
Articles

Visualization of Eye Tracking Data: A Taxonomy and Survey

Blascheck, T.
Kurzhals, K.
Raschke, M.
Burch, M.
Weiskopf, D.
Ertl, T.
Articles

Building a Large Database of Facial Movements for Deformation Model‐Based 3D Face Tracking

Sibbing, Dominik
Kobbelt, Leif
Articles

SketchSoup: Exploratory Ideation Using Design Sketches

Arora, R.
Darolia, I.
Namboodiri, V. P.
Singh, K.
Bousseau, A.
Articles

Category‐Specific Salient View Selection via Deep Convolutional Neural Networks

Kim, Seong‐heum
Tai, Yu‐Wing
Lee, Joon‐Young
Park, Jaesik
Kweon, In So
Articles

Ontology‐Based Representation and Modelling of Synthetic 3D Content: A State‐of‐the‐Art Review

Flotyński, Jakub
Walczak, Krzysztof
Articles

Primal‐Dual Optimization for Fluids

Inglis, T.
Eckert, M.‐L.
Gregson, J.
Thuerey, N.
Articles

Distributed Optimization Framework for Shadow Removal in Multi‐Projection Systems

Tsukamoto, J.
Iwai, D.
Kashima, K.
Articles

Convolutional Sparse Coding for Capturing High‐Speed Video Content

Serrano, Ana
Garces, Elena
Masia, Belen
Gutierrez, Diego
Articles

NeuroLens: Data‐Driven Camera Lens Simulation Using Neural Networks

Zheng, Quan
Zheng, Changwen
Articles

Tree Branch Level of Detail Models for Forest Navigation

Zhang, Xiaopeng
Bao, Guanbo
Meng, Weiliang
Jaeger, Marc
Li, Hongjun
Deussen, Oliver
Chen, Baoquan
Articles

Multi‐Variate Gaussian‐Based Inverse Kinematics

Huang, Jing
Wang, Qi
Fratarcangeli, Marco
Yan, Ke
Pelachaud, Catherine
Articles

Deformation Grammars: Hierarchical Constraint Preservation Under Deformation

Vimont, Ulysse
Rohmer, Damien
Begault, Antoine
Cani, Marie‐Paule
Articles

Detail‐Preserving Explicit Mesh Projection and Topology Matching for Particle‐Based Fluids

Dagenais, F.
Gagnon, J.
Paquette, E.
Articles

The State of the Art in Integrating Machine Learning into Visual Analytics

Endert, A.
Ribarsky, W.
Turkay, C.
Wong, B.L. William
Nabney, I.
Blanco, I. Díaz
Rossi, F.
Articles

Efficient and Reliable Self‐Collision Culling Using Unprojected Normal Cones

Wang, Tongtong
Liu, Zhihua
Tang, Min
Tong, Ruofeng
Manocha, Dinesh
Articles

Tunable Robustness: An Artificial Contact Strategy with Virtual Actuator Control for Balance

Silva, D. B.
Nunes, R. F.
Vidal, C. A.
Cavalcante‐Neto, J. B.
Kry, P. G.
Zordan, V. B.
Articles

Enhancing Urban Façades via LiDAR‐Based Sculpting

Peethambaran, Jiju
Wang, Ruisheng
Articles

Contracting Medial Surfaces Isotropically for Fast Extraction of Centred Curve Skeletons

Li, Lei
Wang, Wencheng
Articles

Hexahedral Meshing With Varying Element Sizes

Xu, Kaoji
Gao, Xifeng
Deng, Zhigang
Chen, Guoning
Articles

Real‐Time Solar Exposure Simulation in Complex Cities

Muñoz‐Pandiella, I.
Bosch, C.
Mérillou, N.
Pueyo, X.
Mérillou, S.
Articles

Partitioning Surfaces Into Quadrilateral Patches: A Survey

Campen, M.
Articles

Intrinsic Light Field Images

Garces, Elena
Echevarria, Jose I.
Zhang, Wen
Wu, Hongzhi
Zhou, Kun
Gutierrez, Diego
Articles

Noise Reduction on G‐Buffers for Monte Carlo Filtering

Moon, Bochang
Iglesias‐Guitian, Jose A.
McDonagh, Steven
Mitchell, Kenny
Articles

A Comprehensive Survey on Sampling‐Based Image Matting

Yao, Guilin
Zhao, Zhijie
Liu, Shaohui
Articles

Geometric Detection Algorithms for Cavities on Protein Surfaces in Molecular Graphics: A Survey

Simões, Tiago
Lopes, Daniel
Dias, Sérgio
Fernandes, Francisco
Pereira, João
Jorge, Joaquim
Bajaj, Chandrajit
Gomes, Abel
Articles

Approximating Planar Conformal Maps Using Regular Polygonal Meshes

Chen, Renjie
Gotsman, Craig
Articles

Regularized Pointwise Map Recovery from Functional Correspondence

Rodolà, E.
Moeller, M.
Cremers, D.
Articles

A Stochastic Film Grain Model for Resolution‐Independent Rendering

Newson, A.
Delon, J.
Galerne, B.
Reviewers

Reviewers



BibTeX (36-Issue 8)

@article{10.1111:cgf.13065,
  journal = {Computer Graphics Forum},
  title = {{Issue Information}},
  author = {},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13065}
}

@article{10.1111:cgf.12991,
  journal = {Computer Graphics Forum},
  title = {{Data‐Driven Shape Interpolation and Morphing Editing}},
  author = {Gao, Lin and Chen, Shu‐Yu and Lai, Yu‐Kun and Xia, Shihong},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12991}
}

@article{10.1111:cgf.12990,
  journal = {Computer Graphics Forum},
  title = {{Stream Line–Based Pattern Search in Flows}},
  author = {Wang, Z. and Esturo, J. Martinez and Seidel, H.‐P. and Weinkauf, T.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12990}
}

@article{10.1111:cgf.12992,
  journal = {Computer Graphics Forum},
  title = {{DYVERSO: A Versatile Multi‐Phase Position‐Based Fluids Solution for VFX}},
  author = {Alduán, Iván and Tena, Angel and Otaduy, Miguel A.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12992}
}

@article{10.1111:cgf.12993,
  journal = {Computer Graphics Forum},
  title = {{Group Modeling: A Unified Velocity‐Based Approach}},
  author = {Ren, Z. and Charalambous, P. and Bruneau, J. and Peng, Q. and Pettré, J.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12993}
}

@article{10.1111:cgf.12994,
  journal = {Computer Graphics Forum},
  title = {{Virtual Inflation of the Cerebral Artery Wall for the Integrated Exploration of OCT and Histology Data}},
  author = {Glaßer, S. and Hoffmann, T. and Boese, A. and Voß, S. and Kalinski, T. and Skalej, M. and Preim, B.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12994}
}

@article{10.1111:cgf.12995,
  journal = {Computer Graphics Forum},
  title = {{Real‐Time Oil Painting on Mobile Hardware}},
  author = {Stuyck, Tuur and Da, Fang and Hadap, Sunil and Dutré, Philip},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12995}
}

@article{10.1111:cgf.12996,
  journal = {Computer Graphics Forum},
  title = {{Integrated Structural–Architectural Design for Interactive Planning}},
  author = {Steiner, B. and Mousavian, E. and Saradj, F. Mehdizadeh and Wimmer, M. and Musialski, P.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12996}
}

@article{10.1111:cgf.12997,
  journal = {Computer Graphics Forum},
  title = {{Symmetry‐Aware Mesh Segmentation into Uniform Overlapping Patches}},
  author = {Dessein, A. and Smith, W. A. P. and Wilson, R. C. and Hancock, E. R.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12997}
}

@article{10.1111:cgf.13066,
  journal = {Computer Graphics Forum},
  title = {{EACS: Effective Avoidance Combination Strategy}},
  author = {Bruneau, J. and Pettré, J.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13066}
}

@article{10.1111:cgf.13068,
  journal = {Computer Graphics Forum},
  title = {{Point Cloud Denoising via Moving RPCA}},
  author = {Mattei, E. and Castrodad, A.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13068}
}

@article{10.1111:cgf.13069,
  journal = {Computer Graphics Forum},
  title = {{Extracting Sharp Features from RGB‐D Images}},
  author = {Cao, Y‐P. and Ju, T. and Xu, J. and Hu, S‐M.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13069}
}

@article{10.1111:cgf.13070,
  journal = {Computer Graphics Forum},
  title = {{Flow‐Based Temporal Selection for Interactive Volume Visualization}},
  author = {Frey, S. and Ertl, T.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13070}
}

@article{10.1111:cgf.13071,
  journal = {Computer Graphics Forum},
  title = {{Ray Accelerator: Efficient and Flexible Ray Tracing on a Heterogeneous Architecture}},
  author = {Barringer, R. and Andersson, M. and Akenine‐Möller, T.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13071}
}

@article{10.1111:cgf.13072,
  journal = {Computer Graphics Forum},
  title = {{Visualization of Biomolecular Structures: State of the Art Revisited}},
  author = {Kozlíková, B. and Krone, M. and Falk, M. and Lindow, N. and Baaden, M. and Baum, D. and Viola, I. and Parulek, J. and Hege, H.‐C.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13072}
}

@article{10.1111:cgf.13073,
  journal = {Computer Graphics Forum},
  title = {{Texton Noise}},
  author = {Galerne, B. and Leclaire, A. and Moisan, L.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13073}
}

@article{10.1111:cgf.13074,
  journal = {Computer Graphics Forum},
  title = {{A Bi‐Directional Procedural Model for Architectural Design}},
  author = {Hua, H.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13074}
}

@article{10.1111:cgf.13075,
  journal = {Computer Graphics Forum},
  title = {{Hierarchical Bucket Queuing for Fine‐Grained Priority Scheduling on the GPU}},
  author = {Kerbl, Bernhard and Kenzel, Michael and Schmalstieg, Dieter and Seidel, Hans‐Peter and Steinberger, Markus},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13075}
}

@article{10.1111:cgf.13076,
  journal = {Computer Graphics Forum},
  title = {{Articulated‐Motion‐Aware Sparse Localized Decomposition}},
  author = {Wang, Yupan and Li, Guiqing and Zeng, Zhichao and He, Huayun},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13076}
}

@article{10.1111:cgf.13079,
  journal = {Computer Graphics Forum},
  title = {{Visualization of Eye Tracking Data: A Taxonomy and Survey}},
  author = {Blascheck, T. and Kurzhals, K. and Raschke, M. and Burch, M. and Weiskopf, D. and Ertl, T.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13079}
}

@article{10.1111:cgf.13080,
  journal = {Computer Graphics Forum},
  title = {{Building a Large Database of Facial Movements for Deformation Model‐Based 3D Face Tracking}},
  author = {Sibbing, Dominik and Kobbelt, Leif},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13080}
}

@article{10.1111:cgf.13081,
  journal = {Computer Graphics Forum},
  title = {{SketchSoup: Exploratory Ideation Using Design Sketches}},
  author = {Arora, R. and Darolia, I. and Namboodiri, V. P. and Singh, K. and Bousseau, A.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13081}
}

@article{10.1111:cgf.13082,
  journal = {Computer Graphics Forum},
  title = {{Category‐Specific Salient View Selection via Deep Convolutional Neural Networks}},
  author = {Kim, Seong‐heum and Tai, Yu‐Wing and Lee, Joon‐Young and Park, Jaesik and Kweon, In So},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13082}
}

@article{10.1111:cgf.13083,
  journal = {Computer Graphics Forum},
  title = {{Ontology‐Based Representation and Modelling of Synthetic 3D Content: A State‐of‐the‐Art Review}},
  author = {Flotyński, Jakub and Walczak, Krzysztof},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13083}
}

@article{10.1111:cgf.13084,
  journal = {Computer Graphics Forum},
  title = {{Primal‐Dual Optimization for Fluids}},
  author = {Inglis, T. and Eckert, M.‐L. and Gregson, J. and Thuerey, N.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13084}
}

@article{10.1111:cgf.13085,
  journal = {Computer Graphics Forum},
  title = {{Distributed Optimization Framework for Shadow Removal in Multi‐Projection Systems}},
  author = {Tsukamoto, J. and Iwai, D. and Kashima, K.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13085}
}

@article{10.1111:cgf.13086,
  journal = {Computer Graphics Forum},
  title = {{Convolutional Sparse Coding for Capturing High‐Speed Video Content}},
  author = {Serrano, Ana and Garces, Elena and Masia, Belen and Gutierrez, Diego},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13086}
}

@article{10.1111:cgf.13087,
  journal = {Computer Graphics Forum},
  title = {{NeuroLens: Data‐Driven Camera Lens Simulation Using Neural Networks}},
  author = {Zheng, Quan and Zheng, Changwen},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13087}
}

@article{10.1111:cgf.13088,
  journal = {Computer Graphics Forum},
  title = {{Tree Branch Level of Detail Models for Forest Navigation}},
  author = {Zhang, Xiaopeng and Bao, Guanbo and Meng, Weiliang and Jaeger, Marc and Li, Hongjun and Deussen, Oliver and Chen, Baoquan},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13088}
}

@article{10.1111:cgf.13089,
  journal = {Computer Graphics Forum},
  title = {{Multi‐Variate Gaussian‐Based Inverse Kinematics}},
  author = {Huang, Jing and Wang, Qi and Fratarcangeli, Marco and Yan, Ke and Pelachaud, Catherine},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13089}
}

@article{10.1111:cgf.13090,
  journal = {Computer Graphics Forum},
  title = {{Deformation Grammars: Hierarchical Constraint Preservation Under Deformation}},
  author = {Vimont, Ulysse and Rohmer, Damien and Begault, Antoine and Cani, Marie‐Paule},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13090}
}

@article{10.1111:cgf.13091,
  journal = {Computer Graphics Forum},
  title = {{Detail‐Preserving Explicit Mesh Projection and Topology Matching for Particle‐Based Fluids}},
  author = {Dagenais, F. and Gagnon, J. and Paquette, E.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13091}
}

@article{10.1111:cgf.13092,
  journal = {Computer Graphics Forum},
  title = {{The State of the Art in Integrating Machine Learning into Visual Analytics}},
  author = {Endert, A. and Ribarsky, W. and Turkay, C. and Wong, B.L. William and Nabney, I. and Blanco, I. Díaz and Rossi, F.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13092}
}

@article{10.1111:cgf.13095,
  journal = {Computer Graphics Forum},
  title = {{Efficient and Reliable Self‐Collision Culling Using Unprojected Normal Cones}},
  author = {Wang, Tongtong and Liu, Zhihua and Tang, Min and Tong, Ruofeng and Manocha, Dinesh},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13095}
}

@article{10.1111:cgf.13096,
  journal = {Computer Graphics Forum},
  title = {{Tunable Robustness: An Artificial Contact Strategy with Virtual Actuator Control for Balance}},
  author = {Silva, D. B. and Nunes, R. F. and Vidal, C. A. and Cavalcante‐Neto, J. B. and Kry, P. G. and Zordan, V. B.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13096}
}

@article{10.1111:cgf.13097,
  journal = {Computer Graphics Forum},
  title = {{Enhancing Urban Façades via LiDAR‐Based Sculpting}},
  author = {Peethambaran, Jiju and Wang, Ruisheng},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13097}
}

@article{10.1111:cgf.13098,
  journal = {Computer Graphics Forum},
  title = {{Contracting Medial Surfaces Isotropically for Fast Extraction of Centred Curve Skeletons}},
  author = {Li, Lei and Wang, Wencheng},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13098}
}

@article{10.1111:cgf.13100,
  journal = {Computer Graphics Forum},
  title = {{Hexahedral Meshing With Varying Element Sizes}},
  author = {Xu, Kaoji and Gao, Xifeng and Deng, Zhigang and Chen, Guoning},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13100}
}

@article{10.1111:cgf.13152,
  journal = {Computer Graphics Forum},
  title = {{Real‐Time Solar Exposure Simulation in Complex Cities}},
  author = {Muñoz‐Pandiella, I. and Bosch, C. and Mérillou, N. and Pueyo, X. and Mérillou, S.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13152}
}

@article{10.1111:cgf.13153,
  journal = {Computer Graphics Forum},
  title = {{Partitioning Surfaces Into Quadrilateral Patches: A Survey}},
  author = {Campen, M.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13153}
}

@article{10.1111:cgf.13154,
  journal = {Computer Graphics Forum},
  title = {{Intrinsic Light Field Images}},
  author = {Garces, Elena and Echevarria, Jose I. and Zhang, Wen and Wu, Hongzhi and Zhou, Kun and Gutierrez, Diego},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13154}
}

@article{10.1111:cgf.13155,
  journal = {Computer Graphics Forum},
  title = {{Noise Reduction on G‐Buffers for Monte Carlo Filtering}},
  author = {Moon, Bochang and Iglesias‐Guitian, Jose A. and McDonagh, Steven and Mitchell, Kenny},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13155}
}

@article{10.1111:cgf.13156,
  journal = {Computer Graphics Forum},
  title = {{A Comprehensive Survey on Sampling‐Based Image Matting}},
  author = {Yao, Guilin and Zhao, Zhijie and Liu, Shaohui},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13156}
}

@article{10.1111:cgf.13158,
  journal = {Computer Graphics Forum},
  title = {{Geometric Detection Algorithms for Cavities on Protein Surfaces in Molecular Graphics: A Survey}},
  author = {Simões, Tiago and Lopes, Daniel and Dias, Sérgio and Fernandes, Francisco and Pereira, João and Jorge, Joaquim and Bajaj, Chandrajit and Gomes, Abel},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13158}
}

@article{10.1111:cgf.13157,
  journal = {Computer Graphics Forum},
  title = {{Approximating Planar Conformal Maps Using Regular Polygonal Meshes}},
  author = {Chen, Renjie and Gotsman, Craig},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13157}
}

@article{10.1111:cgf.13160,
  journal = {Computer Graphics Forum},
  title = {{Regularized Pointwise Map Recovery from Functional Correspondence}},
  author = {Rodolà, E. and Moeller, M. and Cremers, D.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13160}
}

@article{10.1111:cgf.13159,
  journal = {Computer Graphics Forum},
  title = {{A Stochastic Film Grain Model for Resolution‐Independent Rendering}},
  author = {Newson, A. and Delon, J. and Galerne, B.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13159}
}

@article{10.1111:cgf.13318,
  journal = {Computer Graphics Forum},
  title = {{Reviewers}},
  author = {},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13318}
}

Browse

Recent Submissions

Now showing 1 - 48 of 48
  • Item
    Issue Information
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Chen, Min and Zhang, Hao (Richard)
  • Item
    Data‐Driven Shape Interpolation and Morphing Editing
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Gao, Lin; Chen, Shu‐Yu; Lai, Yu‐Kun; Xia, Shihong; Chen, Min and Zhang, Hao (Richard)
    Shape interpolation has many applications in computer graphics such as morphing for computer animation. In this paper, we propose a novel data‐driven mesh interpolation method. We adapt patch‐based linear rotational invariant coordinates to effectively represent deformations of models in a shape collection, and utilize this information to guide the synthesis of interpolated shapes. Unlike previous data‐driven approaches, we use a rotation/translation invariant representation which defines the plausible deformations in a global continuous space. By effectively exploiting the knowledge in the shape space, our method produces realistic interpolation results at interactive rates, outperforming state‐of‐the‐art methods for challenging cases. We further propose a novel approach to interactive editing of shape morphing according to the shape distribution. The user can explore the morphing path and select example models intuitively and adjust the path with simple interactions to edit the morphing sequences. This provides a useful tool to allow users to generate desired morphing with little effort. We demonstrate the effectiveness of our approach using various examples.
  • Item
    Stream Line–Based Pattern Search in Flows
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Wang, Z.; Esturo, J. Martinez; Seidel, H.‐P.; Weinkauf, T.; Chen, Min and Zhang, Hao (Richard)
    We propose a method that allows users to define flow features in form of patterns represented as sparse sets of stream line segments. Our approach finds similar occurrences in the same or other time steps. Related approaches define patterns using dense, local stencils or support only single segments. Our patterns are defined sparsely and can have a significant extent, i.e., they are integration‐based and not local. This allows for a greater flexibility in defining features of interest. Similarity is measured using intrinsic curve properties only, which enables invariance to location, orientation, and scale. Our method starts with splitting stream lines using globally consistent segmentation criteria. It strives to maintain the visually apparent features of the flow as a collection of stream line segments. Most importantly, it provides similar segmentations for similar flow structures. For user‐defined patterns of curve segments, our algorithm finds similar ones that are invariant to similarity transformations. We showcase the utility of our method using different 2D and 3D flow fields.
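    The invariance to location, orientation and scale mentioned above can be pictured with a purely intrinsic curve descriptor. The sketch below is only an illustration, not the paper's actual curve measures or segmentation; all function names are made up for this example. It resamples a stream line segment by arc length and compares turning-angle sequences, which are unchanged by translation, rotation and uniform scaling (torsion of 3D curves is ignored here).

        import numpy as np

        def resample_polyline(points, n=32):
            # Resample a 2D or 3D polyline to n points equally spaced in arc length.
            points = np.asarray(points, dtype=float)
            seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
            s = np.concatenate([[0.0], np.cumsum(seg)])
            t = np.linspace(0.0, s[-1], n)
            return np.column_stack([np.interp(t, s, points[:, d]) for d in range(points.shape[1])])

        def turning_angle_descriptor(points, n=32):
            # Intrinsic descriptor: turning angles along the resampled curve.
            p = resample_polyline(points, n)
            d = np.diff(p, axis=0)
            d /= np.linalg.norm(d, axis=1, keepdims=True)
            cosang = np.clip(np.einsum('ij,ij->i', d[:-1], d[1:]), -1.0, 1.0)
            return np.arccos(cosang)

        def segment_distance(a, b, n=32):
            # Smaller values mean the two stream line segments have a similar shape.
            return np.linalg.norm(turning_angle_descriptor(a, n) - turning_angle_descriptor(b, n))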
  • Item
    DYVERSO: A Versatile Multi‐Phase Position‐Based Fluids Solution for VFX
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Alduán, Iván; Tena, Angel; Otaduy, Miguel A.; Chen, Min and Zhang, Hao (Richard)
    Many impressive fluid simulation methods have been presented in research papers before. These papers typically focus on demonstrating particular innovative features, but they do not meet in a comprehensive manner the production demands of actual VFX pipelines. VFX artists seek methods that are flexible, efficient, robust and scalable, and these goals often conflict with each other. In this paper, we present a multi‐phase particle‐based fluid simulation framework, based on the well‐known Position‐Based Fluids (PBF) method, designed to address VFX production demands. Our simulation framework handles multi‐phase interactions robustly thanks to a modified constraint formulation for density contrast PBF. It also supports the interaction of fluids sampled at different resolutions. We put special care into data structure design and implementation details. Our framework highlights cache‐efficient GPU‐friendly data structures, an improved spatial voxelization technique based on Z‐index sorting, tuned‐up simulation algorithms and two‐way‐coupled collision handling based on VDB fields. Altogether, our fluid simulation framework empowers artists with the efficiency, scalability and versatility needed for simulating very diverse scenes and effects.
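    For readers unfamiliar with the underlying solver, the following minimal sketch shows the standard single‐phase Position‐Based Fluids density constraint that DYVERSO builds on; the paper's density‐contrast multi‐phase constraint, resolution coupling and GPU data structures are not reproduced here. The neighbour lists are assumed to be precomputed, and the helper names are invented for this sketch.

        import numpy as np

        def poly6(r, h):
            # Poly6 smoothing kernel (3D) for particle distance r <= h.
            return (315.0 / (64.0 * np.pi * h**9)) * (h*h - r*r)**3 if r <= h else 0.0

        def spiky_grad(rvec, h):
            # Gradient of the spiky kernel (3D), used for the constraint gradients.
            r = np.linalg.norm(rvec)
            if r < 1e-12 or r > h:
                return np.zeros(3)
            return -45.0 / (np.pi * h**6) * (h - r)**2 * (rvec / r)

        def pbf_lambdas(pos, neighbors, mass, rho0, h, eps=100.0):
            # Lagrange multipliers for the per-particle density constraint C_i = rho_i / rho0 - 1.
            # Positions are then corrected with dp_i = (1/rho0) * sum_j (lam_i + lam_j) * spiky_grad(p_i - p_j, h).
            lam = np.zeros(len(pos))
            for i, nbrs in enumerate(neighbors):
                rho = sum(mass * poly6(np.linalg.norm(pos[i] - pos[j]), h) for j in nbrs)
                C = rho / rho0 - 1.0
                grad_i, sum_grad2 = np.zeros(3), 0.0
                for j in nbrs:
                    if j == i:
                        continue
                    g = spiky_grad(pos[i] - pos[j], h) / rho0
                    grad_i += g
                    sum_grad2 += g @ g
                lam[i] = -C / (sum_grad2 + grad_i @ grad_i + eps)
            return lam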
  • Item
    Group Modeling: A Unified Velocity‐Based Approach
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Ren, Z.; Charalambous, P.; Bruneau, J.; Peng, Q.; Pettré, J.; Chen, Min and Zhang, Hao (Richard)
    Crowd simulators are commonly used to populate movie or game scenes in the entertainment industry. Even though it is crucial to consider the presence of groups for the believability of a virtual crowd, most crowd simulations only take into account individual characters or a limited set of group behaviors. We introduce a unified solution that allows for simulations of crowds that have diverse group properties such as social groups, marches, tourists and guides, etc. We extend the Velocity Obstacle approach for agent‐based crowd simulations by introducing Velocity Connection: the set of velocities that keep agents moving together while avoiding collisions and achieving goals. We demonstrate our approach to be robust, controllable, and able to cover a large set of group behaviors.
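    The Velocity Obstacle idea the paper extends reduces to a time‐to‐collision test on candidate velocities. The sketch below shows only that standard admissibility test for two disc agents; the paper's Velocity Connection, which additionally constrains velocities so that group members stay together, is not reproduced, and the function names are invented for this example.

        import numpy as np

        def time_to_collision(p_rel, v_rel, r_sum):
            # First time two discs (combined radius r_sum) touch, given relative position
            # p_rel = p_other - p_self and relative velocity v_rel = v_self - v_other.
            a = np.dot(v_rel, v_rel)
            b = -2.0 * np.dot(p_rel, v_rel)
            c = np.dot(p_rel, p_rel) - r_sum**2
            if c <= 0.0:
                return 0.0                      # already overlapping
            disc = b * b - 4.0 * a * c
            if a < 1e-12 or disc < 0.0:
                return np.inf                   # paths never intersect
            t = (-b - np.sqrt(disc)) / (2.0 * a)
            return t if t >= 0.0 else np.inf

        def in_velocity_obstacle(candidate_v, p_self, v_other, p_other, r_sum, horizon=5.0):
            # A candidate velocity is inadmissible if it leads to a collision within the horizon.
            return time_to_collision(p_other - p_self, candidate_v - v_other, r_sum) < horizon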
  • Item
    Virtual Inflation of the Cerebral Artery Wall for the Integrated Exploration of OCT and Histology Data
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Glaßer, S.; Hoffmann, T.; Boese, A.; Voß, S.; Kalinski, T.; Skalej, M.; Preim, B.; Chen, Min and Zhang, Hao (Richard)
    Intravascular imaging provides new insights into the condition of vessel walls. This is crucial for cerebrovascular diseases including stroke and cerebral aneurysms, where it may present an important factor for indication of therapy. In this work, we provide new information about cerebral artery walls by combining ex vivo optical coherence tomography (OCT) imaging with histology data sets. To overcome the obstacles of deflated and collapsed vessels due to the missing blood pressure, the lack of co‐alignment as well as the geometrical shape deformations due to catheter probing, we developed the new image processing method of virtual inflation. We locally sample the vessel wall thickness based on the (deflated) vessel lumen border instead of the vessel's centerline. Our method is embedded in a multi‐view framework where correspondences between OCT and histology can be highlighted via brushing and linking, yielding OCT signal characteristics of the cerebral artery wall and its pathologies. Finally, we enrich the data views with a hierarchical clustering representation which is linked via virtual inflation and further supports the deduction of vessel wall pathologies.
  • Item
    Real‐Time Oil Painting on Mobile Hardware
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Stuyck, Tuur; Da, Fang; Hadap, Sunil; Dutré, Philip; Chen, Min and Zhang, Hao (Richard)
    This paper presents a realistic digital oil painting system, specifically targeted at real‐time performance on highly resource‐constrained portable hardware such as tablets and iPads. To effectively use the limited computing power, we develop an efficient adaptation of the shallow water equations that models all the characteristic properties of oil paint. The pigments are stored in a multi‐layered structure to model the peculiar nature of pigment mixing in oil paint. The user experience ranges from thick shape‐retaining strokes to runny diluted paint that reacts naturally to the gravity set by tablet orientation. Finally, the paint is rendered in real time using a combination of carefully chosen efficient rendering techniques. The virtual lighting adapts to the tablet orientation, or alternatively, the front‐facing camera captures the lighting environment, which leads to a truly immersive user experience. Our proposed features are evaluated via a user study. In our experience, our system enables artists to quickly try out ideas and compositions anywhere when inspiration strikes, in a truly ubiquitous way. They do not need to carry expensive and messy oil paint supplies.
  • Item
    Integrated Structural–Architectural Design for Interactive Planning
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Steiner, B.; Mousavian, E.; Saradj, F. Mehdizadeh; Wimmer, M.; Musialski, P.; Chen, Min and Zhang, Hao (Richard)
    Traditionally, building floor plans are designed by architects with their usability, functionality and architectural aesthetics in mind; however, the structural properties of the distribution of load‐bearing walls and columns are usually not taken into account at this stage. In this paper, we propose a novel approach for the design of architectural floor plans by integrating structural layout analysis directly into the planning process. In order to achieve this, we introduce a planning tool which interactively enforces checks for structural stability of the current design, and which on demand proposes how to stabilize it if necessary. Technically, our solution contains an interactive architectural modelling framework as well as a constrained optimization module, both based on the respective architectural rules. Using our tool, an architect can already predict at a very early planning stage which designs are structurally sound, so that later changes for stability reasons can be prevented. We compare manually computed solutions with optimal results of our proposed automated design process in order to show how much our proposed system can help architects to improve the process of laying out structural models optimally.
  • Item
    Symmetry‐Aware Mesh Segmentation into Uniform Overlapping Patches
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Dessein, A.; Smith, W. A. P.; Wilson, R. C.; Hancock, E. R.; Chen, Min and Zhang, Hao (Richard)
    We present intrinsic methods to address the fundamental problem of segmenting a mesh into a specified number of patches with a uniform size and a controllable overlap. Although never addressed in the literature, such a segmentation is useful for a wide range of processing operations where patches represent local regions and overlaps regularize solutions in neighbour patches. Further, we propose a symmetry‐aware distance measure and symmetric modification to furthest‐point sampling, so that our methods can operate on semantically symmetric meshes. We introduce quantitative measures of patch size uniformity and symmetry, and show that our segmentation outperforms state‐of‐the‐art alternatives in experiments on a well‐known dataset. We also use our segmentation in illustrative applications to texture stitching and synthesis where we improve results over state‐of‐the‐art approaches.
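    As a point of reference for the furthest‐point sampling that the paper modifies, here is the standard (non‐symmetry‐aware) farthest‐point sampling loop; the Euclidean metric merely stands in for the paper's symmetry‐aware, geodesic‐style distance, and the function name is invented for this sketch.

        import numpy as np

        def farthest_point_sampling(points, k, dist=None):
            # Pick k seed indices that are mutually far apart.
            # `dist(i, j)` may be any metric; Euclidean distance is used as a stand-in.
            points = np.asarray(points, dtype=float)
            if dist is None:
                dist = lambda i, j: np.linalg.norm(points[i] - points[j])
            seeds = [0]
            d_min = np.array([dist(0, j) for j in range(len(points))])
            for _ in range(1, k):
                nxt = int(np.argmax(d_min))          # point farthest from all current seeds
                seeds.append(nxt)
                d_min = np.minimum(d_min, [dist(nxt, j) for j in range(len(points))])
            return seeds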
  • Item
    EACS: Effective Avoidance Combination Strategy
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Bruneau, J.; Pettré, J.; Chen, Min and Zhang, Hao (Richard)
    When navigating in crowds, humans are able to move efficiently between people. They look ahead to know which path would reduce the complexity of their interactions with others. Current navigation systems for virtual agents consider long‐term planning to find a path in the static environment and short‐term reactions to avoid collisions with close obstacles. Recently some mid‐term considerations have been added to avoid high density areas. However, there is no mid‐term planning among static and dynamic obstacles that would enable the agent to look ahead and avoid difficult paths or find easy ones as humans do. In this paper, we present a system for such mid‐term planning. This system is added to the navigation process between pathfinding and local avoidance to improve the navigation of virtual agents. We show the capacities of such a system using several case studies. Finally we use an energy criterion to compare trajectories computed with and without the mid‐term planning.
  • Item
    Point Cloud Denoising via Moving RPCA
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Mattei, E.; Castrodad, A.; Chen, Min and Zhang, Hao (Richard)
    We present an algorithm for the restoration of noisy point cloud data, termed Moving Robust Principal Components Analysis (MRPCA). We model the point cloud as a collection of overlapping two‐dimensional subspaces, and propose a model that encourages collaboration between overlapping neighbourhoods. Similar to state‐of‐the‐art sparse modelling‐based image denoising, the estimated point positions are computed by local averaging. In addition, the proposed approach models grossly corrupted observations explicitly, does not require oriented normals, and takes into account both local and global structure. Sharp features are preserved via a weighted ℓ1 minimization, where the weights measure the similarity between normal vectors in a local neighbourhood. The proposed algorithm is compared against existing point cloud denoising methods, obtaining competitive results.
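    The weighted ℓ1 term can be pictured with its two basic building blocks: a weight derived from normal similarity and the corresponding weighted soft‐thresholding (proximal) step. This is only illustrative of weighted ℓ1 minimization in general, not the MRPCA optimization itself; the function names and the Gaussian weighting are assumptions made for this sketch.

        import numpy as np

        def normal_similarity_weights(normals, i, nbrs, sigma=0.35):
            # Weight near 1 for neighbours whose unit normals agree with n_i, near 0 otherwise.
            n_i = normals[i]
            dissim = 1.0 - normals[nbrs] @ n_i        # 0 when aligned, up to 2 when opposite
            return np.exp(-(dissim / sigma) ** 2)

        def weighted_soft_threshold(x, lam, w):
            # Proximal operator of the weighted l1 penalty sum_j w_j * lam * |x_j|.
            return np.sign(x) * np.maximum(np.abs(x) - lam * w, 0.0)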
  • Item
    Extracting Sharp Features from RGB‐D Images
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Cao, Y‐P.; Ju, T.; Xu, J.; Hu, S‐M.; Chen, Min and Zhang, Hao (Richard)
    Sharp edges are important shape features and their extraction has been extensively studied both on point clouds and surfaces. We consider the problem of extracting sharp edges from a sparse set of colour‐and‐depth (RGB‐D) images. The noise‐ridden depth measurements are challenging for existing feature extraction methods that work solely in the geometric domain (e.g. points or meshes). By utilizing both colour and depth information, we propose a novel feature extraction method that produces much cleaner and more coherent feature lines. We make two technical contributions. First, we show that intensity edges can augment the depth map to improve normal estimation and feature localization from a single RGB‐D image. Second, we designed a novel algorithm for consolidating feature points obtained from multiple RGB‐D images. By utilizing normals and ridge/valley types associated with the feature points, our algorithm is effective in suppressing noise without smearing nearby features.
  • Item
    Flow‐Based Temporal Selection for Interactive Volume Visualization
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Frey, S.; Ertl, T.; Chen, Min and Zhang, Hao (Richard)
    We present an approach to adaptively select time steps from time‐dependent volume data sets for an integrated and comprehensive visualization. This reduced set of time steps not only saves cost, but also makes it possible to show both the spatial structure and temporal development in one combined rendering. Our selection optimizes the coverage of the complete data on the basis of a minimum‐cost flow‐based technique to determine meaningful distances between time steps. As optimal solutions to both the involved transport and selection problems are prohibitively expensive, we present new approaches that are significantly faster with only minor deviations. We further propose an adaptive scheme for the progressive incorporation of new time steps. An interactive volume raycaster produces an integrated rendering of the selected time steps, and their computed differences are visualized in a dedicated chart to provide additional temporal similarity information. We illustrate and discuss the utility of our approach by means of different data sets from measurements and simulation.
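    A rough stand‐in for the transport‐based distance and the selection step might look as follows. It uses SciPy's one‐dimensional Wasserstein distance between voxel value distributions instead of the paper's minimum‐cost flow formulation, and a greedy farthest‐first pick instead of the paper's optimization, so it only conveys the flavour of the approach; all names are invented for this sketch.

        import numpy as np
        from scipy.stats import wasserstein_distance

        def timestep_distances(volumes):
            # Pairwise dissimilarity between time steps as a 1D transport distance
            # between their voxel value distributions.
            flat = [np.asarray(v).ravel() for v in volumes]
            n = len(flat)
            D = np.zeros((n, n))
            for i in range(n):
                for j in range(i + 1, n):
                    D[i, j] = D[j, i] = wasserstein_distance(flat[i], flat[j])
            return D

        def greedy_select(D, k):
            # Greedily pick k >= 2 time steps that cover the sequence: each new pick is the
            # step farthest (in D) from everything already selected.
            selected = [0, D.shape[0] - 1]           # always keep the end points
            while len(selected) < k:
                gap = D[:, selected].min(axis=1)
                gap[selected] = -1.0
                selected.append(int(np.argmax(gap)))
            return sorted(selected)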
  • Item
    Ray Accelerator: Efficient and Flexible Ray Tracing on a Heterogeneous Architecture
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Barringer, R.; Andersson, M.; Akenine‐Möller, T.; Chen, Min and Zhang, Hao (Richard)
    We present a hybrid ray tracing system, where the work is divided between the CPU cores and the GPU in an integrated chip, and communication occurs via shared memory. Rays are organized in large packets that can be distributed among the two units as needed. Testing visibility between rays and the scene is mostly performed using an optimized kernel on the GPU, but the CPU can help as necessary. The CPU cores typically handle most or all shading, which makes it easy to support complex appearances. For efficiency, the CPU cores shade whole batches of rays by sorting them on material and shading each material using a vectorized kernel. In addition, we introduce a method to support light paths with arbitrary recursion, such as multiple recursive Whitted‐style ray tracing and adaptive sampling where the result of a ray is examined before sending the next, while still batching up rays for the benefit of GPU‐accelerated traversal and vectorized shading. This allows our system to achieve high rendering performance while maintaining the flexibility to accommodate different rendering algorithms.
  • Item
    Visualization of Biomolecular Structures: State of the Art Revisited
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Kozlíková, B.; Krone, M.; Falk, M.; Lindow, N.; Baaden, M.; Baum, D.; Viola, I.; Parulek, J.; Hege, H.‐C.; Chen, Min and Zhang, Hao (Richard)
    Structural properties of molecules are of primary concern in many fields. This report provides a comprehensive overview of techniques that have been developed in the fields of molecular graphics and visualization with a focus on applications in structural biology. The field heavily relies on computerized geometric and visual representations of three‐dimensional, complex, large and time‐varying molecular structures. The report presents a taxonomy that demonstrates which areas of molecular visualization have already been extensively investigated and where the field is currently heading. It discusses visualizations for molecular structures and strategies for efficient display regarding image quality and frame rate, covers different aspects of level of detail, and reviews visualizations illustrating the dynamic aspects of molecular simulation data. The survey concludes with an outlook on promising and important research topics to foster further success in the development of tools that help to reveal molecular secrets.
  • Item
    Texton Noise
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Galerne, B.; Leclaire, A.; Moisan, L.; Chen, Min and Zhang, Hao (Richard)
    Designing realistic noise patterns from scratch is hard. To solve this problem, recent contributions have proposed involved spectral analysis algorithms that enable procedural noise models to faithfully reproduce some class of textures. The aim of this paper is to propose the simplest and most efficient noise model that allows for the reproduction of any Gaussian texture. Texton noise is a simple sparse convolution noise that sums randomly scattered copies of a small bilinear texture called a texton. We introduce an automatic algorithm to compute the texton associated with an input texture image that concentrates the input frequency content into the desired texton support. One of the main features of texton noise is that its evaluation only requires summing 30 texture fetches on average. Consequently, texton noise generates Gaussian textures with an unprecedented evaluation speed for noise by example. A second main feature of texton noise is that it allows for high‐quality on‐the‐fly anisotropic filtering by simply invoking existing GPU hardware solutions for texture fetches. In addition, we demonstrate that texton noise can be applied on any surface using parameterization‐free surface noise and that it allows for noise mixing.
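    The core idea, a sparse convolution noise that sums scattered texton copies, can be sketched on the CPU as follows. This is a generic illustration (grayscale texton, unit‐square support, per‐cell Poisson impulses) rather than the authors' GPU implementation or normalization, and the helper names are made up for this example.

        import numpy as np

        def _cell_rng(ix, iy, seed=0):
            # Deterministic per-cell RNG so the noise is repeatable and random-access.
            return np.random.default_rng(((ix * 73856093) ^ (iy * 19349663) ^ seed) & 0x7FFFFFFF)

        def _bilinear(tex, u, v):
            # Bilinear lookup of a grayscale texton, with (u, v) in [0, 1).
            h, w = tex.shape
            x, y = u * (w - 1), v * (h - 1)
            x0, y0 = int(x), int(y)
            x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
            fx, fy = x - x0, y - y0
            return ((1 - fx) * (1 - fy) * tex[y0, x0] + fx * (1 - fy) * tex[y0, x1]
                    + (1 - fx) * fy * tex[y1, x0] + fx * fy * tex[y1, x1])

        def texton_noise(texton, x, y, mean_impulses=30, seed=0):
            # Sum the randomly scattered texton copies whose unit-square support covers (x, y).
            value = 0.0
            cx, cy = int(np.floor(x)), int(np.floor(y))
            for ix in (cx - 1, cx):              # only copies anchored in these cells can reach (x, y)
                for iy in (cy - 1, cy):
                    rng = _cell_rng(ix, iy, seed)
                    for _ in range(rng.poisson(mean_impulses)):
                        ox, oy = ix + rng.random(), iy + rng.random()   # impulse (copy) position
                        u, v = x - ox, y - oy
                        if 0.0 <= u < 1.0 and 0.0 <= v < 1.0:
                            value += _bilinear(texton, u, v)
            return value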
  • Item
    A Bi‐Directional Procedural Model for Architectural Design
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Hua, H.; Chen, Min and Zhang, Hao (Richard)
    It is a challenge for shape grammars to incorporate spatial hierarchy and interior connectivity of buildings in early design stages. To resolve this difficulty, we developed a bi‐directional procedural model: the forward process constructs the derivation tree with production rules, while the backward process realizes the tree with shapes in a stepwise manner (from leaves to the root). Each inverse‐derivation step involves essential geometric‐topological reasoning. With this bi‐directional framework, design constraints and objectives are encoded in the grammar‐shape translation. We present two applications: the first employs geometric primitives as terminals and the other uses previous designs as terminals. Both approaches lead to consistent interior connectivity and a rich spatial hierarchy. The results imply that bespoke geometric‐topological processing helps shape grammars create plausible, novel compositions. Our model is more productive than hand‐coded shape grammars, while it is less computation‐intensive than evolutionary treatment of shape grammars.
  • Item
    Hierarchical Bucket Queuing for Fine‐Grained Priority Scheduling on the GPU
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Kerbl, Bernhard; Kenzel, Michael; Schmalstieg, Dieter; Seidel, Hans‐Peter; Steinberger, Markus; Chen, Min and Zhang, Hao (Richard)
    While the modern graphics processing unit (GPU) offers massive parallel compute power, the ability to influence the scheduling of these immense resources is severely limited. Therefore, the GPU is widely considered to be only suitable as an externally controlled co-processor for homogeneous workloads which greatly restricts the potential applications of GPU computing. To address this issue, we present a new method to achieve fine-grained priority scheduling on the GPU: hierarchical bucket queuing. By carefully distributing the workload among multiple queues and efficiently deciding which queue to draw work from next, we enable a variety of scheduling strategies. These strategies include fair-scheduling, earliest-deadline-first scheduling and user-defined dynamic priority scheduling. In a comparison with a sorting-based approach, we reveal the advantages of hierarchical bucket queuing over previous work. Finally, we demonstrate the benefits of using priority scheduling in real-world applications by example of path tracing and foveated micropolygon rendering.
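    To make the queuing idea concrete, here is a minimal single-threaded sketch of bucket queuing: items are binned by integer priority and work is always drawn from the highest-priority non-empty bucket. The paper's hierarchical, massively parallel GPU realization and its scheduling policies are not reproduced; class and parameter names are illustrative.
```python
from collections import deque

class BucketQueue:
    """Conceptual bucket queue: a fixed number of priority buckets,
    each a FIFO; pop() serves the highest-priority non-empty bucket."""

    def __init__(self, num_buckets=8):
        self.buckets = [deque() for _ in range(num_buckets)]

    def push(self, item, priority):
        # priority is assumed to be an integer in [0, num_buckets)
        self.buckets[priority].append(item)

    def pop(self):
        # scan from the highest priority downwards
        for bucket in reversed(self.buckets):
            if bucket:
                return bucket.popleft()
        return None

# Hypothetical usage: higher number = higher priority
q = BucketQueue()
q.push("shade tile A", priority=2)
q.push("trace ray batch", priority=7)
assert q.pop() == "trace ray batch"
```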
  • Item
    Articulated‐Motion‐Aware Sparse Localized Decomposition
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Wang, Yupan; Li, Guiqing; Zeng, Zhichao; He, Huayun; Chen, Min and Zhang, Hao (Richard)
    Compactly representing time-varying geometries is an important issue in dynamic geometry processing. This paper proposes a framework of sparse localized decomposition for given animated meshes by analyzing the variation of edge lengths and dihedral angles (LAs) of the meshes. It first computes the length and dihedral angle of each edge for poses and then evaluates the difference (residuals) between the LAs of an arbitrary pose and their counterparts in a reference one. Performing sparse localized decomposition on the residuals yields a set of components which can perfectly capture local motion of articulations. It supports intuitive articulation motion editing through manipulating the blending coefficients of these components. To robustly reconstruct poses from altered LAs, we devise a connection-map-based algorithm which consists of two steps of linear optimization. A variety of experiments show that our decomposition is truly localized with respect to rotational motions and outperforms state-of-the-art approaches in precisely capturing local articulated motion.
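    For readers unfamiliar with the edge length/dihedral angle ('LA') representation, the sketch below computes both quantities for a single interior edge; per-pose residuals against a reference pose would then feed the sparse localized decomposition. The unsigned angle between face normals and all names are assumptions of this illustration.
```python
import numpy as np

def edge_features(V, tri_a, tri_b, edge):
    """Length and dihedral angle of one interior edge.
    V: (n, 3) vertex positions; tri_a, tri_b: index triples of the two
    triangles sharing the edge; edge: (i, j) vertex indices."""
    i, j = edge
    length = np.linalg.norm(V[j] - V[i])

    def unit_normal(tri):
        a, b, c = V[tri[0]], V[tri[1]], V[tri[2]]
        n = np.cross(b - a, c - a)
        return n / np.linalg.norm(n)

    na, nb = unit_normal(tri_a), unit_normal(tri_b)
    # unsigned angle between the two face normals
    dihedral = float(np.arccos(np.clip(na @ nb, -1.0, 1.0)))
    return length, dihedral
```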
  • Item
    Visualization of Eye Tracking Data: A Taxonomy and Survey
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Blascheck, T.; Kurzhals, K.; Raschke, M.; Burch, M.; Weiskopf, D.; Ertl, T.; Chen, Min and Zhang, Hao (Richard)
    This survey provides an introduction into eye tracking visualization with an overview of existing techniques. Eye tracking is important for evaluating user behaviour. Analysing eye tracking data is typically done quantitatively, applying statistical methods. However, in recent years, researchers have been increasingly using qualitative and exploratory analysis methods based on visualization techniques. For this state-of-the-art report, we investigated about 110 research papers presenting visualization techniques for eye tracking data. We classified these visualization techniques and identified two main categories: point-based methods and methods based on areas of interest. Additionally, we conducted an expert review asking leading eye tracking experts how they apply visualization techniques in their analysis of eye tracking data. Based on the experts' feedback, we identified challenges that have to be tackled in the future so that visualizations will become even more widely applied in eye tracking research.
  • Item
    Building a Large Database of Facial Movements for Deformation Model‐Based 3D Face Tracking
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Sibbing, Dominik; Kobbelt, Leif; Chen, Min and Zhang, Hao (Richard)
    We introduce a new markerless 3D face tracking approach for 2D videos captured by a single consumer grade camera. Our approach takes detected 2D facial features as input and matches them with projections of 3D features of a deformable model to determine its pose and shape. To make the tracking and reconstruction more robust we add a smoothness prior for pose and deformation changes of the faces. Our major contribution lies in the formulation of the deformation prior which we derive from a large database of facial animations showing different (dynamic) facial expressions of a fairly large number of subjects. We split these animation sequences into snippets of fixed length which we use to predict the facial motion based on previous frames. In order to keep the deformation model compact and independent from the individual physiognomy, we represent it by deformation gradients (instead of vertex positions) and apply a principal component analysis in deformation gradient space to extract the major modes of facial deformation. Since the facial deformation is optimized during tracking, it is particularly easy to apply them to other physiognomies and thereby re-target the facial expressions. We demonstrate the effectiveness of our technique on a number of examples.
  • Item
    SketchSoup: Exploratory Ideation Using Design Sketches
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Arora, R.; Darolia, I.; Namboodiri, V. P.; Singh, K.; Bousseau, A.; Chen, Min and Zhang, Hao (Richard)
    A hallmark of early stage design is a number of quick-and-dirty sketches capturing design inspirations, model variations and alternate viewpoints of a visual concept. We present SketchSoup, a workflow that allows designers to explore the design space induced by such sketches. We take an unstructured collection of drawings as input, along with a small number of user-provided correspondences. We register them using a multi-image matching algorithm, and present them as a 2D interpolation space. By morphing sketches in this space, our approach produces plausible visualizations of shape and viewpoint variations despite the presence of sketch distortions that would prevent standard camera calibration and 3D reconstruction. In addition, our interpolated sketches can serve as inspiration for further drawings, which feed back into the design space as additional image inputs. SketchSoup thus fills a significant gap in the early ideation stage of conceptual design by allowing designers to make better informed choices before proceeding to more expensive 3D modelling and prototyping. From a technical standpoint, we describe an end-to-end system that judiciously combines and adapts various image processing techniques to the drawing domain, where the images are dominated not by colour, shading and texture, but by sketchy stroke contours. (Teaser figure: SketchSoup registers the input sketches with an iterative match-warp algorithm, embeds them in a 2D interpolation space, and generates novel sketches by warping and spatially non-uniform blending; interpolated sketches can serve as underlay for new concepts that are fed back into the space. Some sketches courtesy Mike Serafin.)
  • Item
    Category‐Specific Salient View Selection via Deep Convolutional Neural Networks
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Kim, Seong‐heum; Tai, Yu‐Wing; Lee, Joon‐Young; Park, Jaesik; Kweon, In So; Chen, Min and Zhang, Hao (Richard)
    In this paper, we present a new framework to determine upright and front orientations and detect salient views of 3D models. The most salient viewpoint, with respect to human preferences, is the most informative projection with correct upright orientation. Our method utilizes two Convolutional Neural Network (CNN) architectures to encode category-specific information learnt from a large number of 3D shapes and 2D images on the web. Using the first CNN model with 3D voxel data, we generate a CNN shape feature to decide the natural upright orientation of 3D objects. Once a 3D model is upright-aligned, the front projection and salient views are scored by category recognition using the second CNN model. The second CNN is trained over popular photo collections from internet users. In order to model comfortable viewing angles of 3D models, a category-dependent prior is also learnt from the users. Our approach effectively combines category-specific scores and classical evaluations to produce a data-driven viewpoint saliency map. The best viewpoints from the method are quantitatively and qualitatively validated with more than 100 objects from 20 categories. Our thumbnail images of 3D models are the most favoured among those from different approaches.
  • Item
    Ontology‐Based Representation and Modelling of Synthetic 3D Content: A State‐of‐the‐Art Review
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Flotyński, Jakub; Walczak, Krzysztof; Chen, Min and Zhang, Hao (Richard)
    An indispensable element of any practical 3D/VR/AR application is synthetic three-dimensional (3D) content. Such content is characterized by a variety of features—geometry, structure, space, appearance, animation and behaviour—which makes the modelling of 3D content a much more complex, difficult and time-consuming task than in the case of other types of content. One of the promising research directions aiming at simplification of modelling 3D content is the use of the semantic web approach. The formalism provided by semantic web techniques enables declarative knowledge-based modelling of content based on ontologies. Such modelling can be conducted at different levels of abstraction, possibly domain-specific, with inherent separation of concerns. The use of semantic web ontologies enables content representation independent of particular presentation platforms and facilitates indexing, searching and analysing content, thus contributing to increased content re-usability. A range of approaches have been proposed to permit semantic representation and modelling of synthetic 3D content. These approaches differ in the methodologies and technologies used as well as their scope and application domains. This paper provides a review of the current state of the art in representation and modelling of 3D content based on semantic web ontologies, together with a classification, characterization and discussion of the particular approaches.
  • Item
    Primal‐Dual Optimization for Fluids
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Inglis, T.; Eckert, M.‐L.; Gregson, J.; Thuerey, N.; Chen, Min and Zhang, Hao (Richard)
    We apply a novel optimization scheme from the image processing and machine learning areas, a fast Primal-Dual method, to achieve controllable and realistic fluid simulations. While our method is generally applicable to many problems in fluid simulations, we focus on the two topics of fluid guiding and separating solid-wall boundary conditions. Each problem is posed as an optimization problem and solved using our method, which contains acceleration schemes tailored to each problem. In fluid guiding, we are interested in partially guiding fluid motion to exert control while preserving fluid characteristics. With our method, we achieve explicit control over both large-scale motions and small-scale details which is valuable for many applications, such as level-of-detail adjustment (after running the coarse simulation), spatially varying guiding strength, domain modification, and resimulation with different fluid parameters. For the separating solid-wall boundary conditions problem, our method effectively eliminates unrealistic artefacts of fluid crawling up solid walls and sticking to ceilings, requiring few changes to existing implementations. We demonstrate the fast convergence of our Primal-Dual method with a variety of test cases for both model problems.
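    For context, a generic first-order primal-dual iteration of the Chambolle–Pock type for problems of the form \(\min_x F(Kx) + G(x)\), which we take as representative of the class of fast primal-dual methods referred to, reads:
\[
\begin{aligned}
y^{k+1} &= \operatorname{prox}_{\sigma F^{*}}\bigl(y^{k} + \sigma K \bar{x}^{k}\bigr),\\
x^{k+1} &= \operatorname{prox}_{\tau G}\bigl(x^{k} - \tau K^{\top} y^{k+1}\bigr),\\
\bar{x}^{k+1} &= x^{k+1} + \theta\,\bigl(x^{k+1} - x^{k}\bigr).
\end{aligned}
\]
    Here \(\operatorname{prox}\) denotes the proximal operator, \(\sigma\) and \(\tau\) are step sizes and \(\theta \in [0,1]\) is an over-relaxation parameter; the paper's problem-specific acceleration schemes are built on top of such an iteration.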
  • Item
    Distributed Optimization Framework for Shadow Removal in Multi‐Projection Systems
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Tsukamoto, J.; Iwai, D.; Kashima, K.; Chen, Min and Zhang, Hao (Richard)
    This paper proposes a novel shadow removal technique for cooperative projection systems based on spatiotemporal prediction. In our previous work, we proposed a distributed feedback algorithm, which is implementable in cooperative projection environments subject to data transfer constraints between components. A weakness of this scheme is that the compensation is conducted in each pixel independently. As a result, spatiotemporal information of the environmental change cannot be utilized even if it is available. In view of this, we specifically investigate the situation where some of the projectors are occluded by a moving object whose one-frame-ahead behaviour is predictable. In order to remove the resulting shadow, we propose a novel error propagating scheme that is still implementable in a distributed manner and enables us to incorporate the prediction information of the obstacle. It is demonstrated theoretically and experimentally that the proposed method significantly improves the shadow removal performance in comparison to the previous work.
  • Item
    Convolutional Sparse Coding for Capturing High‐Speed Video Content
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Serrano, Ana; Garces, Elena; Masia, Belen; Gutierrez, Diego; Chen, Min and Zhang, Hao (Richard)
    Video capture is limited by the trade-off between spatial and temporal resolution: when capturing videos of high temporal resolution, the spatial resolution decreases due to bandwidth limitations in the capture system. Achieving both high spatial and temporal resolution is only possible with highly specialized and very expensive hardware, and even then the same basic trade-off remains. The recent introduction of compressive sensing and sparse reconstruction techniques allows for the capture of high-speed video, by coding the temporal information in a single frame, and then reconstructing the full video sequence from this single-coded image and a trained dictionary of image patches. In this paper, we first analyse this approach, and find insights that help improve the quality of the reconstructed videos. We then introduce a novel technique, based on convolutional sparse coding (CSC), and show how it outperforms the state-of-the-art, patch-based approach in terms of flexibility and efficiency, due to the convolutional nature of its filter banks. The key idea for CSC high-speed video acquisition is extending the basic formulation by imposing an additional constraint in the temporal dimension, which enforces sparsity of the first-order derivatives over time.
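    One way to write down the kind of objective described (a coded-image data term, sparse convolutional codes and sparsity of first-order temporal derivatives) is the sketch below, where \(y\) is the coded frame, \(\Phi\) the per-pixel temporal coding operator, \(d_k\) the learned filters and \(z_k\) their coefficient maps; the paper's exact formulation and weights may differ:
\[
\min_{\{z_k\}} \; \tfrac{1}{2}\Bigl\lVert y - \Phi \sum_{k} d_k * z_k \Bigr\rVert_2^2
\;+\; \lambda \sum_{k} \lVert z_k \rVert_1
\;+\; \gamma \sum_{k} \lVert \nabla_t z_k \rVert_1 .
\]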
  • Item
    NeuroLens: Data‐Driven Camera Lens Simulation Using Neural Networks
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Zheng, Quan; Zheng, Changwen; Chen, Min and Zhang, Hao (Richard)
    Rendering with a full lens model can offer images with photorealistic lens effects, but it leads to high computational costs. This paper proposes a novel camera lens model, NeuroLens, to emulate the imaging of real camera lenses through a data-driven approach. The mapping of image formation in a camera lens is formulated as imaging regression functions (IRFs), which map input rays to output rays. IRFs are approximated with neural networks, which compactly represent the imaging properties and support parallel evaluation on a graphics processing unit (GPU). To effectively represent spatially varying imaging properties of a camera lens, the input space spanned by incident rays is subdivided into multiple subspaces and each subspace is fitted with a separate IRF. To further raise the evaluation accuracy, a set of neural networks is trained for each IRF and the output is calculated as the average output of the set. The effectiveness of the NeuroLens is demonstrated by fitting a wide range of real camera lenses. Experimental results show that it provides higher imaging accuracy in comparison to state-of-the-art camera lens models, while maintaining high efficiency for processing camera rays.
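    A minimal sketch of what one imaging regression function could look like as a small fully connected network mapping an entrance-plane ray to an exit-plane ray; layer sizes, the activation and all names are assumptions, and in the paper such networks are trained per subdivided region of the ray space and averaged over a set.
```python
import numpy as np

def irf_forward(ray_in, weights):
    """Evaluate a small fully connected network that maps an input ray
    (e.g. x, y, dx, dy on the entrance plane) to an output ray on the
    exit plane. `weights` is a list of (W, b) pairs, last pair linear."""
    h = np.asarray(ray_in, dtype=float)
    *hidden, (W_out, b_out) = weights
    for W, b in hidden:
        h = np.tanh(W @ h + b)
    return W_out @ h + b_out

# Hypothetical usage with untrained weights; in practice the weights are
# fitted to (input ray, output ray) pairs traced through the full lens model.
rng = np.random.default_rng(0)
weights = [(rng.standard_normal((16, 4)), np.zeros(16)),
           (rng.standard_normal((16, 16)), np.zeros(16)),
           (rng.standard_normal((4, 16)), np.zeros(4))]
out_ray = irf_forward([0.1, -0.2, 0.01, 0.02], weights)
```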
  • Item
    Tree Branch Level of Detail Models for Forest Navigation
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Zhang, Xiaopeng; Bao, Guanbo; Meng, Weiliang; Jaeger, Marc; Li, Hongjun; Deussen, Oliver; Chen, Baoquan; Chen, Min and Zhang, Hao (Richard)
    We present a level of detail (LOD) method designed for tree branches. It can be combined with methods for processing tree foliage to facilitate navigation through large virtual forests. Starting from a skeletal representation of a tree, we fit polygon meshes of various densities to the skeleton while the mesh density is adjusted according to the required visual fidelity. For distant models, these branch meshes are gradually replaced with semi-transparent lines until the tree recedes to a few lines. Construction of these complete LOD models is guided by error metrics to ensure smooth transitions between adjacent LOD models. We then present an instancing technique for discrete LOD branch models, consisting of polygon meshes plus semi-transparent lines. Line models with different transparencies are instanced on the GPU by merging multiple tree samples into a single model. Our technique reduces the number of draw calls on the GPU and increases rendering performance. Our experiments demonstrate that large-scale forest scenes can be rendered with excellent detail and shadows in real time.
  • Item
    Multi‐Variate Gaussian‐Based Inverse Kinematics
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Huang, Jing; Wang, Qi; Fratarcangeli, Marco; Yan, Ke; Pelachaud, Catherine; Chen, Min and Zhang, Hao (Richard)
    Inverse kinematics (IK) equations are usually solved through approximated linearizations or heuristics. These methods lead to character animations that are unnatural looking or unstable because they do not consider both the motion coherence and limits of human joints. In this paper, we present a method based on the formulation of multi-variate Gaussian distribution models (MGDMs), which precisely specify the soft joint constraints of a kinematic skeleton. Each distribution model is described by a covariance matrix and a mean vector representing both the joint limits and the coherence of motion of different limbs. The MGDMs are automatically learned from the motion capture data in a fast and unsupervised process. When the character is animated or posed, a Gaussian process synthesizes a new MGDM for each different vector of target positions, and the corresponding objective function is solved with Jacobian-based IK. This makes our method practical to use and easy to insert into pre-existing animation pipelines. Compared with previous works, our method is more stable and more precise, while also satisfying the anatomical constraints of human limbs. Our method leads to natural and realistic results without sacrificing real-time performance.
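    A minimal way to see how an MGDM with mean \(\mu\) and covariance \(\Sigma\) over joint parameters \(\theta\) can act as a soft prior in a Jacobian-based IK solve is the regularized objective below; the weight \(\lambda\) and the exact coupling used in the paper are assumptions of this sketch:
\[
\min_{\theta}\; \bigl\lVert p_{\text{target}} - f(\theta) \bigr\rVert^{2}
\;+\; \lambda\,(\theta - \mu)^{\top} \Sigma^{-1} (\theta - \mu),
\]
    where \(f(\theta)\) is the forward-kinematics map of the end effector and the second term penalizes poses that are unlikely under the learned Gaussian model.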
  • Item
    Deformation Grammars: Hierarchical Constraint Preservation Under Deformation
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Vimont, Ulysse; Rohmer, Damien; Begault, Antoine; Cani, Marie‐Paule; Chen, Min and Zhang, Hao (Richard)
    Deformation grammars are a novel procedural framework for sculpting hierarchical 3D models in an object-dependent manner. They process object deformations as symbols thanks to user-defined interpretation rules. We use them to define hierarchical deformation behaviours tailored for each model, enabling any sculpting gesture to be interpreted as some adapted constraint-preserving deformation. A variety of object-specific constraints can be enforced using this framework, such as maintaining distributions of subparts, avoiding self-penetrations or meeting semantic-based user-defined rules. The operations used to maintain constraints are kept transparent to the user, enabling them to focus on their design. We demonstrate the feasibility and the versatility of this approach on a variety of examples, implemented within an interactive sculpting system.
  • Item
    Detail‐Preserving Explicit Mesh Projection and Topology Matching for Particle‐Based Fluids
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Dagenais, F.; Gagnon, J.; Paquette, E.; Chen, Min and Zhang, Hao (Richard)
    We propose a new explicit surface tracking approach for particle-based fluid simulations. Our goal is to advect and update a highly detailed surface, while only computing a coarse simulation. Current explicit surface methods lose surface details when projecting on the isosurface of an implicit function built from particles. Our approach uses a detail-preserving projection, based on a signed distance field, to prevent the divergence of the explicit surface without losing its initial details. Furthermore, we introduce a novel topology matching stage that corrects the topology of the explicit surface based on the topology of an implicit function. To that end, we introduce an optimization approach to update our explicit mesh signed distance field before remeshing. Our approach is successfully used to preserve the surface details of melting and highly viscous objects, and shown to be stable by handling complex cases involving multiple topological changes. Compared to the computation of a high-resolution simulation, using our approach with a coarse fluid simulation significantly reduces the computation time and improves the quality of the resulting surface.
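    As an illustration of the projection step the approach builds on, the sketch below moves a surface vertex onto the zero level set of a signed distance field; the displacement clamp merely stands in for the paper's detail-preserving treatment, and all names are illustrative.
```python
import numpy as np

def project_to_isosurface(x, sdf, grad_sdf, max_step=None):
    """One Newton-style projection of a vertex x towards the zero level set
    of a signed distance field: x <- x - phi(x) * n(x), where n is the
    normalized gradient. `max_step` clamps the displacement, standing in
    for a detail-preserving limit on how far the vertex may move."""
    phi = sdf(x)
    n = grad_sdf(x)
    n = n / np.linalg.norm(n)
    step = phi
    if max_step is not None:
        step = float(np.clip(step, -max_step, max_step))
    return x - step * n

# Hypothetical usage with an analytic sphere SDF of radius 1
sphere_sdf = lambda p: np.linalg.norm(p) - 1.0
sphere_grad = lambda p: p / np.linalg.norm(p)
v = project_to_isosurface(np.array([1.3, 0.1, 0.0]), sphere_sdf, sphere_grad)
```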
  • Item
    The State of the Art in Integrating Machine Learning into Visual Analytics
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Endert, A.; Ribarsky, W.; Turkay, C.; Wong, B.L. William; Nabney, I.; Blanco, I. Díaz; Rossi, F.; Chen, Min and Zhang, Hao (Richard)
    Visual analytics systems combine machine learning or other analytic techniques with interactive data visualization to promote sensemaking and analytical reasoning. It is through such techniques that people can make sense of large, complex data. While progress has been made, the tactful combination of machine learning and data visualization is still under-explored. This state-of-the-art report presents a summary of the progress that has been made by highlighting and synthesizing select research advances. Further, it presents opportunities and challenges to enhance the synergy between machine learning and visual analytics for impactful future research directions.
  • Item
    Efficient and Reliable Self‐Collision Culling Using Unprojected Normal Cones
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Wang, Tongtong; Liu, Zhihua; Tang, Min; Tong, Ruofeng; Manocha, Dinesh; Chen, Min and Zhang, Hao (Richard)
    We present an efficient and accurate algorithm for self-collision detection in deformable models. Our approach can perform discrete and continuous collision queries on triangulated meshes. We present a simple and linear time algorithm to perform the normal cone test using the unprojected 3D vertices, which reduces to a sequence of point-plane classification tests. Moreover, we present a hierarchical traversal scheme that can significantly reduce the number of normal cone tests and the memory overhead using front-based normal cone culling. The overall algorithm can reliably detect all (self) collisions in models composed of hundreds of thousands of triangles. We observe considerable performance improvement over prior continuous collision detection algorithms.
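    For background, the sketch below shows the first condition of the classical normal cone test that the method accelerates: a patch can be culled only if some axis makes a positive dot product with every face normal, the summed normal being the usual candidate. The paper's contribution, replacing the projected 2D contour test by point-plane classifications of the unprojected 3D boundary vertices, is not reproduced here.
```python
import numpy as np

def normal_cone_ok(face_normals):
    """First condition of the classical normal cone self-collision test.
    face_normals: (m, 3) array of triangle normals of a surface patch.
    Returns True if the summed normal makes a strictly positive dot
    product with every face normal (a necessary condition for culling)."""
    axis = face_normals.sum(axis=0)
    norm = np.linalg.norm(axis)
    if norm == 0.0:
        return False  # degenerate cone: cannot cull
    axis /= norm
    return bool(np.all(face_normals @ axis > 0.0))
```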
  • Item
    Tunable Robustness: An Artificial Contact Strategy with Virtual Actuator Control for Balance
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Silva, D. B.; Nunes, R. F.; Vidal, C. A.; Cavalcante‐Neto, J. B.; Kry, P. G.; Zordan, V. B.; Chen, Min and Zhang, Hao (Richard)
    Physically based characters have not yet received wide adoption in the entertainment industry because control remains both difficult and unreliable. Even with the incorporation of motion capture for reference, which adds believability, characters fail to be convincing in their appearance when the control is not robust. To address these issues, we propose a simple Jacobian transpose torque controller that employs virtual actuators to create a fast and reasonable tracking system for motion capture. We combine this controller with a novel approach we call the topple-free foot strategy which conservatively applies artificial torques to the standing foot to produce a character that is capable of performing with arbitrary robustness. The system is both easy to implement and straightforward for the animator to adjust to the desired robustness, by considering the trade-off between physical realism and stability. We showcase the benefit of our system with a wide variety of example simulations, including energetic motions with multiple support contact changes, such as capoeira, as well as an extension that highlights the approach coupled with a Simbicon controlled walker. With this work, we aim to advance the state-of-the-art in the practical design for physically based characters that can employ unaltered reference motion (e.g. motion capture data) and directly adapt it to a simulated environment without the need for optimization or inverse dynamics.
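    The virtual-actuator control rests on the textbook Jacobian transpose relation tau = J^T f; a minimal sketch (gains, tracking targets and the topple-free foot torques are omitted, and names are illustrative):
```python
import numpy as np

def jacobian_transpose_torques(J, f_virtual):
    """Map a virtual-actuator force f applied at an end effector (or the
    centre of mass) to joint torques via the Jacobian transpose:
    tau = J^T f. J: (3, n_joints) positional Jacobian."""
    return J.T @ np.asarray(f_virtual, dtype=float)

# Hypothetical usage: a PD-style virtual force pulling a hand towards a target
J = np.zeros((3, 10)); J[:, 3:6] = np.eye(3)   # toy Jacobian
f = 200.0 * (np.array([0.3, 1.2, 0.1]) - np.array([0.25, 1.1, 0.05]))
tau = jacobian_transpose_torques(J, f)
```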
  • Item
    Enhancing Urban Façades via LiDAR‐Based Sculpting
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Peethambaran, Jiju; Wang, Ruisheng; Chen, Min and Zhang, Hao (Richard)
    Buildings with symmetrical façades are ubiquitous in urban landscapes and detailed models of these buildings enhance the visual realism of digital urban scenes. However, a vast majority of the existing urban building models in web-based 3D maps such as Google earth are either less detailed or heavily rely on texturing to render the details. We present a new framework for enhancing the details of such coarse models, using the geometry and symmetry inferred from the light detection and ranging (LiDAR) scans and 2D templates. The user-defined 2D templates, referred to as coded planar meshes (CPMs), encode the geometry of the smallest repeating 3D structures of the façades via face codes. Our encoding scheme takes into account the direction, type and offset distance of the sculpting to be applied at the respective locations on the coarse model. In our approach, the LiDAR scan is registered with the coarse models taken from Google earth 3D or Bing maps 3D and decomposed into dominant planar segments (each representing the frontal or lateral walls of the building). The façade segments are then split into horizontal and vertical tiles using a weighted point count function defined over the window or door boundaries. This is followed by an automatic identification of CPM locations with the help of a template fitting algorithm that respects the alignment regularity as well as the inter-element spacing on the façade layout. Finally, 3D boolean sculpting operations are applied over the boxes induced by CPMs and the coarse model, and a detailed 3D model is generated. The proposed framework is capable of modelling details even with occluded scans and enhances not only the frontal façades (facing the streets) but also the lateral façades of the buildings. We demonstrate the potential of the proposed framework by providing several examples of enhanced Google earth models and highlight the advantages of our method when designing photo-realistic urban façades.
  • Item
    Contracting Medial Surfaces Isotropically for Fast Extraction of Centred Curve Skeletons
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Li, Lei; Wang, Wencheng; Chen, Min and Zhang, Hao (Richard)
    Curve skeletons, which are a compact representation for three-dimensional shapes, must be extracted such that they are high quality, centred and smooth. However, the centredness measurements in existing methods are expensive, lowering the extraction efficiency. Although some methods trade quality for acceleration, their generated low-quality skeletons are not suitable for applications. In this paper, we present a method to quickly extract centred curve skeletons. It operates by contracting the medial surface isotropically to the locus of the centres of its maximal inscribed spheres, which are spheres that have their centres on the medial surface and cannot be further enlarged while remaining the boundary of their intersections with the medial surface composed of only the points on the sphere surfaces. Thus, the centred curve skeleton can be extracted conveniently. For fast extraction, we develop novel measures to quickly generate the medial surface and contract it layer by layer, with every layer contracted isotropically using spheres of equal radii to account for every part of the medial surface boundary. The experimental results show that we can stably extract curve skeletons with higher centredness and at much higher speeds than existing methods, even for noisy shapes.
  • Item
    Hexahedral Meshing With Varying Element Sizes
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Xu, Kaoji; Gao, Xifeng; Deng, Zhigang; Chen, Guoning; Chen, Min and Zhang, Hao (Richard)
    Hexahedral (or Hex-) meshes are preferred in a number of scientific and engineering simulations and analyses due to their desired numerical properties. Recent state-of-the-art techniques can generate high-quality hex-meshes. However, they typically produce hex-meshes with uniform element sizes and thus may fail to preserve small-scale features on the boundary surface. In this work, we present a new framework that enables users to generate hex-meshes with varying element sizes so that small features will be filled with smaller and denser elements, while the transition from smaller elements to larger ones is smooth, compared to the octree-based approach. This is achieved by first detecting regions of interest (ROIs) of small-scale features. These ROIs are then magnified using the as-rigid-as-possible deformation with either an automatically determined or a user-specified scale factor. A hex-mesh is then generated from the deformed mesh using existing approaches that produce hex-meshes with uniform-sized elements. This initial hex-mesh is then mapped back to the original volume before magnification to adjust the element sizes in those ROIs. We have applied this framework to a variety of man-made and natural models to demonstrate its effectiveness.
  • Item
    Real‐Time Solar Exposure Simulation in Complex Cities
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Muñoz‐Pandiella, I.; Bosch, C.; Mérillou, N.; Pueyo, X.; Mérillou, S.; Chen, Min and Zhang, Hao (Richard)
    In urban design, estimating solar exposure on complex city models is crucial but existing solutions typically focus on simplified building models and are too demanding in terms of memory and computational time. In this paper, we propose an interactive technique that estimates solar exposure on detailed urban scenes. Given a directional exposure map computed over a given time period, we estimate the sky visibility factor that serves to evaluate the final exposure at each visible point. This is done using a screen-space method based on a two-scale approach, which is geometry independent and has low storage costs. Our method performs at interactive rates and is designer-oriented. The proposed technique is relevant in architecture and sustainable building design as it provides tools to estimate the energy performance of buildings as well as weathering effects in urban environments.
  • Item
    Partitioning Surfaces Into Quadrilateral Patches: A Survey
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Campen, M.; Chen, Min and Zhang, Hao (Richard)
    The efficient and practical representation and processing of geometrically or topologically complex shapes often demands a partitioning into simpler patches. Possibilities range from unstructured arrangements of arbitrarily shaped patches on the one end, to highly structured conforming networks of all-quadrilateral patches on the other end of the spectrum. Due to its regularity, this latter extreme of conforming partitions with quadrilateral patches, called quad layouts, is most beneficial in many application scenarios, for instance enabling the use of tensor-product representations based on splines or Bézier patches, grid-based multi-resolution techniques and discrete pixel-based map representations. However, this type of partition is also most complicated to create due to the strict inherent structural restrictions. Traditionally often performed manually in a tedious and demanding process, research in computer graphics and geometry processing has led to a number of computer-assisted, semi-automatic, as well as fully automatic approaches to address this problem more efficiently. This survey provides a detailed discussion of this range of methods, treats their strengths and weaknesses and outlines open problems in this field of research.
  • Item
    Intrinsic Light Field Images
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Garces, Elena; Echevarria, Jose I.; Zhang, Wen; Wu, Hongzhi; Zhou, Kun; Gutierrez, Diego; Chen, Min and Zhang, Hao (Richard)
    We present a method to automatically decompose a light field into its intrinsic shading and albedo components. Contrary to previous work targeted to two-dimensional (2D) single images and videos, a light field is a 4D structure that captures non-integrated incoming radiance over a discrete angular domain. This higher dimensionality of the problem renders previous state-of-the-art algorithms impractical either due to their cost of processing a single 2D slice, or their inability to enforce proper coherence in additional dimensions. We propose a new decomposition algorithm that jointly optimizes the whole light field data for proper angular coherence. For efficiency, we extend Retinex theory, working on the gradient domain, where new albedo and occlusion terms are introduced. Results show that our method provides 4D intrinsic decompositions difficult to achieve with previous state-of-the-art algorithms. We further provide a comprehensive analysis and comparisons with existing intrinsic image/video decomposition methods on light field images.
  • Item
    Noise Reduction on G‐Buffers for Monte Carlo Filtering
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Moon, Bochang; Iglesias‐Guitian, Jose A.; McDonagh, Steven; Mitchell, Kenny; Chen, Min and Zhang, Hao (Richard)
    We propose a novel pre-filtering method that reduces the noise introduced by depth-of-field and motion blur effects in geometric buffers (G-buffers) such as texture, normal and depth images. Our pre-filtering uses world positions and their variances to effectively remove high-frequency noise while carefully preserving high-frequency edges in the G-buffers. We design a new anisotropic filter based on a per-pixel covariance matrix of world position samples. A general error estimator, Stein's unbiased risk estimator, is then applied to estimate the optimal trade-off between the bias and variance of pre-filtered results. We have demonstrated that our pre-filtering improves the results of existing filtering methods numerically and visually for challenging scenes where depth-of-field and motion blurring introduce a significant amount of noise in the G-buffers.
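    One plausible form of a covariance-driven anisotropic weight, shown only to illustrate how per-pixel world-position statistics can induce anisotropy in a filter kernel; the paper's actual filter and its SURE-based bias/variance trade-off are more involved, and all names are assumptions:
```python
import numpy as np

def anisotropic_weight(p_j, mean_i, cov_i, eps=1e-6):
    """Mahalanobis-distance falloff between a neighbour's world position
    p_j and the mean world position of pixel i, using pixel i's 3x3
    position covariance; directions of large variance are penalized less."""
    d = np.asarray(p_j, dtype=float) - np.asarray(mean_i, dtype=float)
    inv = np.linalg.inv(np.asarray(cov_i, dtype=float) + eps * np.eye(3))
    return float(np.exp(-0.5 * d @ inv @ d))
```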
  • Item
    A Comprehensive Survey on Sampling‐Based Image Matting
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Yao, Guilin; Zhao, Zhijie; Liu, Shaohui; Chen, Min and Zhang, Hao (Richard)
    Sampling-based image matting is currently playing a significant role and showing great further development potentials in image matting. However, the consequent survey articles and detailed classifications are still rare in the field of corresponding research. Furthermore, besides sampling strategies, most of the sampling-based matting algorithms apply additional operations which actually conceal their real sampling performances. To inspire further improvements and new work, this paper makes a comprehensive survey on sampling-based matting in the following five aspects: (i) Only the sampling step is initially preserved in the matting process to generate the final alpha results and make comparisons. (ii) Four basic categories including eight detailed classes for sampling-based matting are presented, which are combined to generate the common sampling-based matting algorithms. (iii) Each category including two classes is analysed and experimented independently on their advantages and disadvantages. (iv) Additional operations, including sampling weight, settling manner, complement and pre- and post-processing, are sequentially analysed and added into sampling. Besides, the result and effect of each operation are also presented. (v) A pure sampling comparison framework is strongly recommended in future work.
  • Item
    Geometric Detection Algorithms for Cavities on Protein Surfaces in Molecular Graphics: A Survey
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Simões, Tiago; Lopes, Daniel; Dias, Sérgio; Fernandes, Francisco; Pereira, João; Jorge, Joaquim; Bajaj, Chandrajit; Gomes, Abel; Chen, Min and Zhang, Hao (Richard)
    Detecting and analysing protein cavities provides significant information about active sites for biological processes (e.g. protein–protein or protein–ligand binding) in molecular graphics and modelling. Using the three-dimensional (3D) structure of a given protein (i.e. atom types and their locations in 3D) as retrieved from a PDB (Protein Data Bank) file, it is now computationally viable to determine a description of these cavities. Such cavities correspond to pockets, clefts, invaginations, voids, tunnels, channels and grooves on the surface of a given protein. In this work, we survey the literature on protein cavity computation and classify algorithmic approaches into three categories: evolution-based, energy-based and geometry-based. Our survey focuses on geometric algorithms, whose taxonomy is extended to include not only sphere-, grid- and tessellation-based methods, but also surface-based, hybrid geometric, consensus and time-varying methods. Finally, we detail those techniques that have been customized for GPU (graphics processing unit) computing.
  • Item
    Approximating Planar Conformal Maps Using Regular Polygonal Meshes
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Chen, Renjie; Gotsman, Craig; Chen, Min and Zhang, Hao (Richard)
    Continuous conformal maps are typically approximated numerically using a triangle mesh which discretizes the plane. Computing a conformal map subject to user‐provided constraints then reduces to a sparse linear system, minimizing a quadratic ‘conformal energy’. We address the more general case of non‐triangular elements, and provide a complete analysis of the case where the plane is discretized using a mesh of regular polygons, e.g. equilateral triangles, squares and hexagons, whose interiors are mapped using barycentric coordinate functions. We demonstrate experimentally that faster convergence to continuous conformal maps may be obtained this way. We provide a formulation of the problem and its solution using complex number algebra, significantly simplifying the notation. We examine a number of common barycentric coordinate functions and demonstrate that a superior approximation to the harmonic coordinates of a polygon is achieved by the Moving Least Squares coordinates. We also provide a simple iterative algorithm to invert barycentric maps of regular polygon meshes, allowing them to be applied in practical applications, e.g. for texture mapping.
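    For reference, one standard way such a quadratic 'conformal energy' is written in complex notation is sketched below; the per‐element energy actually derived in the paper for regular polygonal meshes may differ in its weights.

```latex
% Conformality asks that f be holomorphic, i.e. that its anti-holomorphic
% (Wirtinger) derivative vanish -- the Cauchy-Riemann equations in complex form.
% A common least-squares formulation penalizes its squared modulus over the domain:
\[
  \frac{\partial f}{\partial \bar z}
    \;=\; \frac{1}{2}\left(\frac{\partial f}{\partial x} + i\,\frac{\partial f}{\partial y}\right),
  \qquad
  E_C(f) \;=\; \int_{\Omega}\left|\frac{\partial f}{\partial \bar z}\right|^{2}\mathrm{d}A .
\]
% Substituting f(z) = \sum_j f_j \, \varphi_j(z), with barycentric coordinate
% functions \varphi_j on each polygonal element and complex vertex images f_j,
% makes E_C a quadratic (Hermitian) form in the f_j, so minimizing it subject
% to user constraints amounts to solving a sparse linear system.
```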
  • Item
    Regularized Pointwise Map Recovery from Functional Correspondence
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Rodolà, E.; Moeller, M.; Cremers, D.; Chen, Min and Zhang, Hao (Richard)
    The concept of using functional maps for representing dense correspondences between deformable shapes has proven to be extremely effective in many applications. However, despite the impact of this framework, the problem of recovering the point‐to‐point correspondence from a given functional map has received surprisingly little interest. In this paper, we analyse the aforementioned problem and propose a novel method for reconstructing pointwise correspondences from a given functional map. The proposed algorithm phrases the matching problem as a regularized alignment problem of the spectral embeddings of the two shapes. Unlike established methods, our approach does not require the input shapes to be nearly‐isometric, and easily extends to recovering the point‐to‐point correspondence in part‐to‐whole shape matching problems. Our numerical experiments demonstrate that the proposed approach leads to a significant improvement in accuracy in several challenging cases.
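    As background for what the paper improves upon, the plain nearest‐neighbour recovery commonly used as a baseline can be sketched as follows; this is the standard baseline rather than the paper's regularized alignment, and the array shapes and names are illustrative assumptions.

```python
import numpy as np

def pointwise_from_functional_map(C, phi_src, phi_tgt):
    """Baseline recovery of a pointwise map from a functional map C
    (k x k, source basis -> target basis): for every target vertex,
    find the source vertex whose mapped spectral embedding is closest.

    phi_src: (n_src, k) Laplace-Beltrami eigenfunctions on the source shape
    phi_tgt: (n_tgt, k) eigenfunctions on the target shape
    Returns an index array of length n_tgt pointing into the source vertices."""
    mapped_src = phi_src @ C.T  # source embeddings pushed through C
    # Squared Euclidean distances between every target row and mapped source row.
    d2 = (np.sum(phi_tgt**2, axis=1)[:, None]
          - 2.0 * phi_tgt @ mapped_src.T
          + np.sum(mapped_src**2, axis=1)[None, :])
    return np.argmin(d2, axis=1)

# Toy sanity check: identical shapes and an identity functional map
# should recover (close to) the identity correspondence.
rng = np.random.default_rng(0)
phi = rng.standard_normal((100, 20))
idx = pointwise_from_functional_map(np.eye(20), phi, phi)
print(np.mean(idx == np.arange(100)))  # ~1.0
```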
  • Item
    A Stochastic Film Grain Model for Resolution‐Independent Rendering
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Newson, A.; Delon, J.; Galerne, B.; Chen, Min and Zhang, Hao (Richard)
    The realistic synthesis and rendering of film grain is a crucial goal for many amateur and professional photographers and film‐makers whose artistic works require the authentic feel of analogue photography. The objective of this work is to propose an algorithm that reproduces the visual aspect of film grain texture on any digital image. Previous approaches to this problem either propose unrealistic models or simply blend scanned images of film grain with the digital image, in which case the result is inevitably limited by the quality and resolution of the initial scan. In this work, we introduce a stochastic model to approximate the physical reality of film grain, and propose a resolution‐free rendering algorithm to simulate realistic film grain for any digital input image. By varying the parameters of this model, we can achieve a wide range of grain types. We demonstrate this by comparing our results with film grain examples from dedicated software, and show that our rendering results closely resemble these real film emulsions. In addition to realistic grain rendering, our resolution‐free algorithm allows for any desired zoom factor, even down to the scale of the microscopic grains themselves.
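    To illustrate the general flavour of a stochastic, resolution‐independent grain model, the following toy sketch scatters disk‐shaped 'grains' with a Poisson point process whose density grows with the desired grey level, and rasterizes the same continuous disk set at any requested output resolution. It is a didactic Boolean‐model toy, not the paper's physically motivated model or its Monte Carlo rendering, and all constants are assumptions.

```python
import numpy as np

def render_grain_patch(intensity, grain_radius=0.03, out_size=256, seed=0):
    """Toy Boolean-model grain for a single monochrome patch value in [0, 1].
    The continuous disk set is fixed by the seed, so re-rendering at a larger
    `out_size` zooms into the same grains rather than resampling them."""
    rng = np.random.default_rng(seed)
    # Pick the Poisson intensity so the expected coverage of a unit square,
    # 1 - exp(-lam * pi * r^2), roughly matches the requested grey level
    # (edge effects ignored in this toy calibration).
    lam = -np.log(max(1.0 - intensity, 1e-6)) / (np.pi * grain_radius**2)
    n = rng.poisson(lam)
    centers = rng.random((n, 2))  # grain centres in the unit square
    # Rasterize: an output pixel is "developed" if its centre lies in any grain.
    coords = (np.arange(out_size) + 0.5) / out_size
    X, Y = np.meshgrid(coords, coords, indexing="xy")
    covered = np.zeros((out_size, out_size), dtype=bool)
    for cx, cy in centers:
        covered |= (X - cx)**2 + (Y - cy)**2 <= grain_radius**2
    return covered.astype(float)

patch = render_grain_patch(intensity=0.5)
print(patch.mean())  # empirical coverage, close to 0.5 (slightly lower near borders)
```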
  • Item
    Reviewers
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Chen, Min and Zhang, Hao (Richard)