SCA 03: Eurographics/SIGGRAPH Symposium on Computer Animation

BibTeX (SCA 03: Eurographics/SIGGRAPH Symposium on Computer Animation)
@inproceedings{10.2312/SCA03/007-016,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{A Practical Dynamics System}},
  author = {Kacic-Alesic, Zoran and Nordenstam, Marcus and Bullock, David},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/007-016}
}
@inproceedings{10.2312/SCA03/017-027,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{Interactive Physically Based Solid Dynamics}},
  author = {Hauth, M. and Groß, J. and Straßer, W.},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/017-027}
}
@inproceedings{10.2312/SCA03/028-036,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{Simulation of Clothing with Folds and Wrinkles}},
  author = {Bridson, R. and Marino, S. and Fedkiw, R.},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/028-036}
}
@inproceedings{10.2312/SCA03/052-061,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{Feel the 'Fabric': An Audio-Haptic Interface}},
  author = {Huang, G. and Metaxas, D. and Govindaraj, M.},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/052-061}
}
@inproceedings{10.2312/SCA03/037-051,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{Estimating Cloth Simulation Parameters from Video}},
  author = {Bhat, Kiran S. and Twigg, Christopher D. and Hodgins, Jessica K. and Khosla, Pradeep K. and Popovic, Zoran and Seitz, Steven M.},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/037-051}
}
@inproceedings{10.2312/SCA03/068-074,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{Finite Volume Methods for the Simulation of Skeletal Muscle}},
  author = {Teran, J. and Blemker, S. and Ng-Thow-Hing, V. and Fedkiw, R.},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/068-074}
}
@inproceedings{10.2312/SCA03/075-085,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{Blowing in the Wind}},
  author = {Wei, Xiaoming and Zhao, Ye and Fan, Zhe and Li, Wei and Yoakum-Stover, Suzanne and Kaufman, Arie},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/075-085}
}
@inproceedings{10.2312/SCA03/062-067,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{Discrete Shells}},
  author = {Grinspun, Eitan and Hirani, Anil N. and Desbrun, Mathieu and Schröder, Peter},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/062-067}
}
@inproceedings{10.2312/SCA03/086-097,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{Visual Simulation of Ice Crystal Growth}},
  author = {Kim, Theodore and Lin, Ming C.},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/086-097}
}
@inproceedings{10.2312/SCA03/098-109,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{Construction and Animation of Anatomically Based Human Hand Models}},
  author = {Albrecht, Irene and Haber, Jörg and Seidel, Hans-Peter},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/098-109}
}
@inproceedings{10.2312/SCA03/120-125,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{Synthesizing Animatable Body Models with Parameterized Shape Modifications}},
  author = {Seo, Hyewon and Cordier, Frederic and Magnenat-Thalmann, Nadia},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/120-125}
}
@inproceedings{10.2312/SCA03/110-119,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{Handrix: Animating the Human Hand}},
  author = {ElKoura, George and Singh, Karan},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/110-119}
}
@inproceedings{10.2312/SCA03/126-135,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{Dynapack: Space-Time compression of the 3D animations of triangle meshes with fixed connectivity}},
  author = {Ibarria, Lawrence and Rossignac, Jarek},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/126-135}
}
@inproceedings{10.2312/SCA03/136-146,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{Geometry Videos: A New Representation for 3D Animations}},
  author = {Briceño, Hector M. and Sander, Pedro V. and McMillan, Leonard and Gortler, Steven and Hoppe, Hugues},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/136-146}
}
@inproceedings{10.2312/SCA03/154-159,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{Particle-Based Fluid Simulation for Interactive Applications}},
  author = {Müller, Matthias and Charypar, David and Gross, Markus},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/154-159}
}
@inproceedings{10.2312/SCA03/147-153,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{Advected Textures}},
  author = {Neyret, Fabrice},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/147-153}
}
@inproceedings{10.2312/SCA03/160-166,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{A Real-Time Cloud Modeling, Rendering, and Animation System}},
  author = {Schpok, Joshua and Simons, Joseph and Ebert, David S. and Hansen, Charles},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/160-166}
}
@inproceedings{10.2312/SCA03/167-176,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{An Example-Based Approach for Facial Expression Cloning}},
  author = {Pyun, Hyewon and Kim, Yejin and Chae, Wonseok and Kang, Hyung Woo and Shin, Sung Yong},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/167-176}
}
@inproceedings{10.2312/SCA03/187-192,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{Learning Controls for Blend Shape Based Realistic Facial Animation}},
  author = {Joshi, Pushkar and Tien, Wen C. and Desbrun, Mathieu and Pighin, Frédéric},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/187-192}
}
@inproceedings{10.2312/SCA03/177-186,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{Geometry-Driven Photorealistic Facial Expression Synthesis}},
  author = {Zhang, Qingshan and Liu, Zicheng and Guo, Baining and Shum, Harry},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/177-186}
}
@inproceedings{10.2312/SCA03/193-206,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{Vision-based Control of 3D Facial Animation}},
  author = {Chai, Jin-xiang and Xiao, Jing and Hodgins, Jessica},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/193-206}
}
@inproceedings{10.2312/SCA03/214-224,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{Flexible Automatic Motion Blending with Registration Curves}},
  author = {Kovar, Lucas and Gleicher, Michael},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/214-224}
}
@inproceedings{10.2312/SCA03/207-213,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{Adaptive Wisp Tree - a multiresolution control structure for simulating dynamic clustering in hair motion}},
  author = {Bertails, F. and Kim, T-Y. and Cani, M-P. and Neumann, U.},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/207-213}
}
@inproceedings{10.2312/SCA03/239-244,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{Aesthetic Edits For Character Animation}},
  author = {Neff, Michael and Fiume, Eugene},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/239-244}
}
@inproceedings{10.2312/SCA03/225-231,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{Unsupervised Learning for Speech Motion Editing}},
  author = {Cao, Yong and Faloutsos, Petros and Pighin, Frédéric},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/225-231}
}
@inproceedings{10.2312/SCA03/232-238,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{An Evaluation of a Cost Metric for Selecting Transitions between Motion Segments}},
  author = {Wang, Jing and Bodenheimer, Bobby},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/232-238}
}
@inproceedings{10.2312/SCA03/258-264,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{A 2-Stages Locomotion Planner for Digital Actors}},
  author = {Pettré, Julien and Laumond, Jean-Paul and Siméon, Thierry},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/258-264}
}
@inproceedings{10.2312/SCA03/251-257,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{Trackable Surfaces}},
  author = {Guskov, Igor and Klibanov, Sergey and Bryant, Benjamin},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/251-257}
}
@inproceedings{10.2312/SCA03/265-275,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{A Scenario Language to orchestrate Virtual World Evolution}},
  author = {Devillers, Frédéric and Donikian, Stéphane},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/265-275}
}
@inproceedings{10.2312/SCA03/286-297,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{Constrained Animation of Flocks}},
  author = {Anderson, Matt and McDaniel, Eric and Chenney, Stephen},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/286-297}
}
@inproceedings{10.2312/SCA03/245-250,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{Mapping optical motion capture data to skeletal motion using a physical model}},
  author = {Zordan, Victor B. and Van Der Horst, Nicholas C.},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/245-250}
}
@inproceedings{10.2312/SCA03/276-285,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{Generating Flying Creatures using Body-Brain Co-Evolution}},
  author = {Shim, Yoon-Sik and Kim, Chang-Hun},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/276-285}
}
@inproceedings{10.2312/SCA03/298-308,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{On Creating Animated Presentations}},
  author = {Zongker, Douglas E. and Salesin, David H.},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/298-308}
}
@inproceedings{10.2312/SCA03/320-328,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{A Sketching Interface for Articulated Figure Animation}},
  author = {Davis, James and Agrawala, Maneesh and Chuang, Erika and Popovic, Zoran and Salesin, David},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/320-328}
}
@inproceedings{10.2312/SCA03/339-348,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{Interactive Control of Component-based Morphing}},
  author = {Zhao, Yonghong and Ong, Hong-Yang and Tan, Tiow-Seng and Xiao, Yongguan},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/339-348}
}
@inproceedings{10.2312/SCA03/309-319,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{Stylizing Motion with Drawings}},
  author = {Li, Yin and Gleicher, Michael and Xu, Ying-Qing and Shum, Heung-Yeung},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/309-319}
}
@inproceedings{10.2312/SCA03/329-338,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{FootSee: an Interactive Animation System}},
  author = {Yin, KangKang and Pai, Dinesh K.},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/329-338}
}
@inproceedings{10.2312/SCA03/349-356,
  booktitle = {Symposium on Computer Animation},
  editor = {D. Breen and M. Lin},
  title = {{Sound-by-Numbers: Motion-Driven Sound Synthesis}},
  author = {Cardle, M. and Brooks, S. and Bar-Joseph, Z. and Robinson, P.},
  year = {2003},
  publisher = {The Eurographics Association},
  ISSN = {1727-5288},
  ISBN = {1-58113-659-5},
  DOI = {10.2312/SCA03/349-356}
}

Recent Submissions

  • Item
    A Practical Dynamics System
    (The Eurographics Association, 2003) Kacic-Alesic, Zoran; Nordenstam, Marcus; Bullock, David; D. Breen and M. Lin
    We present an effective production-proven dynamics system. It uses an explicit time differencing method that is efficient, reasonably accurate, conditionally stable, and above all simple to implement. We describe issues related to integration of physically based simulation techniques into an interactive animation system, present a high level description of the architecture of the system, report on techniques that work, and provide observations that may seem obvious, but only in retrospect. Applications include rigid and deformable body dynamics, particle dynamics, and at a basic level, hair and cloth simulation.
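The abstract names an explicit, conditionally stable time-differencing method without specifying it. As an illustration only, a semi-implicit (symplectic) Euler step is a minimal sketch of that class of integrator; it is not the authors' production system, and all names below are hypothetical.

```python
# Sketch of an explicit, conditionally stable particle integrator
# (semi-implicit Euler): simple to implement, accurate enough for
# animation, stable only for sufficiently small dt.

def step(positions, velocities, forces, masses, dt):
    """Advance a 1-D particle system by one explicit time step."""
    # Update velocities from forces first, then positions from the
    # new velocities (this ordering is what makes the scheme symplectic).
    new_vel = [v + dt * f / m for v, f, m in zip(velocities, forces, masses)]
    new_pos = [p + dt * v for p, v in zip(positions, new_vel)]
    return new_pos, new_vel

# One particle under gravity, dt = 0.1 s.
pos, vel = step([0.0], [0.0], [-9.8], [1.0], 0.1)
```

The conditional stability the abstract mentions shows up here directly: the scheme diverges if `dt` is large relative to the stiffest force in the system.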
  • Item
    Interactive Physically Based Solid Dynamics
    (The Eurographics Association, 2003) Hauth, M.; Groß, J.; Straßer, W.; D. Breen and M. Lin
    The interactive simulation of deformable solids has become a major working area in computer graphics. We present a sophisticated material law better suited to dynamic computation than the standard approaches. As an important example, it is employed to reproduce measured material data from biological soft tissue. We embed it into a state-of-the-art finite element setting employing an adaptive basis. For time integration, we propose an explicit stabilized Runge-Kutta method.
  • Item
    Simulation of Clothing with Folds and Wrinkles
    (The Eurographics Association, 2003) Bridson, R.; Marino, S.; Fedkiw, R.; D. Breen and M. Lin
    Clothing is a fundamental part of a character's persona, a key storytelling tool used to convey an intended impression to the audience. Draping, folding, wrinkling, stretching, etc. all convey meaning, and thus each is carefully controlled when filming live actors. When making films with computer simulated cloth, these subtle but important elements must be captured. In this paper we present several methods essential to matching the behavior and look of clothing worn by digital stand-ins to their real world counterparts. Novel contributions include a mixed explicit/implicit time integration scheme, a physically correct bending model with (potentially) nonzero rest angles for pre-shaping wrinkles, an interface forecasting technique that promotes the development of detail in contact regions, a post-processing method for treating cloth-character collisions that preserves folds and wrinkles, and a dynamic constraint mechanism that helps to control large scale folding. The common goal of all these techniques is to produce a cloth simulation with many folds and wrinkles, improving realism.
  • Item
    Feel the 'Fabric': An Audio-Haptic Interface
    (The Eurographics Association, 2003) Huang, G.; Metaxas, D.; Govindaraj, M.; D. Breen and M. Lin
    An objective fabric modeling system should convey not only visual but also haptic and audio sensory feedback to remote/internet users via an audio-haptic interface. In this paper we develop a fabric surface property modeling system consisting of stylus-based modeling of a fabric's characteristic sound and an audio-haptic interface. Using a stylus, people can perceive a fabric's surface roughness, friction, and softness, though not as precisely as with their bare fingers. The audio-haptic interface is intended to simulate "feeling a virtually fixed fabric via a rigid stylus" using the PHANToM haptic interface. We develop a DFFT-based correlation-restoration method to model the surface roughness and friction coefficient of a fabric, and a physically based method to model the sound of a fabric when rubbed by a stylus. The audio-haptic interface, which renders synchronized auditory and haptic stimuli when the virtual stylus rubs on the surface of a virtual fabric, is implemented in Visual C++ 6.0 using OpenGL and the PHANToM GHOST SDK. Subjects who tested our audio-haptic interface were able to rank the surface properties of virtual fabrics in the correct order. We show that the virtual fabric is a good model of its real counterpart.
  • Item
    Estimating Cloth Simulation Parameters from Video
    (The Eurographics Association, 2003) Bhat, Kiran S.; Twigg, Christopher D.; Hodgins, Jessica K.; Khosla, Pradeep K.; Popovic, Zoran; Seitz, Steven M.; D. Breen and M. Lin
    Cloth simulations are notoriously difficult to tune due to the many parameters that must be adjusted to achieve the look of a particular fabric. In this paper, we present an algorithm for estimating the parameters of a cloth simulation from video data of real fabric. A perceptually motivated metric based on matching between folds is used to compare video of real cloth with simulation. This metric compares two video sequences of cloth and returns a number that measures the differences in their folds. Simulated annealing is used to minimize the frame-by-frame metric between a given simulation and the real-world footage. To estimate all the cloth parameters, we identify simple static and dynamic calibration experiments that use small swatches of the fabric. To demonstrate the power of this approach, we use our algorithm to find the parameters for four different fabrics. We show the match between the video footage and simulated motion on the calibration experiments, on new video sequences for the swatches, and on a simulation of a full skirt.
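The optimization loop described in this abstract can be sketched generically. The annealer below is a minimal illustration, not the authors' implementation: the toy quadratic `metric` merely stands in for their perceptual fold metric, and the cooling schedule and step size are arbitrary assumptions.

```python
import math
import random

def anneal(metric, x0, steps=2000, temp0=1.0, seed=0):
    """Minimize metric(x) over a scalar parameter by simulated annealing."""
    rng = random.Random(seed)
    x, best = x0, x0
    for i in range(steps):
        temp = temp0 * (1.0 - i / steps) + 1e-9   # linear cooling schedule
        cand = x + rng.gauss(0.0, 0.5)            # perturb current parameter
        delta = metric(cand) - metric(x)
        # Always accept improvements; accept regressions with Boltzmann
        # probability so the search can escape local minima.
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = cand
        if metric(x) < metric(best):
            best = x
    return best

# Toy stand-in metric with its minimum at parameter value 3.
best = anneal(lambda x: (x - 3.0) ** 2, x0=0.0)
```

In the paper's setting, evaluating the metric means running a full cloth simulation and comparing its folds against the footage, which is why the calibration experiments use small swatches.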
  • Item
    Finite Volume Methods for the Simulation of Skeletal Muscle
    (The Eurographics Association, 2003) Teran, J.; Blemker, S.; Ng-Thow-Hing, V.; Fedkiw, R.; D. Breen and M. Lin
    Since it relies on a geometrical rather than a variational framework, many find the finite volume method (FVM) more intuitive than the finite element method (FEM). We show that the FVM allows one to interpret the stress inside a tetrahedron as a simple 'multidimensional force' pushing on each face. Moreover, this interpretation leads to a heuristic method for calculating the force on each node, which is as simple to implement and comprehend as masses and springs. In the finite volume spirit, we also present a geometric rather than interpolating-function definition of strain. We use the FVM and a quasi-incompressible, transversely isotropic, hyperelastic constitutive model to simulate contracting muscle tissue. B-spline solids are used to model fiber directions, and the muscle activation levels are derived from key frame animations.
  • Item
    Blowing in the Wind
    (The Eurographics Association, 2003) Wei, Xiaoming; Zhao, Ye; Fan, Zhe; Li, Wei; Yoakum-Stover, Suzanne; Kaufman, Arie; D. Breen and M. Lin
    We present an approach for simulating the natural dynamics that emerge from the coupling of a flow field to lightweight, mildly deformable objects immersed within it. We model the flow field using a Lattice Boltzmann Model (LBM) extended with a subgrid model and accelerate the computation on commodity graphics hardware to achieve real-time simulations. We demonstrate our approach using soap bubbles and a feather blown by wind fields, yet our approach is general enough to apply to other lightweight objects. The soap bubbles illustrate Fresnel reflection, reveal the dynamics of the unseen flow field in which they travel, and display spherical harmonics in their undulations. The free feather floats and flutters in response to lift and drag forces. Our single bubble simulation allows the user to directly interact with the wind field and thereby influence the dynamics in real time.
  • Item
    Discrete Shells
    (The Eurographics Association, 2003) Grinspun, Eitan; Hirani, Anil N.; Desbrun, Mathieu; Schröder, Peter; D. Breen and M. Lin
    In this paper we introduce a discrete shell model describing the behavior of thin flexible structures, such as hats, leaves, and aluminum cans, which are characterized by a curved undeformed configuration. Previously such models required complex continuum mechanics formulations and correspondingly complex algorithms. We show that a simple shell model can be derived geometrically for triangle meshes and implemented quickly by modifying a standard cloth simulator. Our technique convincingly simulates a variety of curved objects with materials ranging from paper to metal, as we demonstrate with several examples including a comparison of a real and simulated falling hat.
  • Item
    Visual Simulation of Ice Crystal Growth
    (The Eurographics Association, 2003) Kim, Theodore; Lin, Ming C.; D. Breen and M. Lin
    The beautiful, branching structure of ice is one of the most striking visual phenomena of the winter landscape, yet modeling this effect has received little study in computer graphics. In this paper, we present a novel approach for visual simulation of ice growth. We use a numerical simulation technique from computational physics, the "phase field method", and modify it to allow aesthetic manipulation of ice crystal growth. We present acceleration techniques to achieve interactive simulation performance, as well as a novel geometric sharpening algorithm that removes some of the smoothing artifacts from the implicit representation. We have successfully applied this approach to generate ice crystal growth on 3D object surfaces in several scenes.
  • Item
    Construction and Animation of Anatomically Based Human Hand Models
    (The Eurographics Association, 2003) Albrecht, Irene; Haber, Jörg; Seidel, Hans-Peter; D. Breen and M. Lin
    The human hand is a masterpiece of mechanical complexity, able to perform fine motor manipulations and powerful work alike. Designing an animatable human hand model that features the abilities of the archetype created by Nature requires a great deal of anatomical detail to be modeled. In this paper, we present a human hand model with underlying anatomical structure. Animation of the hand model is controlled by muscle contraction values. We employ a physically based hybrid muscle model to convert these contraction values into movement of skin and bones. Pseudo muscles directly control the rotation of bones based on anatomical data and mechanical laws, while geometric muscles deform the skin tissue using a mass-spring system. Thus, resulting animations automatically exhibit anatomically and physically correct finger movements and skin deformations. In addition, we present a deformation technique to create individual hand models from photographs. A radial basis warping function is set up from the correspondence of feature points and applied to the complete structure of the reference hand model, making the deformed hand model instantly animatable.
  • Item
    Synthesizing Animatable Body Models with Parameterized Shape Modifications
    (The Eurographics Association, 2003) Seo, Hyewon; Cordier, Frederic; Magnenat-Thalmann, Nadia; D. Breen and M. Lin
    Based on an existing modeller that can generate realistic and controllable whole-body models, we introduce a modifier synthesizer that provides higher-level manipulation of body models through parameters such as fat percentage and hip-to-waist ratio. Users are assisted in automatically modifying an existing model by controlling the parameters provided. On any synthesized model, the underlying bone and skin structure is properly adjusted, so that the model remains completely animatable using the underlying skeleton. Based on statistical analysis of data models, we demonstrate the use of body attributes as parameters in controlling the shape modification of the body models while maintaining the distinctiveness of the individual as much as possible.
  • Item
    Handrix: Animating the Human Hand
    (The Eurographics Association, 2003) ElKoura, George; Singh, Karan; D. Breen and M. Lin
    The human hand is a complex organ capable of both gross grasp and fine motor skills. Despite many successful high-level skeletal control techniques, animating realistic hand motion remains tedious and challenging. This paper presents research motivated by the complex finger positioning required to play musical instruments, such as the guitar. We first describe a data driven algorithm to add sympathetic finger motion to arbitrarily animated hands. We then present a procedural algorithm to generate the motion of the fretting hand playing a given musical passage on a guitar. This work is intended as a tool for music education and analysis. The contributions of this paper are a general architecture for the skeletal control of interdependent articulations performing multiple concurrent reaching tasks, and a procedural tool for musicians and animators that captures the motion complexity of guitar fingering.
  • Item
    Dynapack: Space-Time compression of the 3D animations of triangle meshes with fixed connectivity
    (The Eurographics Association, 2003) Ibarria, Lawrence; Rossignac, Jarek; D. Breen and M. Lin
    Dynapack exploits space-time coherence to compress the consecutive frames of 3D animations of triangle meshes with constant connectivity. Instead of compressing each frame independently (space-only compression) or compressing the trajectory of each vertex independently (time-only compression), we predict the position of each vertex v of frame f from three of its neighbors in frame f and from the positions of v and of these neighbors in the previous frame (space-time compression). We introduce here two extrapolating space-time predictors: the ELP extension of the Lorenzo predictor, developed originally for compressing regularly sampled 4D data sets, and the Replica predictor. ELP may be computed using only additions and subtractions of points and is a perfect predictor for portions of the animation undergoing pure translations. The Replica predictor is slightly more expensive to compute, but is a perfect predictor for arbitrary combinations of translations, rotations, and uniform scaling. For the typical 3D animations that we have compressed, the corrections between the actual and predicted vertex coordinates may be compressed using entropy coding down to an average ranging between 1.37 and 2.91 bits, when the quantization used ranges between 7 and 13 bits. In comparison, space-only compression yields a range of 1.90 to 7.19 bits per coordinate, and time-only compression yields a range of 1.77 to 6.91 bits per coordinate. The implementation of Dynapack compression and decompression is trivial and extremely fast. It performs a sweep through the animation, accessing only two consecutive frames at a time. It is therefore particularly well suited for real-time and out-of-core compression, and for streaming decompression.
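    As an illustration of the space-time prediction idea described in the abstract, here is a minimal sketch (not the exact ELP stencil, which uses the full Lorenzo neighborhood) that predicts a vertex from its own previous position plus the average displacement of three neighbors; like ELP, it is exact for pure translations and uses essentially only additions and subtractions:

```python
import numpy as np

def predict_vertex(v_prev, n_curr, n_prev):
    """Predict a vertex position at frame f from space-time neighbors.

    v_prev : (3,) position of the vertex at frame f-1
    n_curr : (3, 3) positions of three mesh neighbors at frame f
    n_prev : (3, 3) the same three neighbors at frame f-1

    Adds the neighbors' average displacement to the vertex's previous
    position; exact for pure translations between the two frames.
    """
    return v_prev + (n_curr - n_prev).mean(axis=0)

# A translating region is predicted exactly, so the correction
# (the value that gets entropy-coded) is zero.
t = np.array([0.5, -1.0, 2.0])          # per-frame translation
n_prev = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
v_prev = np.array([1.0, 1.0, 0.0])
residual = (v_prev + t) - predict_vertex(v_prev, n_prev + t, n_prev)
```

    Under richer motions the residual is nonzero and must be encoded; the paper's Replica predictor extends exactness to combined translations, rotations, and uniform scaling.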
  • Item
    Geometry Videos: A New Representation for 3D Animations
    (The Eurographics Association, 2003) Briceño, Hector M.; Sander, Pedro V.; McMillan, Leonard; Gortler, Steven; Hoppe, Hugues; D. Breen and M. Lin
    We present the 'Geometry Video', a new data structure to encode animated meshes. Being able to encode animated meshes in a generic, source-independent format allows people to share experiences. Changing the viewpoint allows more interaction than the fixed view supported by 2D video. Geometry videos are based on the 'Geometry Image' mesh representation introduced by Gu et al. [4]. Our novel data structure provides a way to treat an animated mesh as a video sequence (i.e., a 3D image) and is well suited for network streaming. This representation also offers the possibility of applying and adapting existing, mature video processing and compression techniques (such as MPEG encoding) to animated meshes. This paper describes an algorithm to generate geometry videos from animated meshes. The main insight of this paper is that Geometry Videos re-sample and re-organize the geometry information in such a way that it becomes very compressible. They provide a unified and intuitive method for level-of-detail control, both in terms of mesh resolution (by scaling the two spatial dimensions) and of frame rate (by scaling the temporal dimension). Geometry Videos have a very uniform and regular structure. Their resource and computational requirements can be calculated exactly, making them suitable also for applications requiring level-of-service guarantees.
  • Item
    Particle-Based Fluid Simulation for Interactive Applications
    (The Eurographics Association, 2003) Müller, Matthias; Charypar, David; Gross, Markus; D. Breen and M. Lin
    Realistically animated fluids can add substantial realism to interactive applications such as virtual surgery simulators or computer games. In this paper we propose an interactive method based on Smoothed Particle Hydrodynamics (SPH) to simulate fluids with free surfaces. The method is an extension of the SPH-based technique by Desbrun to animate highly deformable bodies. We gear the method towards fluid simulation by deriving the force density fields directly from the Navier-Stokes equation and by adding a term to model surface tension effects. In contrast to Eulerian grid-based approaches, the particle-based approach makes mass conservation equations and convection terms dispensable which reduces the complexity of the simulation. In addition, the particles can directly be used to render the surface of the fluid. We propose methods to track and visualize the free surface using point splatting and marching cubes-based surface reconstruction. Our animation method is fast enough to be used in interactive systems and to allow for user interaction with models consisting of up to 5000 particles.
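    To make the particle-based formulation concrete, here is a minimal SPH density-summation sketch. The kernel is the commonly cited poly6 form; the particle mass and support radius used below are illustrative values, not parameters from the paper:

```python
import numpy as np

def poly6(r, h):
    """Poly6 smoothing kernel, commonly used for SPH density estimation;
    zero outside the support radius h."""
    w = np.zeros_like(r)
    inside = r <= h
    w[inside] = 315.0 / (64.0 * np.pi * h**9) * (h**2 - r[inside]**2) ** 3
    return w

def densities(positions, mass, h):
    """SPH density at each particle: rho_i = sum_j m * W(|x_i - x_j|, h).

    positions : (n, 3) particle positions
    mass      : per-particle mass (assumed uniform here)
    h         : smoothing-kernel support radius
    """
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return mass * poly6(r, h).sum(axis=1)
```

    In a full simulator, pressure and viscosity force densities would be derived from such smoothed field estimates; this sketch shows only the density step.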
  • Item
    Advected Textures
    (The Eurographics Association, 2003) Neyret, Fabrice; D. Breen and M. Lin
    Game and special effects artists like to rely on textures (image-based or procedural) to specify the details of surface appearance. In this paper, we address the problem of applying textures to animated fluids. The purpose is to allow artists to increase the detail of flowing water, foam, lava, mud, flames, cloud layers, etc. Our first contribution is a new algorithm for advecting textures, which compromises between two contradictory requirements: continuity in space and time, and preservation of statistical texture properties. It consists of combining layers of advected (periodically regenerated) parameterizations according to a criterion based on the local accumulated deformation. To correctly achieve this combination, we introduce a way of blending procedural textures while avoiding classical interpolation artifacts. Lastly, we propose a scheme to add and control small-scale texture animation amplifying the low-resolution simulation. Our results illustrate how these three contributions solve the major visual flaws of textured fluids.
  • Item
    A Real-Time Cloud Modeling, Rendering, and Animation System
    (The Eurographics Association, 2003) Schpok, Joshua; Simons, Joseph; Ebert, David S.; Hansen, Charles; D. Breen and M. Lin
    Modeling and animating complex volumetric natural phenomena, such as clouds, is a difficult task. Most systems are difficult to use, require adjustment of numerous, complex parameters, and are non-interactive. Therefore, we have developed an intuitive, interactive system to artistically model, animate, and render visually convincing volumetric clouds using modern consumer graphics hardware. Our natural, high-level interface models volumetric clouds through the use of qualitative cloud attributes. The animation of the implicit skeletal structures and independent transformation of octaves of noise emulate various environmental conditions. The resulting interactive design, rendering, and animation system produces perceptually convincing volumetric cloud models that can be used in interactive systems or exported for higher quality offline rendering.
  • Item
    An Example-Based Approach for Facial Expression Cloning
    (The Eurographics Association, 2003) Pyun, Hyewon; Kim, Yejin; Chae, Wonseok; Kang, Hyung Woo; Shin, Sung Yong; D. Breen and M. Lin
    In this paper, we present a novel example-based approach for cloning facial expressions of a source model to a target model while reflecting the characteristic features of the target model in the resulting animation. Our approach comprises three major parts: key-model construction, parameterization, and expression blending. We first present an effective scheme for constructing key-models. Given a set of source example key-models and their corresponding target key-models created by animators, we parameterize the target key-models using the source key-models and predefine the weight functions for the parameterized target key-models based on radial basis functions. At runtime, given an input model with some facial expression, we compute the parameter vector of the corresponding output model to evaluate the weight values for the target key-models, and obtain the output model by blending the target key-models with those weights. The resulting animation preserves the facial expressions of the input model as well as the characteristic features of the target model specified by animators. Our method is not only simple and accurate but also fast enough for various real-time applications such as video games or internet broadcasting.
  • Item
    Learning Controls for Blend Shape Based Realistic Facial Animation
    (The Eurographics Association, 2003) Joshi, Pushkar; Tien, Wen C.; Desbrun, Mathieu; Pighin, Frédéric; D. Breen and M. Lin
    Blend shape animation is the method of choice for keyframe facial animation: a set of blend shapes (key facial expressions) is used to define a linear space of facial expressions. However, in order to capture a significant range of complexity of human expressions, blend shapes need to be segmented into smaller regions where key idiosyncrasies of the face being animated are present. Performing this segmentation by hand requires skill and a lot of time. In this paper, we propose an automatic, physically-motivated segmentation that learns the controls and parameters directly from the set of blend shapes. We show the usefulness and efficiency of this technique for both motion-capture animation and keyframing. We also provide a rendering algorithm to enhance the visual realism of a blend shape model.
  • Item
    Geometry-Driven Photorealistic Facial Expression Synthesis
    (The Eurographics Association, 2003) Zhang, Qingshan; Liu, Zicheng; Guo, Baining; Shum, Harry; D. Breen and M. Lin
    Expression mapping (also called performance-driven animation) has been a popular method to generate facial animations. One shortcoming of this method is that it does not generate expression details such as wrinkles due to skin deformation. In this paper, we provide a solution to this problem. We have developed a geometry-driven facial expression synthesis system. Given the feature point positions (geometry) of a facial expression, our system automatically synthesizes the corresponding expression image, which has photorealistic and natural-looking expression details. Since the number of feature points required by the synthesis system is in general more than what is available from the performer, due to the difficulty of tracking, we have developed a technique to infer the feature point motions from a subset by using an example-based approach. Another application of our system is expression editing, where the user drags the feature points while the system interactively generates facial expressions with skin deformation details.
  • Item
    Vision-based Control of 3D Facial Animation
    (The Eurographics Association, 2003) Chai, Jin-xiang; Xiao, Jing; Hodgins, Jessica; D. Breen and M. Lin
    Controlling and animating the facial expression of a computer-generated 3D character is a difficult problem because the face has many degrees of freedom while most available input devices have few. In this paper, we show that a rich set of lifelike facial actions can be created from a preprocessed motion capture database and that a user can control these actions by acting out the desired motions in front of a video camera. We develop a real-time facial tracking system to extract a small set of animation control parameters from video. Because of the nature of video data, these parameters may be noisy, low-resolution, and contain errors. The system uses the knowledge embedded in motion capture data to translate these low-quality 2D animation control signals into high-quality 3D facial expressions. To adapt the synthesized motion to a new character model, we introduce an efficient expression retargeting technique whose run-time computation is constant independent of the complexity of the character model. We demonstrate the power of this approach through two users who control and animate a wide range of 3D facial expressions of different avatars.
  • Item
    Flexible Automatic Motion Blending with Registration Curves
    (The Eurographics Association, 2003) Kovar, Lucas; Gleicher, Michael; D. Breen and M. Lin
    Many motion editing algorithms, including transitioning and multitarget interpolation, can be represented as instances of a more general operation called motion blending. We introduce a novel data structure called a registration curve that expands the class of motions that can be successfully blended without manual input. Registration curves achieve this by automatically determining relationships involving the timing, local coordinate frame, and constraints of the input motions. We show how registration curves improve upon existing automatic blending methods and demonstrate their use in common blending operations.
  • Item
    Adaptive Wisp Tree - a multiresolution control structure for simulating dynamic clustering in hair motion
    (The Eurographics Association, 2003) Bertails, F.; Kim, T-Y.; Cani, M-P.; Neumann, U.; D. Breen and M. Lin
    Realistic animation of long human hair is difficult due to the number of hair strands and to the complexity of their interactions. Existing methods remain limited to smooth, uniform, and relatively simple hair motion. We present a powerful adaptive approach to modeling the dynamic clustering behavior that characterizes complex long-hair motion. The Adaptive Wisp Tree (AWT) is a novel control structure that approximates the large-scale coherent motion of hair clusters as well as the small-scale variation of individual hair strands. The AWT also aids computational efficiency by identifying regions where visible hair motions are likely to occur. The AWT is coupled with a multiresolution geometry used to define the initial hair model. This combined system produces stable animations that exhibit the natural effects of clustering and mutual hair interaction. Our results show that the method is applicable to a wide variety of hairstyles.
  • Item
    Aesthetic Edits For Character Animation
    (The Eurographics Association, 2003) Neff, Michael; Fiume, Eugene; D. Breen and M. Lin
    The utility of an interactive tool can be measured by how pervasively it is embedded into a user's work flow. Tools for artists must additionally provide an appropriate level of control over the expressive aspects of their work while suppressing unwanted intrusions from details that are, for the moment, unnecessary. Our focus is on tools for editing the expressive aspects of character motion. These tools allow animators to work in a way that is more expedient than modifying low-level details, and offer finer control than high-level, directorial approaches. To illustrate this approach, we present three such tools: one for varying timing (succession), and two for varying motion shape (amplitude and extent). Succession editing allows the animator to vary the activation times of the joints in the motion. Amplitude editing allows the animator to vary the joint ranges covered during a motion. Extent editing allows an animator to vary how fully a character occupies space during a movement, using space freely or keeping the movement close to its body. We argue that such editing tools can be fully embedded in the workflow of character animators. We present a general animation system in which these and other edits can be defined programmatically. Working in a general pose or keyframe framework, either kinematic or dynamic motion can be generated. This system is extensible to include an arbitrary set of movement edits.
  • Item
    Unsupervised Learning for Speech Motion Editing
    (The Eurographics Association, 2003) Cao, Yong; Faloutsos, Petros; Pighin, Frédéric; D. Breen and M. Lin
    We present a new method for editing speech related facial motions. Our method uses an unsupervised learning technique, Independent Component Analysis (ICA), to extract a set of meaningful parameters without any annotation of the data. With ICA, we are able to solve a blind source separation problem and describe the original data as a linear combination of two sources. One source captures content (speech) and the other captures style (emotion). By manipulating the independent components we can edit the motions in intuitive ways.
  • Item
    An Evaluation of a Cost Metric for Selecting Transitions between Motion Segments
    (The Eurographics Association, 2003) Wang, Jing; Bodenheimer, Bobby; D. Breen and M. Lin
    Designing a rich repertoire of behaviors for virtual humans is an important problem for virtual environments and computer games. One approach to designing such a repertoire is to collect motion capture data and pre-process it to form a structure that can be walked in various orders to re-sequence the data in new ways. In such an approach, identifying the location of good transition points in the motion stream is critical. In this paper, we evaluate the cost function described by Lee et al. [15] for determining such transition points. Lee et al. proposed an original set of weights for their metric. We compute a set of optimal weights for the cost function using a constrained least-squares technique. The weights are then evaluated in two ways: first, through a cross-validation study, and second, through a medium-scale user study. The cross-validation shows that the optimized weights are robust and work for a wide variety of behaviors. The user study demonstrates that the optimized weights select more appealing transition points than the original weights.
  • Item
    A 2-Stages Locomotion Planner for Digital Actors
    (The Eurographics Association, 2003) Pettré, Julien; Laumond, Jean-Paul; Siméon, Thierry; D. Breen and M. Lin
    This paper presents a solution to the locomotion planning problem for digital actors. The solution is based both on probabilistic motion planning and on motion capture blending and warping. The paper describes the various components of our solution, from the initial path planning to the final animation step. An example illustrates the progression of the animation construction throughout the presentation.
  • Item
    Trackable Surfaces
    (The Eurographics Association, 2003) Guskov, Igor; Klibanov, Sergey; Bryant, Benjamin; D. Breen and M. Lin
    We introduce a novel approach for real-time non-rigid surface acquisition based on tracking quad marked surfaces. The color-identified quad arrangement allows for automatic feature correspondence, tracking initialization, and simplifies 3D reconstruction. We present a prototype implementation of our approach together with several examples of acquired surface motions.
  • Item
    A Scenario Language to orchestrate Virtual World Evolution
    (The Eurographics Association, 2003) Devillers, Frédéric; Donikian, Stéphane; D. Breen and M. Lin
    Behavioural animation techniques provide autonomous characters with the ability to react credibly in interactive simulations. Directing these autonomous agents is inherently complex. Typically, simulations evolve according to the reactive and cognitive behaviours of autonomous agents, and this free flow of actions makes it difficult to precisely control the occurrence of desired events. In this paper, we propose a scenario language designed to support the direction of semi-autonomous characters. This language offers temporal management and character communication tools. It also allows parallelism between scenarios, and a form of competition for the reservation of characters. From a computational point of view, the language is generic: it makes no assumptions about the nature of the simulation. Lastly, the language allows a programmer to build scenarios in a variety of styles, ranging from highly directed cinema-like scripts to scenarios that merely fine-tune otherwise free streams of actions.
  • Item
    Constrained Animation of Flocks
    (The Eurographics Association, 2003) Anderson, Matt; McDaniel, Eric; Chenney, Stephen; D. Breen and M. Lin
    Group behaviors are widely used in animation, yet it is difficult to impose hard constraints on their behavior. We describe a new technique for the generation of constrained group animations that improves on existing approaches in two ways: the agents in our simulations meet exact constraints at specific times, and our simulations retain the global properties present in unconstrained motion. Users can position constraints on agents' positions at any time in the animation, or constrain the entire group to meet center of mass or shape constraints. Animations are generated in a two stage process. The first step finds an initial set of trajectories that exactly meet the constraints, but which may violate the behavior rules. The second stage samples new animations that maintain the constraints while improving the motion with respect to the underlying behavioral model. We present a range of animations created with our system.
  • Item
    Mapping optical motion capture data to skeletal motion using a physical model
    (The Eurographics Association, 2003) Zordan, Victor B.; Horst, Nicholas C. Van Der; D. Breen and M. Lin
    Motion capture has become a premiere technique for animation of humanlike characters. To facilitate its use, researchers have focused on the manipulation of data for retargeting, editing, combining, and reusing motion capture libraries. In many of these efforts, joint angles plus root trajectories are used as input, although this format requires an inherent mapping from the raw data recorded by many popular motion capture set-ups. In this paper, we propose a novel solution to this mapping problem, from the 3D marker position data recorded by optical motion capture systems to joint trajectories for a fixed limb-length skeleton, using a forward dynamic model. To accomplish the mapping, we attach virtual springs between the marker positions and the appropriate landmarks of a physical simulation, and apply resistive torques to the skeleton's joints using a simple controller. For each motion capture sample, joint-angle postures are resolved from the simulation's equilibrium state, based on the internal torques and external forces. Additional constraints, such as foot plants and hand holds, may also be treated as additional forces applied to the system and are a trivial and natural extension of the proposed technique. We present results for our approach as applied to several motion-captured behaviors.
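    The virtual-spring mapping described in the abstract can be sketched as a damped spring force pulling a landmark on the simulated body toward its recorded marker; the gains k and b below are illustrative placeholders, not values from the paper:

```python
import numpy as np

def marker_spring_force(x_body, v_body, x_marker, k=500.0, b=20.0):
    """Damped virtual spring pulling a landmark point on the simulated
    skeleton toward its recorded optical-marker position.

    x_body, v_body : position and velocity of the landmark on the body
    x_marker       : recorded 3D marker position for this frame
    k, b           : illustrative spring stiffness and damping gains
    """
    return k * (x_marker - x_body) - b * v_body

# When the landmark sits at the marker with zero velocity, the force
# vanishes: this is the equilibrium state from which joint-angle
# postures are resolved.
```

    Constraints such as foot plants can be handled the same way, as extra forces added to the system before the equilibrium is computed.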
  • Item
    Generating Flying Creatures using Body-Brain Co-Evolution
    (The Eurographics Association, 2003) Shim, Yoon-Sik; Kim, Chang-Hun; D. Breen and M. Lin
    This paper describes a system that produces double-winged flying creatures using body-brain co-evolution, without the need for complex flapping-flight aerodynamics. While artificial life techniques have been used to create a variety of virtual creatures, little work has explored flapping-winged creatures, owing to the difficulty of genetically encoding wings with limited geometric primitives, as well as to flapping-wing aerodynamics. Despite the simplicity of the system, our results show aesthetically pleasing and organic flapping-flight locomotion. A restricted list structure is used in the genotype encoding to enforce the morphological symmetry of the creatures, and is more easily handled than other data structures. The creatures evolved by this system have two symmetric flapping wings consisting of continuous triangular patches, and exhibit varied appearances and locomotion, resembling the wings of birds, butterflies, and bats, or even the imaginary wings of dragons and pterosaurs.
  • Item
    On Creating Animated Presentations
    (The Eurographics Association, 2003) Zongker, Douglas E.; Salesin, David H.; D. Breen and M. Lin
    Computers are used to display visuals for millions of live presentations each day, and yet only the tiniest fraction of these make any real use of the powerful graphics hardware available on virtually all of today's machines. In this paper, we describe our efforts toward harnessing this power to create better types of presentations: presentations that include meaningful animation as well as at least a limited degree of interactivity. Our approach has been iterative, alternating between creating animated talks using available tools, then improving the tools to better support the kinds of talks we wanted to make. Through this cyclic design process, we have identified a set of common authoring paradigms that we believe a system for building animated presentations should support. We describe these paradigms and present the latest version of our script-based system for creating animated presentations, called SLITHY. We show several examples of actual animated talks that were created and given with versions of SLITHY, including one talk presented at SIGGRAPH 2000 and four talks presented at SIGGRAPH 2002. Finally, we describe a set of design principles that we have found useful for making good use of animation in presentation.
  • Item
    A Sketching Interface for Articulated Figure Animation
    (The Eurographics Association, 2003) Davis, James; Agrawala, Maneesh; Chuang, Erika; Popovic, Zoran; Salesin, David; D. Breen and M. Lin
    We introduce a new interface for rapidly creating 3D articulated figure animation from 2D sketches of the character in the desired key frame poses. Since the exact 3D animation corresponding to a set of 2D drawings is ambiguous, we first reconstruct the possible 3D configurations and then apply a set of constraints and assumptions to present the user with the most likely 3D pose. The user can refine this candidate pose by choosing among alternate poses proposed by the system. This interface is supported by pose reconstruction and optimization methods specifically designed to work with imprecise hand-drawn figures. Our system provides a simple, intuitive, and fast interface for creating rough animations that leverages our users' existing ability to draw. The resulting keyframed sequence can be exported to commercial animation packages for interpolation and additional refinement.
  • Item
    Interactive Control of Component-based Morphing
    (The Eurographics Association, 2003) Zhao, Yonghong; Ong, Hong-Yang; Tan, Tiow-Seng; Xiao, Yongguan; D. Breen and M. Lin
    This paper presents an interactive morphing framework to empower users to conveniently and effectively control the whole morphing process. Although research on mesh morphing has reached a state where most computational problems have been solved in general, the novelty of our framework lies in the integration of global-level and local-level user control through the use of components, and the incorporation of deduction and assistance in user interaction. Given two polygonal meshes, users can choose to specify their requirements either at the global level over components or at the local level within components, whichever is more intuitive. Based on user specifications, the framework proposes several techniques to deduce implied correspondences and add assumed correspondences at both levels. The framework also supports multi-level interpolation control: users can operate on a component as a whole or on its individual vertices to specify trajectories. On the whole, in the multi-level component-based framework, users can choose to specify any number of requirements at each level, and the system can complete all other tasks to produce the final morphs. Therefore, user control is greatly enhanced, and even an amateur can use it to design morphing with ease.
  • Item
    Stylizing Motion with Drawings
    (The Eurographics Association, 2003) Li, Yin; Gleicher, Michael; Xu, Ying-Qing; Shum, Heung-Yeung; D. Breen and M. Lin
    In this paper, we provide a method that injects the expressive shape deformations common in traditional 2D animation into an otherwise rigid 3D motion captured animation. We allow a traditional animator to modify frames in the rendered animation by redrawing the key features such as silhouette curves. These changes are then integrated into the animation. To perform this integration, we divide the changes into those that can be made by altering the skeletal animation, and those that must be made by altering the character's mesh geometry. To propagate mesh changes into other frames, we introduce a new image warping technique that takes into account the character's 3D structure. The resulting technique provides a system where an animator can inject stylization into 3D animation.
  • Item
    FootSee: an Interactive Animation System
    (The Eurographics Association, 2003) Yin, KangKang; Pai, Dinesh K.; D. Breen and M. Lin
    We present an intuitive animation interface that uses a foot pressure sensor pad to interactively control avatars for video games, virtual reality, and low-cost performance-driven animation. During an offline training phase, we capture full-body motions with a motion capture system, as well as the corresponding foot-ground pressure distributions with a pressure sensor pad, into a database. At run time, the user acts out the desired animation on the pressure sensor pad. The system then tries to see the motion through only the measured foot-ground interactions; the most appropriate motions from the database are selected and edited online to drive the avatar. We describe our motion recognition, motion blending, and inverse kinematics algorithms in detail. They are easy to implement and cheap to compute. FootSee can control a virtual avatar with a fixed latency of 1 second and reasonable accuracy. Our system thus makes it possible to create interactive animations without the cost or inconveniences of a full-body motion capture system.
  • Item
    Sound-by-Numbers: Motion-Driven Sound Synthesis
    (The Eurographics Association, 2003) Cardle, M.; Brooks, S.; Bar-Joseph, Z.; Robinson, P.; D. Breen and M. Lin
    We present the first algorithm for automatically generating soundtracks for input animation based on the soundtracks of other animations. This technique can greatly simplify the production of soundtracks in computer animation and video by re-targeting existing soundtracks. A segment of source audio is used to train a statistical model, which is then used to generate variants of the original audio that fit particular constraints. These constraints can either be specified explicitly by the user in the form of large-scale properties of the sound texture, or determined automatically and semi-automatically by matching similar motion events in a source animation to those in the target animation.