SCA 03: Eurographics/SIGGRAPH Symposium on Computer Animation
Browsing by Title, showing items 1-20 of 38.
Item: A 2-Stages Locomotion Planner for Digital Actors (The Eurographics Association, 2003)
Authors: Pettré, Julien; Laumond, Jean-Paul; Siméon, Thierry
Editors: D. Breen and M. Lin
This paper presents a solution to the locomotion planning problem for digital actors, based both on probabilistic motion planning and on motion capture blending and warping. The paper describes the components of our solution, from the initial path planning to the final animation step. A running example illustrates the construction of the animation throughout the presentation.

Item: Adaptive Wisp Tree - a multiresolution control structure for simulating dynamic clustering in hair motion (The Eurographics Association, 2003)
Authors: Bertails, F.; Kim, T-Y.; Cani, M-P.; Neumann, U.
Editors: D. Breen and M. Lin
Realistic animation of long human hair is difficult due to the number of hair strands and the complexity of their interactions. Existing methods remain limited to smooth, uniform, and relatively simple hair motion. We present a powerful adaptive approach to modeling the dynamic clustering behavior that characterizes complex long-hair motion. The Adaptive Wisp Tree (AWT) is a novel control structure that approximates both the large-scale coherent motion of hair clusters and the small-scale variation of individual hair strands. The AWT also improves computational efficiency by identifying regions where visible hair motion is likely to occur. The AWT is coupled with a multiresolution geometry used to define the initial hair model. This combined system produces stable animations that exhibit the natural effects of clustering and mutual hair interaction. Our results show that the method is applicable to a wide variety of hair styles.

Item: Advected Textures (The Eurographics Association, 2003)
Authors: Neyret, Fabrice
Editors: D. Breen and M. Lin
Game and special-effects artists like to rely on textures (image-based or procedural) to specify the details of surface appearance. In this paper, we address the problem of applying textures to animated fluids, allowing artists to add detail to flowing water, foam, lava, mud, flames, cloud layers, etc. Our first contribution is a new algorithm for advecting textures that compromises between two contradictory requirements: continuity in space and time, and preservation of statistical texture properties. It consists of combining layers of advected (periodically regenerated) parameterizations according to a criterion based on the local accumulated deformation. To achieve this combination correctly, we introduce a way of blending procedural textures while avoiding classical interpolation artifacts. Lastly, we propose a scheme to add and control small-scale texture animation that amplifies the low-resolution simulation. Our results illustrate how these three contributions solve the major visual flaws of textured fluids.
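A minimal sketch of the layered-advection idea from "Advected Textures" above: texture coordinates are carried along the flow by semi-Lagrangian backtracing, several layers are regenerated at staggered times, and lookups through the layers are blended. The data layout, the nearest-neighbor sampling, and the per-layer weights are illustrative assumptions; the paper's actual blending criterion is driven by the local accumulated deformation and avoids interpolation artifacts in ways not reproduced here.

```python
import numpy as np

def advect_uv(uv, vel, dt):
    """Semi-Lagrangian advection of a texture parameterization.

    uv:  (H, W, 2) texture coordinates carried by the fluid
    vel: (H, W, 2) velocity field, in grid cells per unit time
    """
    h, w = uv.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Backtrace: look up where the material at each cell came from.
    sx = np.clip(xs - dt * vel[..., 0], 0, w - 1).round().astype(int)
    sy = np.clip(ys - dt * vel[..., 1], 0, h - 1).round().astype(int)
    return uv[sy, sx]

def blend_layers(tex, uv_layers, weights):
    """Blend texture lookups through several advected UV layers.

    Each layer is periodically reset to the identity mapping, so a
    recently regenerated (hence little-distorted) layer always exists;
    the weights (assumed precomputed) should favor such layers.
    """
    h, w = tex.shape[:2]
    looks = [tex[((uv[..., 1] % 1.0) * h).astype(int),
                 ((uv[..., 0] % 1.0) * w).astype(int)]
             for uv in uv_layers]
    wts = np.asarray(weights, float)
    wts = wts / wts.sum()
    return sum(wi * li for wi, li in zip(wts, looks))
```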
Item: Aesthetic Edits For Character Animation (The Eurographics Association, 2003)
Authors: Neff, Michael; Fiume, Eugene
Editors: D. Breen and M. Lin
The utility of an interactive tool can be measured by how pervasively it is embedded into a user's workflow. Tools for artists must additionally provide an appropriate level of control over the expressive aspects of their work while suppressing unwanted intrusions from details that are, for the moment, unnecessary. Our focus is on tools for editing the expressive aspects of character motion. These tools allow animators to work more expediently than by modifying low-level details, and they offer finer control than high-level, directorial approaches. To illustrate this approach, we present three such tools: one for varying timing (succession), and two for varying motion shape (amplitude and extent). Succession editing allows the animator to vary the activation times of the joints in the motion. Amplitude editing allows the animator to vary the joint ranges covered during a motion. Extent editing allows the animator to vary how fully a character occupies space during a movement, using space freely or keeping the movement close to the body. We argue that such editing tools can be fully embedded in the workflow of character animators. We present a general animation system in which these and other edits can be defined programmatically. Working in a general pose or keyframe framework, either kinematic or dynamic motion can be generated. The system is extensible to an arbitrary set of movement edits.

Item: Blowing in the Wind (The Eurographics Association, 2003)
Authors: Wei, Xiaoming; Zhao, Ye; Fan, Zhe; Li, Wei; Yoakum-Stover, Suzanne; Kaufman, Arie
Editors: D. Breen and M. Lin
We present an approach for simulating the natural dynamics that emerge from the coupling of a flow field to lightweight, mildly deformable objects immersed within it. We model the flow field using a Lattice Boltzmann Model (LBM) extended with a subgrid model, and we accelerate the computation on commodity graphics hardware to achieve real-time simulations. We demonstrate our approach on soap bubbles and a feather blown by wind fields, yet the approach is general enough to apply to other lightweight objects. The soap bubbles illustrate Fresnel reflection, reveal the dynamics of the unseen flow field in which they travel, and display spherical harmonics in their undulations. The free feather floats and flutters in response to lift and drag forces. Our single-bubble simulation allows the user to interact directly with the wind field and thereby influence the dynamics in real time.
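For readers unfamiliar with the Lattice Boltzmann Model named in "Blowing in the Wind", the sketch below shows one textbook D2Q9 BGK collide-and-stream step. It is a generic LBM update, not the paper's implementation: the subgrid turbulence model, the coupling to immersed objects, and the GPU acceleration are all omitted, and the relaxation time tau is an arbitrary placeholder.

```python
import numpy as np

# D2Q9 lattice: the nine discrete velocities e_i and weights w_i.
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def lbm_step(f, tau=0.6):
    """One BGK collide-and-stream step on a periodic (H, W) grid.

    f: (9, H, W) particle distribution functions; returns the update.
    """
    rho = f.sum(axis=0)                           # macroscopic density
    u = np.tensordot(E.T, f, axes=1) / rho        # (2, H, W) velocity
    eu = np.tensordot(E, u, axes=([1], [0]))      # e_i . u per direction
    usq = (u * u).sum(axis=0)
    feq = W[:, None, None] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)
    f = f + (feq - f) / tau                       # BGK relaxation
    for i, (ex, ey) in enumerate(E):              # streaming, periodic
        f[i] = np.roll(np.roll(f[i], ex, axis=1), ey, axis=0)
    return f
```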
Item: Constrained Animation of Flocks (The Eurographics Association, 2003)
Authors: Anderson, Matt; McDaniel, Eric; Chenney, Stephen
Editors: D. Breen and M. Lin
Group behaviors are widely used in animation, yet it is difficult to impose hard constraints on them. We describe a new technique for generating constrained group animations that improves on existing approaches in two ways: the agents in our simulations meet exact constraints at specific times, and our simulations retain the global properties present in unconstrained motion. Users can place constraints on agents' positions at any time in the animation, or constrain the entire group to meet center-of-mass or shape constraints. Animations are generated in a two-stage process. The first stage finds an initial set of trajectories that exactly meet the constraints but may violate the behavior rules. The second stage samples new animations that maintain the constraints while improving the motion with respect to the underlying behavioral model. We present a range of animations created with our system.

Item: Construction and Animation of Anatomically Based Human Hand Models (The Eurographics Association, 2003)
Authors: Albrecht, Irene; Haber, Jörg; Seidel, Hans-Peter
Editors: D. Breen and M. Lin
The human hand is a masterpiece of mechanical complexity, able to perform fine motor manipulations and powerful work alike. Designing an animatable human hand model that features the abilities of the archetype created by Nature requires a great deal of anatomical detail to be modeled. In this paper, we present a human hand model with underlying anatomical structure. Animation of the hand model is controlled by muscle contraction values. We employ a physically based hybrid muscle model to convert these contraction values into movement of skin and bones. Pseudo muscles directly control the rotation of bones based on anatomical data and mechanical laws, while geometric muscles deform the skin tissue using a mass-spring system. The resulting animations thus automatically exhibit anatomically and physically correct finger movements and skin deformations. In addition, we present a deformation technique to create individual hand models from photographs. A radial basis warping function is set up from the correspondence of feature points and applied to the complete structure of the reference hand model, making the deformed hand model instantly animatable.

Item: Discrete Shells (The Eurographics Association, 2003)
Authors: Grinspun, Eitan; Hirani, Anil N.; Desbrun, Mathieu; Schröder, Peter
Editors: D. Breen and M. Lin
In this paper we introduce a discrete shell model describing the behavior of thin flexible structures, such as hats, leaves, and aluminum cans, which are characterized by a curved undeformed configuration. Previously such models required complex continuum mechanics formulations and correspondingly complex algorithms. We show that a simple shell model can be derived geometrically for triangle meshes and implemented quickly by modifying a standard cloth simulator. Our technique convincingly simulates a variety of curved objects with materials ranging from paper to metal, as we demonstrate with several examples, including a comparison of a real and a simulated falling hat.
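To make the "Discrete Shells" idea concrete, here is a simple dihedral-angle bending energy over a triangle mesh, of the general kind the paper derives geometrically; nonzero rest angles are what encode the curved undeformed configuration. This is a simplified stand-in: the paper's flexural energy also normalizes each edge term by an area-based height, and a full simulator adds the membrane (stretching) terms inherited from the underlying cloth solver.

```python
import numpy as np

def bending_energy(x, hinges, rest_angles, k=1.0):
    """Dihedral bending energy of a triangle mesh.

    x:           (V, 3) vertex positions
    hinges:      iterable of (i, j, p, q): interior edge (i, j) shared
                 by triangles (i, j, p) and (i, j, q)
    rest_angles: dihedral angles of the undeformed, possibly curved
                 shape; all zeros would reduce this to flat cloth
    """
    E = 0.0
    for (i, j, p, q), a0 in zip(hinges, rest_angles):
        n1 = np.cross(x[j] - x[i], x[p] - x[i])   # normal of face (i, j, p)
        n2 = np.cross(x[q] - x[i], x[j] - x[i])   # normal of face (i, q, j)
        n1 /= np.linalg.norm(n1)
        n2 /= np.linalg.norm(n2)
        angle = np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0))
        # Quadratic penalty on deviation from the rest angle,
        # weighted by the hinge edge length.
        E += k * (angle - a0) ** 2 * np.linalg.norm(x[j] - x[i])
    return E
```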
Item: Dynapack: Space-Time compression of the 3D animations of triangle meshes with fixed connectivity (The Eurographics Association, 2003)
Authors: Ibarria, Lawrence; Rossignac, Jarek
Editors: D. Breen and M. Lin
Dynapack exploits space-time coherence to compress the consecutive frames of 3D animations of triangle meshes of constant connectivity. Instead of compressing each frame independently (space-only compression) or compressing the trajectory of each vertex independently (time-only compression), we predict the position of each vertex v of frame f from three of its neighbors in frame f and from the positions of v and of these neighbors in the previous frame (space-time compression). We introduce two extrapolating space-time predictors: the ELP extension of the Lorenzo predictor, developed originally for compressing regularly sampled 4D data sets, and the Replica predictor. ELP may be computed using only additions and subtractions of points and is a perfect predictor for portions of the animation undergoing pure translations. The Replica predictor is slightly more expensive to compute, but is a perfect predictor for arbitrary combinations of translations, rotations, and uniform scaling. For the typical 3D animations that we have compressed, the corrections between the actual and predicted vertex coordinates may be compressed using entropy coding down to an average ranging between 1.37 and 2.91 bits, for quantizations ranging between 7 and 13 bits. In comparison, space-only compression yields a range of 1.90 to 7.19 bits per coordinate, and time-only compression yields a range of 1.77 to 6.91 bits per coordinate. The implementation of Dynapack compression and decompression is trivial and extremely fast: it performs a sweep through the animation, accessing only two consecutive frames at a time. It is therefore particularly well suited for real-time and out-of-core compression, and for streaming decompression.
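The abstract characterizes the ELP predictor as using only point additions and subtractions and as exact under pure translation. One formula consistent with that description, a space-time extension of the parallelogram rule (the paper's exact stencil may differ), is sketched below; only the residuals it leaves behind are entropy-coded.

```python
def elp_predict(v_prev, a_f, b_f, c_f, a_prev, b_prev, c_prev):
    """Extended-Lorenzo-style space-time prediction of vertex v at frame f.

    a, b, c are the three neighboring vertices used for prediction: the
    _f values are their positions in the current frame, the _prev values
    (including v_prev) are positions in the previous frame.  If the whole
    neighborhood translates by t between frames, the two parallelogram
    terms differ by exactly t and the prediction is exact, matching the
    abstract's claim.  Works on any vector type supporting + and -,
    e.g. NumPy arrays.
    """
    return v_prev + (a_f + b_f - c_f) - (a_prev + b_prev - c_prev)

def residual(v_actual, v_predicted):
    """The corrective vector that the compressor entropy-codes."""
    return v_actual - v_predicted
```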
Item: Estimating Cloth Simulation Parameters from Video (The Eurographics Association, 2003)
Authors: Bhat, Kiran S.; Twigg, Christopher D.; Hodgins, Jessica K.; Khosla, Pradeep K.; Popovic, Zoran; Seitz, Steven M.
Editors: D. Breen and M. Lin
Cloth simulations are notoriously difficult to tune due to the many parameters that must be adjusted to achieve the look of a particular fabric. In this paper, we present an algorithm for estimating the parameters of a cloth simulation from video data of real fabric. A perceptually motivated metric based on matching between folds is used to compare video of real cloth with simulation. This metric compares two video sequences of cloth and returns a number that measures the differences in their folds. Simulated annealing is used to minimize the frame-by-frame error between the metric for a given simulation and the real-world footage. To estimate all the cloth parameters, we identify simple static and dynamic calibration experiments that use small swatches of the fabric. To demonstrate the power of this approach, we use our algorithm to find the parameters for four different fabrics. We show the match between the video footage and simulated motion on the calibration experiments, on new video sequences for the swatches, and on a simulation of a full skirt.

Item: An Evaluation of a Cost Metric for Selecting Transitions between Motion Segments (The Eurographics Association, 2003)
Authors: Wang, Jing; Bodenheimer, Bobby
Editors: D. Breen and M. Lin
Designing a rich repertoire of behaviors for virtual humans is an important problem for virtual environments and computer games. One approach to designing such a repertoire is to collect motion capture data and pre-process it into a structure that can be traversed in various orders to re-sequence the data in new ways. In such an approach, identifying the location of good transition points in the motion stream is critical. In this paper, we evaluate the cost function described by Lee et al. [15] for determining such transition points. Lee et al. proposed an original set of weights for their metric. We compute a set of optimal weights for the cost function using a constrained least-squares technique. The weights are then evaluated in two ways: first, through a cross-validation study, and second, through a medium-scale user study. The cross-validation shows that the optimized weights are robust and work for a wide variety of behaviors. The user study demonstrates that the optimized weights select more appealing transition points than the original weights.

Item: An Example-Based Approach for Facial Expression Cloning (The Eurographics Association, 2003)
Authors: Pyun, Hyewon; Kim, Yejin; Chae, Wonseok; Kang, Hyung Woo; Shin, Sung Yong
Editors: D. Breen and M. Lin
In this paper, we present a novel example-based approach for cloning the facial expressions of a source model onto a target model while reflecting the characteristic features of the target model in the resulting animation. Our approach comprises three major parts: key-model construction, parameterization, and expression blending. We first present an effective scheme for constructing key-models. Given a set of source example key-models and their corresponding target key-models created by animators, we parameterize the target key-models using the source key-models and predefine the weight functions for the parameterized target key-models based on radial basis functions. At runtime, given an input model with some facial expression, we compute the parameter vector of the corresponding output model, evaluate the weight values for the target key-models, and obtain the output model by blending the target key-models with those weights. The resulting animation preserves the facial expressions of the input model as well as the characteristic features of the target model specified by animators. Our method is not only simple and accurate but also fast enough for real-time applications such as video games or internet broadcasting.
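The runtime stage of the expression-cloning pipeline above reduces to evaluating weight functions and blending target key-models. The sketch below shows that stage under simplifying assumptions: normalized Gaussian kernels stand in for the paper's predefined weight functions, and the parameterization that maps an input expression to the vector p is taken as given.

```python
import numpy as np

def rbf_weights(p, key_params, sigma=1.0):
    """Normalized Gaussian weights of parameter vector p with respect
    to the parameter vectors of the K key-models (key_params: (K, D))."""
    d2 = ((key_params - p) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return w / w.sum()

def clone_expression(p, key_params, target_keys):
    """Blend the target key-models with the weights evaluated at p.

    target_keys: (K, V, 3) vertex positions of the target key-models;
    returns the (V, 3) output model, sum_k w_k(p) * target_k.
    """
    w = rbf_weights(p, key_params)
    return np.tensordot(w, target_keys, axes=1)
```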
Item: Feel the 'Fabric': An Audio-Haptic Interface (The Eurographics Association, 2003)
Authors: Huang, G.; Metaxas, D.; Govindaraj, M.
Editors: D. Breen and M. Lin
An objective fabric modeling system should convey not only visual but also haptic and auditory feedback to remote/internet users via an audio-haptic interface. In this paper we develop a fabric surface-property modeling system consisting of stylus-based modeling of a fabric's characteristic sound together with an audio-haptic interface. Using a stylus, people can perceive a fabric's surface roughness, friction, and softness, though not as precisely as with their bare fingers. The audio-haptic interface is intended to simulate "feeling a virtually fixed fabric via a rigid stylus" using the PHANToM haptic interface. We develop a DFFT-based correlation-restoration method to model the surface roughness and friction coefficient of a fabric, and a physically based method to model the sound of a fabric when rubbed by a stylus. The audio-haptic interface, which renders synchronized auditory and haptic stimuli when the virtual stylus rubs the surface of a virtual fabric, is developed in VC++ 6.0 using OpenGL and the PHANToM GHOST SDK. Subjects who tested our audio-haptic interface were able to differentiate the surface properties of virtual fabrics in the correct order. We show that the virtual fabric is a good model of its real counterpart.

Item: Finite Volume Methods for the Simulation of Skeletal Muscle (The Eurographics Association, 2003)
Authors: Teran, J.; Blemker, S.; Ng-Thow-Hing, V.; Fedkiw, R.
Editors: D. Breen and M. Lin
Since it relies on a geometrical rather than a variational framework, many find the finite volume method (FVM) more intuitive than the finite element method (FEM). We show that the FVM allows one to interpret the stress inside a tetrahedron as a simple "multidimensional force" pushing on each face. Moreover, this interpretation leads to a heuristic method for calculating the force on each node that is as simple to implement and comprehend as masses and springs. In the finite volume spirit, we also present a geometric, rather than interpolating-function, definition of strain. We use the FVM and a quasi-incompressible, transversely isotropic, hyperelastic constitutive model to simulate contracting muscle tissue. B-spline solids are used to model fiber directions, and the muscle activation levels are derived from keyframe animations.

Item: Flexible Automatic Motion Blending with Registration Curves (The Eurographics Association, 2003)
Authors: Kovar, Lucas; Gleicher, Michael
Editors: D. Breen and M. Lin
Many motion editing algorithms, including transitioning and multi-target interpolation, can be represented as instances of a more general operation called motion blending. We introduce a novel data structure called a registration curve that expands the class of motions that can be successfully blended without manual input. Registration curves achieve this by automatically determining relationships involving the timing, local coordinate frame, and constraints of the input motions. We show how registration curves improve upon existing automatic blending methods and demonstrate their use in common blending operations.

Item: FootSee: an Interactive Animation System (The Eurographics Association, 2003)
Authors: Yin, KangKang; Pai, Dinesh K.
Editors: D. Breen and M. Lin
We present an intuitive animation interface that uses a foot pressure sensor pad to interactively control avatars for video games, virtual reality, and low-cost performance-driven animation. During an offline training phase, we capture full-body motions with a motion capture system, together with the corresponding foot-ground pressure distributions from a pressure sensor pad, into a database. At run time, the user acts out the desired animation on the pressure sensor pad. The system then tries to "see" the motion through the measured foot-ground interactions alone, selects the most appropriate motions from the database, and edits them online to drive the avatar. We describe our motion recognition, motion blending, and inverse kinematics algorithms in detail. They are easy to implement and cheap to compute. FootSee can control a virtual avatar with a fixed latency of one second and reasonable accuracy. Our system thus makes it possible to create interactive animations without the cost or inconvenience of a full-body motion capture system.
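As a toy version of the FootSee lookup described above, the sketch below matches a live pressure-pad frame against the training database and returns the associated full-body pose. The array shapes and the single-frame nearest-neighbor rule are illustrative assumptions; the actual system recognizes motions over time and then blends and edits them with inverse kinematics.

```python
import numpy as np

def match_pose(pressure, db_pressure, db_poses):
    """Return the database pose whose recorded pressure frame is
    closest (in Euclidean distance) to the live sensor frame.

    pressure:    (T,) flattened live pressure-pad frame
    db_pressure: (N, T) pressure frames captured during training
    db_poses:    (N, J) full-body poses captured simultaneously
    """
    d = ((db_pressure - pressure) ** 2).sum(axis=1)
    return db_poses[int(np.argmin(d))]
```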
Item: Generating Flying Creatures using Body-Brain Co-Evolution (The Eurographics Association, 2003)
Authors: Shim, Yoon-Sik; Kim, Chang-Hun
Editors: D. Breen and M. Lin
This paper describes a system that produces double-winged flying creatures through body-brain co-evolution, without the need for complex flapping-flight aerodynamics. While artificial life techniques have been used to create a variety of virtual creatures, little work has explored flapping-winged creatures, owing to the difficulty of genetically encoding wings from limited geometric primitives as well as of modeling flapping-wing aerodynamics. Despite the system's simplicity, our results show aesthetically pleasing, organic flapping-flight locomotion. A restricted list structure is used for the genotype encoding to enforce the morphological symmetry of creatures, and it is more easily handled than other data structures. The creatures evolved by this system have two symmetric flapping wings consisting of continuous triangular patches and exhibit varied appearances and locomotion, resembling the wings of birds, butterflies, and bats, or even the imaginary wings of dragons and pterosaurs.

Item: Geometry Videos: A New Representation for 3D Animations (The Eurographics Association, 2003)
Authors: Briceño, Hector M.; Sander, Pedro V.; McMillan, Leonard; Gortler, Steven; Hoppe, Hugues
Editors: D. Breen and M. Lin
We present the "Geometry Video", a new data structure to encode animated meshes. Encoding animated meshes in a generic, source-independent format allows people to share experiences; changing the viewpoint allows more interaction than the fixed view supported by 2D video. Geometry videos are based on the "Geometry Image" mesh representation introduced by Gu et al. [4]. Our novel data structure provides a way to treat an animated mesh as a video sequence (i.e., a 3D image) and is well suited for network streaming. This representation also offers the possibility of applying and adapting existing mature video processing and compression techniques (such as MPEG encoding) to animated meshes. This paper describes an algorithm to generate geometry videos from animated meshes. The main insight of this paper is that geometry videos re-sample and re-organize the geometry information in such a way that it becomes very compressible. They provide a unified and intuitive method for level-of-detail control, both in mesh resolution (by scaling the two spatial dimensions) and in frame rate (by scaling the temporal dimension). Geometry videos have a very uniform and regular structure; their resource and computational requirements can be calculated exactly, making them also suitable for applications requiring level-of-service guarantees.

Item: Geometry-Driven Photorealistic Facial Expression Synthesis (The Eurographics Association, 2003)
Authors: Zhang, Qingshan; Liu, Zicheng; Guo, Baining; Shum, Harry
Editors: D. Breen and M. Lin
Expression mapping (also called performance-driven animation) has been a popular method for generating facial animations. One shortcoming of this method is that it does not generate expression details such as the wrinkles due to skin deformation. In this paper, we provide a solution to this problem. We have developed a geometry-driven facial expression synthesis system. Given the feature point positions (the geometry) of a facial expression, our system automatically synthesizes the corresponding expression image with photorealistic and natural-looking expression details. Since the number of feature points required by the synthesis system is in general larger than what is available from the performer, due to the difficulty of tracking, we have developed a technique to infer the feature point motions from a subset using an example-based approach. Another application of our system is expression editing, where the user drags feature points while the system interactively generates facial expressions with skin deformation details.

Item: Handrix: Animating the Human Hand (The Eurographics Association, 2003)
Authors: Koura, George El; Singh, Karan
Editors: D. Breen and M. Lin
The human hand is a complex organ capable of both gross grasp and fine motor skills. Despite many successful high-level skeletal control techniques, animating realistic hand motion remains tedious and challenging. This paper presents research motivated by the complex finger positioning required to play musical instruments such as the guitar. We first describe a data-driven algorithm to add sympathetic finger motion to arbitrarily animated hands. We then present a procedural algorithm to generate the motion of the fretting hand playing a given musical passage on a guitar. The work is intended as a tool for music education and analysis. The contributions of this paper are a general architecture for the skeletal control of interdependent articulations performing multiple concurrent reaching tasks, and a procedural tool for musicians and animators that captures the motion complexity of guitar fingering.
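To illustrate the "Geometry Videos" entry above: once each frame of an animated mesh has been re-sampled onto a regular geometry-image grid, packing it as an ordinary image is straightforward, as in the hypothetical sketch below (x, y, z map to R, G, B). The re-sampling and re-organization that make the data compressible, and the video-style coding itself, are the paper's contribution and are not shown.

```python
import numpy as np

def pack_geometry_frame(positions, bbox_min, bbox_max):
    """Quantize one re-sampled animation frame into an 8-bit RGB image.

    positions: (H, W, 3) vertex positions sampled on the regular
               geometry-image grid for this frame
    bbox_min, bbox_max: corners of a bounding box enclosing the whole
               animation, so quantization is consistent across frames
    """
    lo = np.asarray(bbox_min, float)
    hi = np.asarray(bbox_max, float)
    norm = (positions - lo) / (hi - lo)           # map x, y, z into [0, 1]
    return np.clip(norm * 255.0, 0.0, 255.0).astype(np.uint8)

# A whole animation then becomes a stack of frames, i.e. a 3D image
# suitable for video-style compression and streaming:
# video = np.stack([pack_geometry_frame(p, lo, hi) for p in frames])
```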