VVG03
Browsing VVG03 by Subject "Animation"
Now showing 1 - 2 of 2
Item
Cartoon-Style Rendering of Motion from Video (The Eurographics Association, 2003) Collomosse, J.P.; Hall, P.M.; Peter Hall and Philip Willis
The contribution of this paper is a novel non-photorealistic rendering (NPR) system capable of rendering motion within a video sequence in artistic styles. A variety of cartoon-style motion cues may be inserted into a video sequence, including augmentation cues (such as streak lines, ghosting, or blurring) and deformation cues (such as squash and stretch or drag effects). Users may select from the gamut of available styles by setting parameters which influence the placement and appearance of motion cues. Our system draws upon techniques from both the vision and the graphics communities to analyse and render motion and is entirely automatic, aside from minimal user interaction to bootstrap a feature tracker. We demonstrate successful application of our system to a variety of subjects with complexities ranging from simple oscillatory to articulated motion, under both static and moving camera conditions with occlusion present. We conclude with a critical appraisal of the system and discuss directions for future work.

Item
Use and Re-use of Facial Motion Capture Data (The Eurographics Association, 2003) Lorenzo, M.S.; Edge, J.D.; King, S.A.; Maddock, S.; Peter Hall and Philip Willis
Motion capture (mocap) data is commonly used to recreate complex human motions in computer graphics. Markers are placed on an actor, and the captured movement of these markers allows us to animate computer-generated characters. Technologies have been introduced which allow this technique to be used not only to retrieve rigid body transformations, but also soft body motion such as the facial movement of an actor. The inherent difficulties of working with facial mocap lie in the application of a discrete sampling of surface points to animate a fine discontinuous mesh. Furthermore, in the general case, where the morphology of the actor's face does not coincide with that of the model we wish to animate, some form of retargeting must be applied. In this paper we discuss methods to animate face meshes from mocap data with minimal user intervention using a surface-oriented deformation paradigm.
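The augmentation cues described in the first item above (streak lines, ghosting, blurring) are, in essence, overlays placed along a tracked feature's path and faded with age. The following is a minimal illustrative sketch rather than the authors' system: it draws fading streak lines behind a single tracked 2D feature using Pillow, and the trajectory data, function name and parameters are all hypothetical.

# Minimal sketch (not the paper's implementation) of one augmentation cue:
# fading "streak lines" drawn along the recent trajectory of a tracked feature.
from PIL import Image, ImageDraw

def draw_streak_lines(frame, trajectory, length=8, colour=(255, 255, 255)):
    """Overlay streak lines along the last `length` tracked positions.

    frame:      PIL RGBA image for the current video frame.
    trajectory: list of (x, y) feature positions, oldest first.
    Opacity falls off with age so the streak trails away from the feature.
    """
    overlay = Image.new("RGBA", frame.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    recent = trajectory[-length:]
    for i in range(len(recent) - 1):
        # Older segments are more transparent than newer ones.
        alpha = int(255 * (i + 1) / len(recent))
        draw.line([recent[i], recent[i + 1]], fill=colour + (alpha,), width=2)
    return Image.alpha_composite(frame, overlay)

# Hypothetical usage: a synthetic frame and a simple oscillatory trajectory.
frame = Image.new("RGBA", (320, 240), (30, 30, 30, 255))
trajectory = [(40 + 10 * t, 120 + (t % 2) * 4) for t in range(20)]
result = draw_streak_lines(frame, trajectory)

In a full system the trajectory would come from the feature tracker the abstract mentions, and ghosting or blurring cues would composite faded copies of the subject rather than polylines.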
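The second item's central problem is propagating a sparse set of marker displacements to every vertex of a face mesh. As a rough sketch of that general idea, and not the paper's surface-oriented method, the snippet below interpolates marker displacements with Gaussian radial basis functions; it covers only the direct case where markers and mesh share the same space, so the retargeting step the abstract mentions is omitted. All names and the test data are illustrative.

# Minimal sketch (not the paper's method): spreading sparse mocap marker
# displacements to all mesh vertices with Gaussian radial basis functions.
import numpy as np

def deform_mesh(vertices, markers, displacements, sigma=0.05):
    """Deform a face mesh from mocap marker motion.

    vertices:      (V, 3) rest-pose mesh vertex positions.
    markers:       (M, 3) marker positions in the same space as the mesh.
    displacements: (M, 3) per-frame marker displacements from the rest pose.
    Solves for RBF weights so the deformation interpolates the markers
    exactly, then evaluates the deformation at every mesh vertex.
    """
    def kernel(a, b):
        # Gaussian kernel matrix between two point sets.
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    weights = np.linalg.solve(kernel(markers, markers), displacements)  # (M, 3)
    return vertices + kernel(vertices, markers) @ weights

# Hypothetical usage with random data standing in for a real capture session.
rng = np.random.default_rng(0)
vertices = rng.uniform(-0.1, 0.1, size=(500, 3))
markers = rng.uniform(-0.1, 0.1, size=(30, 3))
displacements = 0.01 * rng.standard_normal((30, 3))
deformed = deform_mesh(vertices, markers, displacements)

An exact-interpolation RBF solve keeps the deformation consistent with the markers at the sample points; smoother or regularised fits, or a surface-aware scheme like the one the abstract describes, are equally plausible choices.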