Volume 22 (2003)
Browsing Volume 22 (2003) by Issue Date
Item Computer Graphics forum (Blackwell Publishing, Inc and Eurographics Association, 2003) Duke, David; Scopigno, Roberto

Item Visyllable Based Speech Animation (Blackwell Publishers, Inc and the Eurographics Association, 2003) Kshirsagar, Sumedha; Magnenat-Thalmann, Nadia
Visemes are the visual counterparts of phonemes. Traditionally, the speech animation of 3D synthetic faces involves extraction of visemes from input speech followed by the application of co-articulation rules to generate realistic animation. In this paper, we take a novel approach to speech animation - using visyllables, the visual counterpart of syllables. The approach results in a concatenative visyllable-based speech animation system. The key contribution of this paper lies in two main areas. Firstly, we define a set of visyllable units for spoken English along with the associated phonological rules for valid syllables. Based on these rules, we have implemented a syllabification algorithm that allows segmentation of a given phoneme stream into syllables and subsequently visyllables. Secondly, we have recorded the database of visyllables using a facial motion capture system. The recorded visyllable units are post-processed semi-automatically to ensure continuity at the vowel boundaries of the visyllables. We define each visyllable in terms of the Facial Movement Parameters (FMP). The FMPs are obtained as a result of the statistical analysis of the facial motion capture data. The FMPs allow a compact representation of the visyllables. Further, the FMPs also facilitate the formulation of rules for boundary matching and smoothing after concatenating the visyllable units. Ours is the first visyllable-based speech animation system.
The proposed technique is easy to implement, effective for real-time as well as non-real-time applications, and results in realistic speech animation.
Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism

Item Efficient Modeling of An Anatomy-Based Face and Fast 3D Facial Expression Synthesis (Blackwell Publishers, Inc and the Eurographics Association, 2003) Zhang, Yu; Prakash, Edmond C.; Sung, Eric
This paper presents new methods for efficient modeling and animation of a hierarchical facial model that conforms to the human face anatomy, for realistic and fast 3D facial expression synthesis. The facial model has a skin-muscle-skull structure. The deformable skin model directly simulates the nonlinear visco-elastic behavior of soft tissue and effectively prevents model collapse. The construction of facial muscles is achieved by using an efficient muscle mapping approach. Based on a cylindrical projection of the texture-mapped facial surface and wire-frame skin and skull meshes, this approach ensures that the different muscles are located at the anatomically correct positions between the skin and skull layers. For computational efficiency, we devise an adaptive simulation algorithm which uses either a semi-implicit integration scheme or a quasi-static solver to compute the relaxation by traversing the designed data structures in a breadth-first order.
The algorithm runs in real time and has successfully synthesized realistic facial expressions.
ACM CCS: I.3.5 Computer Graphics: Computational Geometry and Object Modelling - physically based modelling; I.3.7 Computer Graphics: Three-Dimensional Graphics and Realism - animation

Item Join Now! (Blackwell Publishers, Inc and the Eurographics Association, 2003)

Item Fast Photo-Realistic Rendering of Trees in Daylight (Blackwell Publishers, Inc and the Eurographics Association, 2003) Qin, Xueying; Nakamae, Eihachiro; Tadamura, Katsumi; Nagai, Yasuo
We propose a fast approach for photo-realistic rendering of trees under various kinds of daylight, which is particularly useful for the environmental assessment of landscapes. In our approach the 3D tree models are transformed to a quasi-3D tree database registering geometrical and shading information of tree surfaces, i.e. their normal vectors, relative depth, and shadowing of direct sunlight and skylight, by using a combination of 2D buffers. Thus the rendering speed of quasi-3D trees depends on their display sizes only, regardless of the complexity of their original 3D tree models. By utilizing a two-step shadowing algorithm, our proposed method can create high-quality forest scenes illuminated by both sunlight and skylight at a low cost. It can generate both umbrae and penumbrae cast on a tree by other trees and any other objects such as buildings or clouds. Transparency, specular reflection and inter-reflection of leaves, which influence the delicate shading effects of trees, can also be simulated with verisimilitude.
Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism

Item Animating Hair with Loosely Connected Particles (Blackwell Publishers, Inc and the Eurographics Association, 2003) Bando, Yosuke; Chen, Bing-Yu; Nishita, Tomoyuki
This paper presents a practical approach to the animation of hair at an interactive frame rate.
In our approach, we model the hair as a set of particles that serve as sampling points for the volume of the hair, which covers the whole region where hair is present. The dynamics of the hair, including hair-hair interactions, is simulated using the interacting particles. The novelty of this approach is that, as opposed to the traditional way of modeling hair, we release the particles from the tight structures that are usually used to represent hair strands or clusters. Therefore, by making the connections between the particles loose while maintaining their overall stiffness, the hair can be dynamically split and merged during lateral motion without losing its lengthwise coherence.
Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism, I.3.3 [Computer Graphics]: Picture/Image Generation

Item Soft Object Modelling with Generalised ChainMail - Extending the Boundaries of Web-based Graphics (Blackwell Publishing, Inc and Eurographics Association, 2003) Li, Ying; Brodlie, Ken
Soft object modelling is crucial in providing realistic simulation of many surgical procedures. High accuracy is achievable using the Finite Element Method (FEM), but significant computational power is required. We are interested in providing Web-based surgical training simulation where such computational power is not available, but where, in return, lower accuracy is often sufficient. A useful alternative to FEM is the 3D ChainMail algorithm, which models elements linked in a regular, rectangular mesh, mimicking the behaviour of chainmail armour. An important aspect is the ability to make topology changes, for example by cutting - an aspect that FEM finds difficult. Our contribution is to extend the 3D ChainMail technique to arbitrary grids in 2D and 3D. This extends the range of applications that can be addressed by the ChainMail approach to include surfaces and volumes defined on triangular and tetrahedral meshes.
We have successfully deployed the algorithm in a Web-based environment, using VRML and Java linked through the External Authoring Interface.
ACM CCS: I.3.5 Computer Graphics: Computational Geometry and Object Modelling, I.3.2 Computer Graphics: Graphics Systems, J.3 Life and Medical Sciences

Item Real-Time Consensus-Based Scene Reconstruction Using Commodity Graphics Hardware (Blackwell Science Ltd and the Eurographics Association, 2003) Yang, Ruigang; Welch, Greg; Bishop, Gary
We present a novel use of commodity graphics hardware that effectively combines a plane-sweeping algorithm with view synthesis for real-time, online 3D scene acquisition and view synthesis. Using real-time imagery from a few calibrated cameras, our method can generate new images from nearby viewpoints, estimate a dense depth map from the current viewpoint, or create a textured triangular mesh. We can do each of these without any prior geometric information or requiring any user interaction, in real time and online. The heart of our method is to use programmable Pixel Shader technology to square intensity differences between reference image pixels, and then to choose final colors (or depths) that correspond to the minimum difference, i.e. the most consistent color.
In this paper we describe the method, place it in the context of related work in computer graphics and computer vision, and present some results.
ACM CCS: I.3.3 Computer Graphics - Bitmap and framebuffer operations; I.4.8 Image Processing and Computer Vision - Depth cues, Stereo

Item Announcement (Blackwell Publishers, Inc and the Eurographics Association, 2003)

Item On Visual Similarity Based 3D Model Retrieval (Blackwell Publishers, Inc and the Eurographics Association, 2003) Chen, Ding-Yun; Tian, Xiao-Pei; Shen, Yu-Te; Ouhyoung, Ming

Item Granada, 4 September 2003 (Blackwell Publishing, Inc and Eurographics Association, 2003)

Item Interactive Rendering of Translucent Objects (Blackwell Publishers, Inc and the Eurographics Association, 2003) Lensch, Hendrik P.A.; Goesele, Michael; Bekaert, Philippe; Kautz, Jan; Magnor, Marcus A.; Lang, Jochen; Seidel, Hans-Peter
This paper presents a rendering method for translucent objects, in which viewpoint and illumination can be modified at interactive rates. In a preprocessing step, the impulse response to incoming light impinging at each surface point is computed and stored in two different ways: the local effect on close-by surface points is modeled as a per-texel filter kernel that is applied to a texture map representing the incident illumination; the global response (i.e. light shining through the object) is stored as vertex-to-vertex throughput factors for the triangle mesh of the object. During rendering, the illumination map for the object is computed according to the current lighting situation and then filtered by the precomputed kernels. The illumination map is also used to derive the incident illumination on the vertices, which is distributed via the vertex-to-vertex throughput factors to the other vertices. The final image is obtained by combining the local and global responses.
We demonstrate the performance of our method for several models.
ACM CCS: I.3.7 Computer Graphics - Three-Dimensional Graphics and Realism: Color, Radiosity

Item The Perspective Silhouette of a Canal Surface (Blackwell Publishers, Inc and the Eurographics Association, 2003) Kim, Ku-Jin; Lee, In-Kwon
We present an efficient and robust algorithm for parameterizing the perspective silhouette of a canal surface and detecting each connected component of the silhouette. A canal surface is the envelope of a moving sphere with varying radius, defined by the trajectory C(t) of its center and a radius function r(t). This moving sphere, S(t), touches the canal surface at a characteristic circle K(t). We decompose the canal surface into a set of characteristic circles, compute the silhouette points on each characteristic circle, and then parameterize the silhouette curve. The perspective silhouette of the sphere S(t) from a given viewpoint consists of a circle Q(t); by identifying the values of t at which K(t) and Q(t) touch, we can find all the connected components of the silhouette curve of the canal surface.
ACM CCS: I.3.7 Computer Graphics - Three-Dimensional Graphics and Realism

Item The State of the Art in Flow Visualisation: Feature Extraction and Tracking (Blackwell Publishing, Inc and Eurographics Association, 2003) Post, Frits H.; Vrolijk, Benjamin; Hauser, Helwig; Laramee, Robert S.; Doleisch, Helmut
Flow visualisation is an attractive topic in data visualisation, offering great challenges for research. Very large data sets must be processed, consisting of multivariate data at large numbers of grid points, often arranged in many time steps. Recently, the steadily increasing performance of computers has again become a driving force for new advances in flow visualisation, especially in techniques based on texturing, feature extraction, vector field clustering, and topology extraction. In this article we present the state of the art in feature-based flow visualisation techniques.
We will present numerous feature extraction techniques, categorised according to the type of feature. Next, feature tracking and event detection algorithms are discussed for studying the evolution of features in time-dependent data sets. Finally, various visualisation techniques are demonstrated.
ACM CCS: I.3.8 Computer Graphics - Applications

Item Rendering and Affect (Blackwell Publishers, Inc and the Eurographics Association, 2003) Duke, D.J.; Barnard, P.J.; Halper, N.; Mellin, M.
Previous studies at the intersection between rendering and psychology have concentrated on issues such as realism and acuity. Although such results have been useful in informing the development of realistic rendering techniques, studies have shown that the interpretation of images is influenced by factors that have little to do with realism. In this paper, we summarize a series of experiments, the most recent of which are reported in a separate paper, that investigate the affective (emotive) qualities of images. These demonstrate significant effects that can be utilized within interactive graphics, particularly via non-photorealistic rendering (NPR). We explain how the interpretation of these results requires a high-level model of cognitive information processing, and use such a model to account for recent empirical results on rendering and judgement.
Categories and Subject Descriptors (according to ACM CCS): I.3.m [Computer Graphics]: Miscellaneous

Item Auditor's Report (Blackwell Publishing, Inc and Eurographics Association, 2003)

Item Recent Developments and Applications of Haptic Devices (Blackwell Publishers, Inc and the Eurographics Association, 2003) Laycock, S. D.; Day, A. M.
Over recent years a variety of haptic feedback devices have been developed and are being used in a number of important applications. They range from joysticks used in the entertainment industry to specialised devices used in medical applications.
This paper will describe the recent developments of these devices and show how they have been applied. It also examines how haptic feedback has been combined with visual display devices, such as virtual reality walls and workbenches, in order to improve the immersive experience.
ACM CCS: H.5.2 Information Interfaces and Presentation - Haptic I/O; I.3.8 Computer Graphics - Applications; I.6 Simulation and Modelling - Applications

Item Interactive Rendering with Bidirectional Texture Functions (Blackwell Publishers, Inc and the Eurographics Association, 2003) Suykens, Frank; Berge, Karl; Lagae, Ares; Dutre, Philip
We propose a new technique for efficiently rendering bidirectional texture functions (BTFs). A 6D BTF describes the appearance of a material as a texture that depends on the lighting and viewing directions. As such, a BTF accommodates the self-shadowing, interreflection, and masking effects of a complex material without needing an explicit representation of the small-scale geometry. Our method represents the BTF as a set of spatially varying apparent BRDFs that each encode the reflectance field of a single pixel in the BTF. Each apparent BRDF is decomposed into a product of three or more two-dimensional positive factors using a novel factorization technique, which we call chained matrix factorization (CMF). The proposed factorization technique is fully automatic and suitable for both BRDFs and apparent BRDFs (which typically exhibit off-specular peaks and non-reciprocity). The main benefit of CMF is that it delivers factors well suited to the limited dynamic range of conventional texture maps. After factorization, an efficient representation of the BTF is obtained by clustering the factors into a compact set of 2D textures.
With this compact representation, BTFs can be rendered on recent consumer-level hardware with arbitrary viewing and lighting directions at interactive rates.
Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism

Item Fourth Eurographics Workshop on Parallel Graphics and Visualisation (Blackwell Publishers, Inc and the Eurographics Association, 2003) Reinhard, Erik