Volume 25 (2006)
Browsing Volume 25 (2006) by Issue Date
Now showing 1 - 20 of 100
Item: GEncode: Geometry-driven compression for General Meshes (The Eurographics Association and Blackwell Publishing Ltd, 2006)
Lewiner, Thomas; Craizer, Marcos; Lopes, Helio; Pesco, Sinesio; Velho, Luiz; Medeiros, Esdras
The performance of current mesh compression algorithms varies significantly with the type of model encoded. To be efficient, these methods rely on prior assumptions about the mesh, such as regular connectivity, simple topology and similarity between its elements. However, these priors are implicit in the usual schemes, which harms their suitability for specific models. In particular, connectivity-driven schemes are difficult to generalize to higher dimensions and to handle topological singularities. GEncode is a new single-rate, geometry-driven compression scheme in which prior knowledge of the mesh is plugged into the coder in an explicit manner. It encodes meshes of arbitrary dimension without topological restrictions, but can incorporate topological properties, such as manifoldness, to improve the compression ratio. Prior knowledge of the geometry is taken as an input of the algorithm, represented by a function of the local geometry. This is particularly well suited to scanned and remeshed models, where exact geometric priors are available. Compression results for surfaces and volumes are competitive with existing schemes.

Item: Texture Adaptation for Progressive Meshes (The Eurographics Association and Blackwell Publishing, Inc, 2006)
Chen, Chih-Chun; Chuang, Jung-Hong
Level-of-detail modeling is a vital representation for real-time applications. To support texture mapping for progressive meshes (PM), we usually allow the whole PM sequence to share a common texture map. Although such a common texture map can be derived by using appropriate mesh parameterizations that minimize geometry stretch, texture stretch, or even the texture deviation introduced by edge collapses, we have found that even with a well-parameterized texture map, the texture-mapped PM still exhibits apparent texture distortion due to geometry changes and the nature of the linear interpolation used by texture mapping hardware. In this paper, we propose a novel, simple, and efficient approach that adapts the texture content for each edge collapse, aiming to eliminate texture distortion. A texture adaptation and its inverse are local and incremental operations that can be fully supported by texture mapping hardware, the render-to-texture feature, and the fragment shader. Once the necessary correspondence in the partition of texture space is built during the course of PM construction, the texture adaptation or its inverse can be applied on the fly before rendering the simplified or refined model with the texture map. We also propose an index-mapping mechanism to reduce blurring artifacts due to under-sampling that might be introduced by texture adaptation.
Keywords: texture mapping progressive meshes, mesh simplification, mesh parameterization, texture distortion
Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism: Color, shading, shadowing, and texture

Item: Speed-up Technique for a Local Automatic Colour Equalization Model (The Eurographics Association and Blackwell Publishing Ltd., 2006)
Artusi, A.; Gatta, C.; Marini, D.; Purgathofer, W.; Rizzi, A.
In this paper we propose a speed-up technique for a local automatic colour equalization operator derived from a model of the human vision system.
This method is characterized by local and global filtering effects that simultaneously achieve different equalization tasks, e.g. performing colour and lightness constancy, realizing dynamic image-data-driven stretching, and controlling the contrast. We describe a way to quickly create a filtering mapping function to perform the global component of the mapping. This method is based on singular value decomposition (SVD) applied to sampled and filtered points in the input image. Then, the local information is added by computing the basic algorithm on a neighbourhood of each input pixel. A slight quality loss is the price we have to pay for a speed-up of more than two orders of magnitude over the basic algorithm. We present results on several images and discuss the efficiency and the drawbacks of the speed-up technique.

Item: Direct (Re)Meshing for Efficient Surface Processing (The Eurographics Association and Blackwell Publishing, Inc, 2006)
Schreiner, John; Scheidegger, Carlos E.; Fleishman, Shachar; Silva, Claudio T.
We propose a novel surface remeshing algorithm. While many remeshing algorithms are based on global parametrization or local mesh optimization, our algorithm is closely related to surface reconstruction techniques and requires no explicit parameterization. Our approach is based on the advancing-front paradigm, and it can be used either to incrementally remesh the complete surface or simply to remesh a portion of it with a high-quality mesh. It is accurate, fast, robust, and suitable for interactive mesh processing applications that require local remeshing. We show a number of applications, including matching the resolution of meshes when performing Boolean operations such as unions and intersections. We also show how to adapt the algorithm to blend and merge mixed-mode objects, for example, to compute the union of a point-set surface and a triangle mesh.

Item: Editorial (The Eurographics Association and Blackwell Publishing Ltd, 2006)
Duke, David; Scopigno, Roberto

Item: Multiresolution Random Accessible Mesh Compression (The Eurographics Association and Blackwell Publishing, Inc, 2006)
Kim, Junho; Choe, Sungyul; Lee, Seungyong
This paper presents a novel approach for mesh compression, which we call multiresolution random accessible mesh compression. In contrast to previous mesh compression techniques, our approach enables us to progressively decompress an arbitrary portion of a mesh without decoding other, non-interesting parts. This simultaneous support of random accessibility and progressiveness is accomplished by adapting selective refinement of a multiresolution mesh to the mesh compression domain. We present a theoretical analysis of our connectivity coding scheme and provide several experimental results. The performance of our coder is about 11 bits for connectivity and 21 bits for geometry with 12-bit quantization, which can be considered reasonably good under the constraint that no fixed neighborhood information can be used for coding to support decompression in a random order.
Categories and Subject Descriptors (according to ACM CCS): I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling

Item: Wrinkling Coarse Meshes on the GPU (The Eurographics Association and Blackwell Publishing, Inc, 2006)
Loviscach, J.
The simulation of complex layers of folds of cloth can be handled by algorithms that take the physical dynamics into account. In many cases, however, it is sufficient to generate wrinkles on a piece of garment that mostly appears spread out.
This paper presents a corresponding fully GPU-based, easy-to-control, and robust method to generate and render plausible, detailed folds. The simulation is driven by an animated mesh. A relaxation step ensures that the behavior remains globally consistent. The resulting wrinkle field controls the lighting and distorts the texture in a way that closely simulates an actually deformed surface. No highly tessellated mesh is required to compute the position of the folds or to render them. Furthermore, the solution provides a 3D paint interface through which the user may bias the computation so that folds already appear in the rest pose.
Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Animation; I.3.7 [Computer Graphics]: Color, shading, shadowing, and texture

Item: Enhancing the Interactive Visualization of Procedurally Encoded Multifield Data with Ellipsoidal Basis Functions (The Eurographics Association and Blackwell Publishing, Inc, 2006)
Jang, Yun; Botchen, Ralf P.; Lauser, Andreas; Ebert, David S.; Gaither, Kelly P.; Ertl, Thomas
Functional approximation of scattered data is a popular technique for compactly representing various types of datasets in computer graphics, including surface, volume, and vector datasets. Typically, sums of Gaussians or similar radial basis functions are used in the functional approximation, and PC graphics hardware is used to quickly evaluate and render these datasets. Previously, researchers presented techniques for spatially limited spherical Gaussian radial basis function encoding and visualization of volumetric scalar, vector, and multifield datasets. While truncated radially symmetric basis functions are quick to evaluate and simple to optimize during encoding, they are not the most appropriate choice for data that is not radially symmetric, and they are especially problematic for representing linear, planar, and many non-spherical structures. Therefore, we have developed a volumetric approximation and visualization system using ellipsoidal Gaussian functions which provides greater compression and visually more accurate encodings of volumetric scattered datasets. In this paper, we extend previous work to use ellipsoidal Gaussians as basis functions, create a rendering system that adapts these basis functions to graphics hardware rendering, and evaluate the encoding effectiveness and performance for both spherical and ellipsoidal Gaussians. (See the first illustrative sketch after this listing.)
Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Scientific Visualization, Ellipsoidal Basis Functions, Functional Approximation, Texture Advection

Item: Memory-Conserving Bounding Volume Hierarchies with Coherent Raytracing (The Eurographics Association and Blackwell Publishing Ltd, 2006)
Mahovsky, J.; Wyvill, B.

Item: Undersampled Light Field Rendering by a Plane Sweep (The Eurographics Association and Blackwell Publishing Ltd, 2006)
Liu, Yang; Chen, George; Max, Nelson; Hofsetz, Christian; McGuinness, Peter
Images synthesized by light field rendering exhibit aliasing artifacts when the light field is undersampled; adding new light field samples improves the image quality and reduces aliasing, but new samples are expensive to acquire. Light field rays are traditionally gathered directly from the source images, but new rays can also be inferred through geometry estimation.
This paper describes a light field rendering approach based on this principle: it estimates geometry from the set of source images using multi-baseline stereo reconstruction to supplement the existing light field rays and meet the minimum sampling requirement. The rendering and reconstruction steps are computed over a set of planes in the scene volume, and output images are synthesized by compositing the results from these planes. The planes are processed independently, and the number of planes can be adjusted to scale the amount of computation and achieve the desired frame rate. The reconstruction fidelity (and by extension the image quality) is improved by a library of matching templates that support matches along discontinuities in the image or geometry (e.g. object profiles and concavities). Given a set of silhouette images, the visual hull can be constructed and applied to further improve reconstruction by removing outlier matches. The algorithm is efficiently implemented as a set of image filter operations on commodity graphics hardware and achieves image synthesis at interactive rates.

Item: Data Preparation for Real-time High Quality Rendering of Complex Models (The Eurographics Association and Blackwell Publishing, Inc, 2006)
Klein, Reinhard
The capability of current 3D acquisition systems to digitize the geometry and reflection behaviour of objects, as well as the sophisticated application of CAD techniques, leads to rapidly growing digital models which pose new challenges for interaction and visualization. Due to the sheer size of the geometry as well as the texture and reflection data, which are often in the range of several gigabytes, efficient techniques for analyzing, compressing and rendering are needed. In this talk I will present some of the research carried out in our graphics group over the past years, motivated by industrial partners, in order to automate the data preparation step and allow for real-time high quality rendering, e.g. in the context of VR applications. Strengths and limitations of the different techniques will be discussed and future challenges will be identified. The presentation will go along with live demonstrations.

Item: Editorial (The Eurographics Association and Blackwell Publishing Ltd., 2006)
Duke, David; Scopigno, Roberto

Item: A Randomized Approach for Patch-based Texture Synthesis using Wavelets (The Eurographics Association and Blackwell Publishing Ltd, 2006)
Tonietto, L.; Walter, M.; Jung, C. R.
We present a wavelet-based approach for selecting patches in patch-based texture synthesis. We randomly select the first block that satisfies a minimum error criterion, computed from the wavelet coefficients (using 1D or 2D wavelets) of the overlapping region. We show that our wavelet-based approach improves texture synthesis for samples where previous work fails, mainly textures with prominent aligned features. It also generates textures of similar quality when compared against texture synthesis using feature maps, with the advantage that our proposed method uses implicit edge information (since it is embedded in the wavelet coefficients), whereas feature maps rely explicitly on edge features. In previous work, the best patches are selected among all possible candidates using an L2 norm on the RGB or grayscale pixel values of the boundary zones.
The L2 metric provides the raw pixel-to-pixel difference, disregarding image structures, such as edges, that are relevant to the human visual system and therefore to the synthesis of new textures. (See the second illustrative sketch after this listing.)

Item: 27th EUROGRAPHICS General Assembly (The Eurographics Association and Blackwell Publishing Ltd, 2006)

Item: Tuning Subdivision by Minimising Gaussian Curvature Variation Near Extraordinary Vertices (The Eurographics Association and Blackwell Publishing, Inc, 2006)
Augsdoerfer, U.H.; Dodgson, N.A.; Sabin, M.A.
We present a method for tuning primal stationary subdivision schemes to give the best possible behaviour near extraordinary vertices with respect to curvature variation. Current schemes lead to a limit surface around extraordinary vertices for which the Gaussian curvature diverges, as demonstrated by Karciauskas et al. [KPR04]. Even when the coefficients are chosen such that the subsubdominant eigenvalues equal the square of the subdominant eigenvalue of the subdivision matrix [DS78], there is still variation in the curvature of the subdivision surface around the extraordinary vertex, as shown in recent work by Peters and Reif [PR04] and illustrated by Karciauskas et al. [KPR04]. In our tuning method we optimise within the space of subdivision schemes with bounded curvature to minimise this variation in curvature around the extraordinary vertex. To demonstrate our method we present results for the Catmull-Clark [CC78], 4-8 [Vel01, VZ01] and 4-3 [PS03] subdivision schemes. We compare our results to previous work on the tuning of these schemes and show that the coefficients derived with this method give a significantly smaller curvature variation around extraordinary vertices.
Categories and Subject Descriptors (according to ACM CCS): I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling; I.3.6 [Computer Graphics]: Methodology and Techniques

Item: The John Lansdown Award 2006 (The Eurographics Association and Blackwell Publishing Ltd, 2006)
Duce, David

Item: Virtual Reality Course - A Natural Enrichment of Computer Graphics Classes (The Eurographics Association and Blackwell Publishing Ltd., 2006)
Zara, J.
This paper shows how a Virtual Reality (VR) course can naturally extend and put into practice a wide range of Computer Graphics (CG) principles and programming techniques. Because of real-time processing requirements, attention is paid to efficient modeling conventions, time-saving rendering approaches, and user interfaces allowing smooth navigation in a virtual environment. All these issues play an important role especially when designing virtual worlds targeted at the web, i.e. utilizing the VRML/X3D standards. The paper presents the structure and related information of a VR course that has been taught at various universities during the last six years. Our experience clearly demonstrates that students appreciate its contents even if they have already completed courses on 3D graphics.

Item: REPORT OF THE STATUTORY AUDITORS TO THE GENERAL MEETING OF THE MEMBERS OF EUROGRAPHICS ASSOCIATION GENEVA (The Eurographics Association and Blackwell Publishing Ltd, 2006)

Item: A PBL Experience in the Teaching of Computer Graphics (The Eurographics Association and Blackwell Publishing Ltd., 2006)
Marti, E.; Gil, D.; Julia, C.
Project-Based Learning (PBL) is an educational strategy to improve students' learning capability that, in recent years, has gained progressive acceptance in undergraduate studies. This methodology is based on solving a problem or project in a student working group.
In this way, PBL focuses on learning the tools necessary to correctly find a solution to given problems. Since the learning initiative is transferred to the student, the PBL method promotes students' own abilities. This allows a better assessment of the true workload that the student carries out in the subject. It follows that the methodology conforms to the guidelines of the Bologna document, which quantifies the student workload in a subject by means of the European Credit Transfer System (ECTS). PBL is currently applied in undergraduate studies that require strong practical training, such as medicine, nursing or law. Although this is also the case in engineering studies, surprisingly few experiences have been reported. In this paper we propose to use PBL in the educational organization of the Computer Graphics subjects in the Computer Science degree. Our PBL project focuses on the development of a C++ graphical environment based on the OpenGL libraries for visualization and handling of different graphical objects. The starting point is a basic skeleton that already includes lighting functions, perspective projection with mouse interaction to change the point of view, and three predefined objects. Students have to complete this skeleton by adding their own functions to solve the project. A total of 10 projects have been proposed and successfully solved. The exercises range from human face rendering to articulated objects, such as robot arms or puppets. In the present paper we report in detail the statement and educational objectives for two of the projects: solar system visualization and a chess game. We also describe our earlier educational experience, based on standard classroom theory, problem and practice sessions, and the reasons that motivated our search for other learning methods. We chose PBL mainly because it improves the student's learning initiative. We have applied the PBL educational model since the beginning of the second semester. Student feedback indicates increased interest in the subject. We present a comparative study of the teachers' and students' workload under PBL and the classic teaching approach, which suggests that the workload increase in PBL is not as high as it seems.

Item: Silhouette Extraction in Hough Space (The Eurographics Association and Blackwell Publishing, Inc, 2006)
Olson, Matt; Zhang, Hao
Object-space silhouette extraction is an important problem in fields ranging from non-photorealistic computer graphics to medical robotics. We present an efficient silhouette extractor for triangle meshes under perspective projection and make three contributions. First, we describe a novel application of 3D Hough transforms, which allows us to organize mesh data more effectively for silhouette computations than the traditional dual transform. Next, we introduce an incremental silhouette update algorithm that operates on an octree augmented with neighbour information and optimized for efficient low-level traversal. Finally, we present a method for the initial extraction of the silhouette, using the same data structure, whose performance is linear in the size of the extracted silhouette. We demonstrate significant performance improvements of our approach over the current state of the art. (See the third illustrative sketch after this listing.)
Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Three-Dimensional Graphics and Realism]: Visible line/surface algorithms
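Illustrative sketch 1 (for the ellipsoidal basis function item by Jang et al.): this is not the authors' encoder or GPU renderer; it is a minimal CPU sketch, under stated assumptions, of how a functional approximation built from ellipsoidal (anisotropic) Gaussian basis functions can be evaluated at scattered query points. All function and variable names are hypothetical.

```python
# Sketch only: evaluating a sum of ellipsoidal Gaussian basis functions.
# Each basis function has a centre c_i, a weight w_i, and a symmetric
# positive-definite matrix A_i that encodes its ellipsoidal shape.
import numpy as np

def eval_ellipsoidal_gaussians(points, centres, weights, shapes):
    """Evaluate f(x) = sum_i w_i * exp(-(x - c_i)^T A_i (x - c_i)).

    points  : (N, 3) query positions
    centres : (M, 3) basis-function centres
    weights : (M,)   scalar weights
    shapes  : (M, 3, 3) symmetric positive-definite shape matrices
    """
    values = np.zeros(len(points))
    for c, w, A in zip(centres, weights, shapes):
        d = points - c                          # offsets from the centre, (N, 3)
        q = np.einsum('ni,ij,nj->n', d, A, d)   # quadratic form per query point
        values += w * np.exp(-q)
    return values

# Toy usage: one isotropic and one strongly anisotropic (ellipsoidal) basis function.
pts = np.random.rand(5, 3)
centres = np.array([[0.5, 0.5, 0.5], [0.2, 0.8, 0.3]])
weights = np.array([1.0, 0.7])
shapes = np.stack([np.eye(3) * 4.0, np.diag([40.0, 2.0, 2.0])])
print(eval_ellipsoidal_gaussians(pts, centres, weights, shapes))
```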
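Illustrative sketch 2 (for the wavelet-based patch selection item by Tonietto et al.): this is not the paper's implementation. It assumes a one-level, row-wise Haar transform of the overlap region, an L2 distance between the resulting coefficients, and a random scan that keeps the first candidate below a user-chosen threshold; the fallback to the overall best candidate and all names are assumptions.

```python
# Sketch only: scoring candidate patches by comparing Haar wavelet
# coefficients of the overlap region instead of raw pixel values.
import numpy as np

def haar_1d(row):
    """One level of a 1D Haar transform (averages followed by details)."""
    row = row[: len(row) // 2 * 2]               # drop a trailing odd sample
    avg = (row[0::2] + row[1::2]) / np.sqrt(2.0)
    det = (row[0::2] - row[1::2]) / np.sqrt(2.0)
    return np.concatenate([avg, det])

def overlap_error(target_overlap, candidate_overlap):
    """L2 distance between row-wise Haar coefficients of two equally sized overlaps."""
    t = np.apply_along_axis(haar_1d, 1, target_overlap.astype(float))
    c = np.apply_along_axis(haar_1d, 1, candidate_overlap.astype(float))
    return float(np.sqrt(np.sum((t - c) ** 2)))

def pick_patch(target_overlap, candidates, threshold, rng=np.random.default_rng()):
    """Return the first randomly visited candidate whose wavelet error is below
    the threshold; fall back to the overall best candidate if none qualifies."""
    order = rng.permutation(len(candidates))
    best = min(order, key=lambda i: overlap_error(target_overlap, candidates[i]))
    for i in order:
        if overlap_error(target_overlap, candidates[i]) < threshold:
            return i
    return best
```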
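Illustrative sketch 3 (for the silhouette extraction item by Olson and Zhang): this shows only the basic brute-force, object-space silhouette test under perspective projection that their Hough-space data structure is designed to accelerate; none of the Hough transform, octree, or incremental-update machinery appears here, and the data layout and names are assumptions.

```python
# Sketch only: an edge lies on the silhouette when one of its two adjacent
# triangles faces the eye point and the other faces away from it.
import numpy as np

def face_plane(v0, v1, v2):
    """Return (unit normal n, offset d) of the plane n.x + d = 0 through a triangle."""
    n = np.cross(v1 - v0, v2 - v0)
    n = n / np.linalg.norm(n)
    return n, -float(np.dot(n, v0))

def silhouette_edges(vertices, faces, edge_faces, eye):
    """vertices: (N, 3) array; faces: list of vertex-index triples;
    edge_faces: dict mapping an edge (i, j) to the indices of its two adjacent faces."""
    planes = [face_plane(*vertices[list(f)]) for f in faces]
    silhouette = []
    for edge, (fa, fb) in edge_faces.items():
        sa = np.dot(planes[fa][0], eye) + planes[fa][1]   # signed distance of the eye
        sb = np.dot(planes[fb][0], eye) + planes[fb][1]   # to each adjacent face plane
        if sa * sb < 0.0:                                  # opposite sides: silhouette edge
            silhouette.append(edge)
    return silhouette
```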