Issue 3
Browsing Issue 3 by issue date, showing items 1-20 of 41.
Item: Fast Feature-Based Metamorphosis and Operator Design (Blackwell Publishers Ltd and the Eurographics Association, 1998)
Lee, Tong-Yee; Lin, Young-Ching; Sun, Y.N.; Lin, Leeween
Metamorphosis is a powerful visual technique for producing interesting transitions between two images or volume data sets. Image or volume metamorphosis using simple features provides flexible and easy control of the visual effect. The feature-based image warping proposed by Beier and Neely is a brute-force approach. In this paper we first propose optimization methods that reduce its warping time without noticeable loss of image quality. Second, we extend our methods to 3D volume data and propose several interesting warping operators allowing global and local metamorphosis of volume data.
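For orientation, the Beier-Neely warp this paper optimizes maps every destination pixel back into the source image under the influence of every feature line pair, which is exactly the brute-force cost referred to above. Below is a minimal single-channel NumPy sketch of that per-pixel loop; the parameters a, b, and p follow the original Beier-Neely paper, and nearest-neighbor sampling is a simplification, not the authors' implementation.

```python
import numpy as np

def perp(v):
    # 90-degree rotation of a 2D vector
    return np.array([-v[1], v[0]])

def beier_neely_warp(src, lines_dst, lines_src, a=0.5, b=1.0, p=0.2):
    """Warp grayscale `src` so feature lines `lines_src` map onto `lines_dst`.
    Each line is a pair (P, Q) of distinct 2D float arrays. Brute force:
    every pixel is influenced by every line pair."""
    h, w = src.shape
    out = np.zeros_like(src)
    for y in range(h):
        for x in range(w):
            X = np.array([x, y], dtype=float)
            dsum, wsum = np.zeros(2), 0.0
            for (P, Q), (Ps, Qs) in zip(lines_dst, lines_src):
                PQ = Q - P
                u = np.dot(X - P, PQ) / np.dot(PQ, PQ)
                v = np.dot(X - P, perp(PQ)) / np.linalg.norm(PQ)
                PQs = Qs - Ps
                Xs = Ps + u * PQs + v * perp(PQs) / np.linalg.norm(PQs)
                if u < 0:
                    dist = np.linalg.norm(X - P)
                elif u > 1:
                    dist = np.linalg.norm(X - Q)
                else:
                    dist = abs(v)
                weight = (np.linalg.norm(PQ) ** p / (a + dist)) ** b
                dsum += weight * (Xs - X)
                wsum += weight
            Xs = X + dsum / wsum
            xi = int(np.clip(round(Xs[0]), 0, w - 1))
            yi = int(np.clip(round(Xs[1]), 0, h - 1))
            out[y, x] = src[yi, xi]
    return out
```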
Item: Multiresolution Isosurface Extraction with Adaptive Skeleton Climbing (Blackwell Publishers Ltd and the Eurographics Association, 1998)
Poston, Tim; Wong, Tien-Tsin; Heng, Pheng-Ann
An isosurface extraction algorithm which can directly generate multiresolution isosurfaces from volume data is introduced. It generates low-resolution isosurfaces, with 4 to 25 times fewer triangles than those generated by the marching cubes algorithm, in comparable running times. By climbing from vertices (0-skeleton) to edges (1-skeleton) to faces (2-skeleton), the algorithm constructs boxes which adapt to the geometry of the true isosurface. Unlike previous adaptive marching cubes algorithms, it does not suffer from the gap-filling problem. Although the triangles in its meshes may not be optimally reduced, the algorithm is much faster than postprocessing triangle-reduction algorithms; hence the coarse meshes it produces can be used as starting points for mesh optimization, if mesh optimality is the main concern.

Item: Dithered Color Quantization (Blackwell Publishers Ltd and the Eurographics Association, 1998)
Buhmann, J. M.; Fellner, Dieter W.; Held, M.; Ketterer, J.; Puzicha, J.
Image quantization and digital halftoning are fundamental problems in computer graphics which arise when displaying high-color images on non-truecolor devices. Both steps are generally performed sequentially and, in most cases, independently of each other. Color quantization with a pixel-wise defined distortion measure and the dithering process with its local neighborhood optimize different quality criteria or, frequently, follow a heuristic without reference to any quality measure. In this paper we propose a new method to simultaneously quantize and dither color images. The method is based on a rigorous cost-function approach which optimizes a quality criterion derived from a generic model of human perception. A highly efficient multiscale optimization algorithm is developed for the dithered color quantization cost function. The quality criterion and the optimization algorithms are evaluated on a representative set of artificial and real-world images, as well as on a collection of icons. A significant image-quality improvement is observed compared to standard color-reduction approaches.

Item: The Art of Knitted Fabrics, Realistic & Physically Based Modelling Of Knitted Patterns (Blackwell Publishers Ltd and the Eurographics Association, 1998)
Meißner, M.; Eberhardt, B.
In this paper we present a system that uses three-dimensional computer graphics in garment design. The system is capable of visualizing the "real", i.e. physically correct, appearance of a knitted fabric. Fast visualization of a physically correct micro-structure garment is of crucial importance in the textile industry, since it enables fast and less expensive product development. The system may be used either in the design of new products or for teaching the art of knitted fabrics. Our system directly uses the machine code produced by the design system for knitting machines. A physical model, a particle system, is used to compute the dynamics of the micro-structure of the knitted garment.

Item: A New Approach for Direct Manipulation of Free-Form Curve (Blackwell Publishers Ltd and the Eurographics Association, 1998)
Zheng, J.M.; Chan, K.W.; Gibson, I.
There is an increasing demand for more intuitive methods for creating and modifying free-form curves and surfaces in CAD modeling systems. Such methods should be based not only on changes to mathematical parameters such as control points, knots, and weights, but also on the user's specified constraints and shapes. This paper presents a new approach for directly manipulating the shape of a free-form curve, leading to better control of the curve deformation and a more intuitive CAD modeling interface. The user's intended deformation of a curve is automatically converted into a modification of the corresponding NURBS control points and knot sequence. The algorithm includes curve elevation, knot refinement, control point repositioning, and knot removal. Several examples shown in this paper demonstrate that the proposed method can deform a NURBS curve into the desired shape. Currently the algorithm concentrates on purely geometric considerations; further work will include the effect of material properties.

Item: Mass-Spring Simulation using Adaptive Non-Active Points (Blackwell Publishers Ltd and the Eurographics Association, 1998)
Howlett, P.; Hewitt, W.T.
This paper introduces an adaptive component to a mass-spring system as used in the modelling of cloth for computer animation. The new method introduces non-active points into the model, which adapt the shape of the cloth at inaccuracies. This improves on conventional uniform mass-spring systems by producing more visually pleasing results when simulating the drape of cloth over irregular objects. The computational cost of simulation is decreased by reducing the complexity of collision handling and enabling the use of coarser mass-spring networks.
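As a baseline for the adaptive scheme above, a conventional uniform mass-spring step can be sketched in a few lines. This is the standard explicit-Euler cloth integrator with illustrative constants, not the authors' implementation.

```python
import numpy as np

def step(pos, vel, springs, rest, mass=0.01, k=50.0, damping=0.01,
         gravity=np.array([0.0, -9.81, 0.0]), dt=1e-3):
    """One explicit-Euler step of a uniform mass-spring cloth.
    pos, vel: (n, 3) arrays; springs: (m, 2) index pairs; rest: (m,) lengths."""
    # external forces: gravity on every particle, plus simple velocity damping
    force = np.tile(mass * gravity, (len(pos), 1)) - damping * vel
    for (i, j), r in zip(springs, rest):
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        f = k * (length - r) * d / length   # Hooke's law along the spring
        force[i] += f
        force[j] -= f
    vel = vel + dt * force / mass
    pos = pos + dt * vel
    return pos, vel
```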
Item: A Bernstein-Bezier Based Approach to Soft Tissue Simulation (Blackwell Publishers Ltd and the Eurographics Association, 1998)
Roth, S.H.; Gross, Markus H.; Turello, Silvio; Carls, Friedrich R.
This paper discusses a Finite Element approach for volumetric soft tissue modeling in the context of facial surgery simulation. We elaborate on the underlying physics and address some computational aspects of the finite element discretization. In contrast to existing approaches, speed is not our first concern; we strive for the highest possible accuracy of simulation. We therefore propose an extension of linear elasticity towards incompressibility and nonlinear material behavior, in order to describe the complex properties of human soft tissue more accurately. Furthermore, we incorporate higher-order interpolation functions using a Bernstein-Bezier formulation, which has various advantageous properties, such as its integral polynomial form of arbitrary degree, efficient subdivision schemes, and suitability for geometric modeling and rendering. In addition, the use of tetrahedral Finite Elements does not put any restriction on the geometry of the simulated volumes. Experimental results obtained from a synthetic block of soft tissue and from the Visible Human Data Set illustrate the performance of the envisioned model.

Item: Progressive Iso-Surface Extraction from Hierarchical 3D Meshes (Blackwell Publishers Ltd and the Eurographics Association, 1998)
Grosso, Roberto; Ertl, Thomas
A multiresolution data decomposition offers a fundamental framework supporting compression, progressive transmission, and level-of-detail (LOD) control for large two- or three-dimensional data sets discretized on complex meshes. In this paper we extend a previously presented algorithm for 3D mesh reduction for volume data, based on multilevel finite element approximations, in two ways. First, we present efficient data structures which allow approximations of the volume data to be constructed incrementally at lower or higher resolutions at interactive rates. An abstract description of the mesh hierarchy in terms of a coarse base mesh and a set of integer records offers a high compression potential, which is essential for efficient storage and progressive network transmission. Based on this mesh hierarchy, we then develop a new progressive iso-surface extraction algorithm. For a given iso-value, the corresponding iso-surface can be computed at different levels of resolution. Changing to a finer or coarser resolution updates the surface only in those regions where the volume data is being refined or coarsened. Our approach allows very large scalar fields, such as medical data sets, to be visualized interactively, whereas conventional algorithms would require at least an order of magnitude more resources.

Item: Perception Based Color Image Difference (Blackwell Publishers Ltd and the Eurographics Association, 1998)
Neumann, Laszlo; Matkovic, Kresimir; Purgathofer, Werner
A good image metric is often needed in digital image synthesis. It can be used to check the convergence behavior of progressive methods, to compare images rendered using various rendering methods, and so on. Since images are rendered to be observed by humans, an image metric should correspond to human perception as well. We propose a new algorithm which operates in the original image space; there is no need for Fourier or wavelet transforms. Furthermore, the new metric is viewing-distance dependent. The method uses the contrast sensitivity function. The main idea is to place a number of rectangles of various sizes in the images and to compute the average CIE LUV color difference between corresponding rectangles. Errors are then weighted according to the rectangle size and the contrast sensitivity function.
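The rectangle-based metric just described lends itself to a short sketch. The version below assumes both images are already converted to CIE LUV, uses the Mannos-Sakrison contrast sensitivity function as a stand-in for whichever CSF the authors used, and maps a rectangle's size to a rough spatial frequency via a pixels-per-degree viewing parameter; these specifics are assumptions for illustration.

```python
import numpy as np

def csf(f):
    # Mannos-Sakrison contrast sensitivity, f in cycles per degree
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def rect_difference(img_a, img_b, n_rects=1000, pixels_per_degree=40.0,
                    rng=np.random.default_rng(0)):
    """Distance between two (h, w, 3) images assumed to be in CIE LUV.
    Averages the mean-color difference over randomly placed rectangles,
    weighting each by a CSF evaluated at the rectangle's rough spatial
    frequency, one plausible reading of the scheme in the abstract."""
    h, w, _ = img_a.shape
    total, weight_sum = 0.0, 0.0
    for _ in range(n_rects):
        rh = rng.integers(2, h // 2 + 1)
        rw = rng.integers(2, w // 2 + 1)
        y = rng.integers(0, h - rh + 1)
        x = rng.integers(0, w - rw + 1)
        mean_a = img_a[y:y+rh, x:x+rw].mean(axis=(0, 1))
        mean_b = img_b[y:y+rh, x:x+rw].mean(axis=(0, 1))
        delta = np.linalg.norm(mean_a - mean_b)   # Euclidean LUV difference
        freq = pixels_per_degree / max(rh, rw)    # ~cycles per degree
        wgt = csf(freq)
        total += wgt * delta
        weight_sum += wgt
    return total / weight_sum
```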
Item: Frontiers in User-Computer Interaction (Blackwell Publishers Ltd and the Eurographics Association, 1998)
Van Dam, Andries
In this age of (near-)adequate computing power, the power and usability of the user interface is as key to an application's success as its functionality. Most of the code in modern desktop productivity applications resides in the user interface. But despite its centrality, the user interface field is currently in a rut: the WIMP GUI (Windows, Icons, Menus, Point-and-click, based on keyboard and mouse) has evolved little since it was pioneered by Xerox PARC in the early '70s. Computer and display form factors will change dramatically in the near future, and new kinds of interaction devices will soon become available. Desktop environments will be enriched not only with PDAs such as the Newton and Palm Pilot, but also with wearable computers and large-screen displays produced by new projection technology, including office-based immersive virtual reality environments. On the input side, we will finally have speech recognition and force-feedback devices. Thus we can look forward to user interfaces that are dramatically more powerful and better matched to human sensory capabilities than those dependent solely on keyboard and mouse. 3D interaction widgets controlled by mice or other interaction devices with three or more degrees of freedom are a natural evolution from their two-dimensional WIMP counterparts; they can decrease the cognitive distance between widget and task for many tasks that are intrinsically 3D, such as scientific visualization and MCAD. More radical post-WIMP UIs are needed for immersive virtual reality, where keyboard and mouse are absent. Immersive VR provides good driving applications for developing post-WIMP UIs based on multimodal interaction, involving more of our senses through the combined use of gesture, speech, and haptics.

Item: Maximum Intensity Projection Using Splatting in Sheared Object Space (Blackwell Publishers Ltd and the Eurographics Association, 1998)
Cai, Wenli; Sakas, Georgios
In this paper we present a new Maximum Intensity Projection (MIP) algorithm implemented using splatting in a shear-warp context. The algorithm renders a MIP image by first splatting each voxel onto two intermediate spaces called the "worksheet" and the "shear image". The maximum value is then evaluated between worksheet and shear image. Finally, the shear image is warped onto the screen to generate the result image. Different footprints implementing different quality modes are discussed. In addition, we introduce a line-encoded indexing speed-up method to obtain interactive speed. The algorithm allows a quantitative, predictable trade-off between interactivity and image quality.
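A toy version of MIP in sheared object space can make the pipeline above concrete. The sketch below shifts each slice by a depth-proportional integer offset and keeps a running maximum in an intermediate shear image; proper footprint splatting, the worksheet, and the final warp to screen space are all omitted, so this is only the skeleton of the idea.

```python
import numpy as np

def shear_mip(volume, shear=(0.3, 0.1)):
    """Toy maximum-intensity projection in sheared object space.
    volume: (nz, ny, nx) array of non-negative intensities; shear: the
    non-negative per-slice (x, y) offset per unit depth. Integer shifts
    stand in for proper footprint splatting, and the final warp to the
    screen is omitted."""
    nz, ny, nx = volume.shape
    pad_y = int(shear[1] * nz) + 1
    pad_x = int(shear[0] * nz) + 1
    shear_img = np.zeros((ny + pad_y, nx + pad_x))
    for z in range(nz):
        oy = int(round(shear[1] * z))
        ox = int(round(shear[0] * z))
        region = shear_img[oy:oy+ny, ox:ox+nx]
        np.maximum(region, volume[z], out=region)   # running max per pixel
    return shear_img
```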
Item: Conservative Visibility and Strong Occlusion for Viewspace Partitioning of Densely Occluded Scenes (Blackwell Publishers Ltd and the Eurographics Association, 1998)
Cohen-Or, Daniel; Fibich, Gadi; Halperin, Dan; Zadicario, Eyal
Computing the visibility of outdoor scenes is often much harder than that of indoor scenes. A typical urban scene, for example, is densely occluded, and it is effective to precompute its visibility space, since from a given point only a small fraction of the scene is visible. The difficulty is that although the majority of objects are hidden, some parts might be visible at a distance in an arbitrary location, and it is not clear how to detect them quickly. In this paper we present a method to partition the viewspace into cells, each containing a conservative superset of the visible objects. For a given cell, the method tests the visibility of all the objects in the scene. For each object it searches for a strong occluder, which guarantees that the object is not visible from any point within the cell. We show analytically that in a densely occluded scene the vast majority of objects are strongly occluded, and the overhead of using conservative visibility (rather than exact visibility) is small. These results are further supported by our experimental results. We also analyze the cost of the method and discuss its effectiveness.

Item: Subdivision Schemes for Thin Plate Splines (Blackwell Publishers Ltd and the Eurographics Association, 1998)
Weimer, Henrik; Warren, Joe
Thin plate splines are a well-known entity of geometric design. They are defined as the minimizer of a variational problem whose differential operators approximate a simple notion of bending energy. Thin plate splines therefore approximate surfaces with minimal bending energy, and they are widely considered the standard "fair" surface model. Such surfaces are desired for many modeling and design applications. Traditionally, such surfaces are constructed by solving the associated variational problem using finite elements or analytic solutions based on radial basis functions. This paper presents a novel approach for defining and computing thin plate splines using subdivision methods. We present two methods for the construction of thin plate splines based on subdivision: a globally supported subdivision scheme which exactly minimizes the energy functional, and a family of strictly local subdivision schemes which utilize only a small, finite number of distinct subdivision rules and approximately solve the variational problem. A trade-off between the accuracy of the approximation and the locality of the subdivision scheme is used to pick a particular member of this family. We then show applications of these approximating subdivision schemes to scattered data interpolation and the design of fair surfaces. In particular, we suggest an efficient methodology for finding control points for the local subdivision scheme that lead to an interpolating limit surface, and we demonstrate how the schemes can be used for the effective and efficient design of fair surfaces.
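For contrast with the subdivision approach, the traditional radial-basis construction the abstract mentions is compact enough to sketch: 2D thin plate spline interpolation with kernel U(r) = r^2 log r plus an affine part, solved as one dense linear system. This is the classic textbook formulation, not the paper's method.

```python
import numpy as np

def _U(d):
    # TPS kernel U(r) = r^2 log r, with U(0) = 0
    safe = np.where(d > 0, d, 1.0)
    return np.where(d > 0, d * d * np.log(safe), 0.0)

def tps_fit(points, values):
    """Fit a thin plate spline interpolating scattered 2D data.
    points: (n, 2) array; values: (n,) array."""
    n = len(points)
    K = _U(np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), points])     # affine part: 1, x, y
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    rhs = np.concatenate([values, np.zeros(3)])
    coef = np.linalg.solve(A, rhs)
    return coef[:n], coef[n:]                    # kernel weights, affine coeffs

def tps_eval(points, w, a, query):
    U = _U(np.linalg.norm(query[:, None, :] - points[None, :, :], axis=-1))
    return U @ w + a[0] + query @ a[1:]

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
vals = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
w, a = tps_fit(pts, vals)
print(tps_eval(pts, w, a, pts))   # reproduces vals at the data points
```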
Item: A Two-Pass Hardware-Based Method for Hierarchical Radiosity (Blackwell Publishers Ltd and the Eurographics Association, 1998)
Martin, I.; Pueyo, X.; Tost, D.
Finite element methods for radiosity aim to compute global illumination solutions efficiently. However, these methods are not suitable for obtaining high-quality images due to their lack of error control. Two-pass methods achieve that level of quality by computing illumination at each pixel, thus introducing a high computational overhead. We present a two-pass method for radiosity that produces high-quality images while avoiding most of the per-pixel computations. The method computes a coarse hierarchical radiosity solution and then performs a second pass using current graphics hardware accelerators to generate illumination as high-definition textures.

Item: Interactive 3D Morphing (Blackwell Publishers Ltd and the Eurographics Association, 1998)
Bao, Hujun; Peng, Qunsheng
A new 3D morphing algorithm for polyhedral objects with the same genus is presented. Our main contribution is an efficient and general algorithm for setting up the vertex correspondence between the polyhedra. The proposed algorithm first interactively partitions the two original polyhedra into the same number of polygonal patches; the patch correspondence is established during partitioning. Each pair of corresponding patches is then parametrized and resampled using harmonic maps. Finally, a feature polyhedron is constructed for each original polyhedron, and the vertex correspondence between each original polyhedron and its feature polyhedron is established automatically following a clustering scheme. The shape transition between the original polyhedral models is accomplished by composing three successive transformations, using the feature polyhedra as bridges. Experimental results demonstrate that our algorithm is very robust and can deal with very general cases, including polyhedra of non-zero genus.

Item: Importance Driven Halftoning (Blackwell Publishers Ltd and the Eurographics Association, 1998)
Streit, L.; Buchanan, J.
Most halftoning techniques have been primarily concerned with achieving an accurate reproduction of local gray-scale intensities while avoiding the introduction of artifacts. A secondary concern has been the preservation of edges in the halftoned image. In this paper we introduce a new halftoning technique that uses a bandpass pyramid to achieve an accurate reproduction of important attributes in the image. Ink is distributed through the bandpass pyramid primarily according to a user-defined importance function. The technique has three main characteristics. First, it can reproduce the results of many other halftoning techniques by allowing a generic importance function to be specified: if the chosen importance function is average intensity, we obtain results similar to traditional halftoning; we also show how the importance function can be changed to highlight areas of high variance. Second, in addition to changing the importance function, the drawing primitives can be changed: by using line segments instead of single pixels as drawing primitives, we illustrate how edge enhancement can be achieved. Third, the technique allows the user to easily limit the number of drawing primitives used, which is useful for limited-resource rendering. In addition to providing a tailorable halftoning technique, our method can easily be adapted to produce two-tone non-photorealistic (NPR) images. We illustrate this by showing how sketched effects can be achieved by aligning the drawing primitives according to different image attributes.
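The driving idea of the halftoning paper, ink distributed according to a user-defined importance function, can be shown in a deliberately reduced form: the sketch below drops the bandpass pyramid and line-segment primitives and simply places a fixed dot budget by sampling an importance map.

```python
import numpy as np

def importance_halftone(importance, n_dots=20000, rng=np.random.default_rng(0)):
    """Place a fixed budget of black dots with probability proportional to a
    user-defined importance map (an (h, w) array of positive weights, with
    n_dots at most h*w). This shows only the importance-drives-ink idea;
    the paper's bandpass pyramid and line primitives are omitted."""
    h, w = importance.shape
    prob = importance.ravel() / importance.sum()
    idx = rng.choice(h * w, size=n_dots, replace=False, p=prob)
    page = np.ones(h * w)          # white page
    page[idx] = 0.0                # ink
    return page.reshape(h, w)

# Choosing importance = darkness (1 - intensity) mimics traditional
# intensity halftoning; a local-variance map instead concentrates ink
# in detailed regions, as the abstract suggests.
def darkness(img):
    return 1.0 - img + 1e-6
```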
Item: A Vector Approach for Global Illumination in Ray Tracing (Blackwell Publishers Ltd and the Eurographics Association, 1998)
Zaninetti, Jacques; Serpaggi, Xavier; Peroche, Bernard
This paper presents a method for taking global illumination into account in a ray tracing environment. A vector approach is introduced which can deal with all types of light paths and the directional properties of materials. Three types of vectors are defined: Direct Light Vectors, associated with light sources; Indirect Light Vectors, which correspond to light that has been diffusely reflected at least once; and Caustic Light Vectors, which are associated with light rays emitted by sources and reflected and/or transmitted by specular surfaces. These vectors are estimated at a small number of points in the scene. A weighted interpolation between known values reconstructs these vectors at the remaining points, with the help of a gradient computation for the indirect component. The approach also handles uniform area light sources (spherical, rectangular, and circular) for all types of vectors. Computed images are thus more accurate, and no discretization of the scene geometry is needed.

Item: A Framework for Synchronized Editing of Multiple Curve Representations (Blackwell Publishers Ltd and the Eurographics Association, 1998)
Grimm, Cindy; Ayers, Matthew
Editing curves and surfaces is difficult in part because their mathematical representations rarely correspond to most people's idea of a curve or surface. The implementation (and hence behavior) of most manipulation tools is intertwined with a particular curve or surface representation; this can make reimplementing a tool with a different representation problematic. A system using a single representation must therefore either limit the types of tools available or convert existing tools to work on the system's representation. In this paper we present a framework for editing curves or surfaces which supports multiple representations and ensures that they stay synchronized. As a proof of concept, we have created a curve editor containing several tools, each of which manipulates one of three different curve representations: polylines, NURBS, and multiresolution B-splines.

Item: An Enhanced Spring Model for Information Visualization (Blackwell Publishers Ltd and the Eurographics Association, 1998)
Theisel, Holger; Kreuseler, Matthias
In this paper we present a new technique for visualizing multidimensional information. We describe objects of a higher-dimensional information space as small closed free-form surfaces in the visualization. The location, size, and shape of these surfaces uniquely describe the original objects in information space. The underlying enhanced spring model is introduced, and the technique is applied to two test data sets.

Item: A Light Hierarchy for Fast Rendering of Scenes with Many Lights (Blackwell Publishers Ltd and the Eurographics Association, 1998)
Paquette, Eric; Poulin, Pierre; Drettakis, George
We introduce a new data structure, in the form of a light hierarchy, for efficiently ray tracing scenes with many light sources. An octree is constructed over the point light sources in a scene, and each node represents all the light sources it contains by means of a virtual light source. We determine bounds on the error committed by this approximation when shading a point, for both diffuse and specular reflections. These bounds are then used to guide a hierarchical shading algorithm: if the current level of the light hierarchy provides shading of sufficient quality, the approximation is used, thus avoiding the cost of shading for all the light sources contained below that level; otherwise, the descent into the light hierarchy continues. Our approach has been implemented for scenes without occlusion. The results show significant acceleration compared to standard ray tracing (up to 90 times faster) and an important improvement compared to Ward's adaptive shadow testing.
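A rough sketch of hierarchical shading over clustered lights, in the spirit of the last item: each node stands in for its lights with a virtual light at the power-weighted centroid, and the shader descends only when the cluster subtends too large an angle. Note the assumptions: a binary longest-axis split replaces the paper's octree, and a crude radius-over-distance threshold replaces its derived error bounds.

```python
import numpy as np

class LightNode:
    """Node of a light hierarchy: a single point light, or a cluster
    represented by a virtual light at its power-weighted centroid.
    `lights` is a list of (position ndarray, power float) pairs."""
    def __init__(self, lights):
        self.power = sum(p for _, p in lights)
        self.pos = sum(p * x for x, p in lights) / self.power
        self.radius = max(np.linalg.norm(x - self.pos) for x, _ in lights)
        self.children = []
        if len(lights) > 1:
            # split along the longest axis; a simple stand-in for an octree
            axis = np.argmax(np.ptp([x for x, _ in lights], axis=0))
            lights = sorted(lights, key=lambda lp: lp[0][axis])
            mid = len(lights) // 2
            self.children = [LightNode(lights[:mid]), LightNode(lights[mid:])]

def shade_diffuse(node, point, normal, eps=0.05):
    """Unoccluded diffuse shading. Uses the virtual light when the cluster
    looks small from `point` (radius/distance below eps); otherwise descends."""
    d = node.pos - point
    dist = np.linalg.norm(d)
    if not node.children or node.radius / dist < eps:
        cos = max(np.dot(normal, d / dist), 0.0)
        return node.power * cos / (dist * dist)
    return sum(shade_diffuse(c, point, normal, eps) for c in node.children)
```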