Issue 3
Item The Convergence of Graphics and Imaging (Blackwell Publishers Ltd and the Eurographics Association, 1998) Foley, James D.
Over twenty years ago a SIGGRAPH panel session addressed the convergence of computer graphics and image processing. At that time the emphasis was on low-level operations, such as filtering to avoid aliasing, and related psychophysics issues. More recently, graphics and imaging are converging at a higher level as we move toward blending the synthetic world of computer-generated images with the real world of computer-captured images. In this talk we describe several research directions that relate to this convergence, illustrated with specific examples of work at MERL - A Mitsubishi Electric Research Laboratory. These research directions are:
- Analyzing images of the human face to determine identity and orientation and, ultimately, to reconstruct the shape of the face.
- Reconstruction of static and dynamic 3D geometries from 2D images separated in time or space: here the objective is to take multiple images of a real-world scene and recreate the 3D geometry of the scene. If objects in the scene are moving, the objective is to extract the dynamic geometry. Once the geometry has been reconstructed, editing and relighting of the scene become possible.
- Display of 3D scalar fields (also known as volume graphics), which concerns 3D as opposed to 2D images, such as CT and MRI scans. These scans can be thought of as 3D images in that they are point samples of a 3D scalar field, just as a computer-captured image is a point sample of a 2D scalar field. The objective of volume graphics is to create and display the 3D geometries that underlie 3D images. An inexpensive yet real-time (30 fps for a 256 x 256 x 256 image) implementation of Pfister and Kaufman's Cube-4 rendering architecture will be described.

Item Emotion Editing using Finite Elements (Blackwell Publishers Ltd and the Eurographics Association, 1998) Koch, Rolf M.; Gross, Markus H.; Bosshard, Albert A.
This paper describes the prototype of a facial expression editor. In contrast to existing systems, the presented editor takes advantage both of medical data for the simulation and of facial anatomy in the definition of muscle groups. The C1-continuous geometry and the high degree of abstraction in expression editing set this system apart from others. Using finite elements, we achieve better precision than particle systems. Furthermore, precomputing facial action units enables us to compose facial expressions by a superposition of facial action geometries in real time. The presented model is based on a generic facial model using a thin plate and membrane approach for the surface and elastic springs for facial tissue modeling. It has been used successfully for facial surgery simulation. We illustrate the features of our system with examples from the Visible Human Dataset.

Item Optical Flow Rendering (Blackwell Publishers Ltd and the Eurographics Association, 1998) Park, Tae-Joon; Shin, Sung Yong; Lee, Seungyong
This paper proposes a new approach to image-based rendering that generates an image viewed from an arbitrary camera position and orientation by rendering optical flows extracted from reference images. To derive valid optical flows, we develop an analysis technique that improves the quality of stereo matching. Without using any special equipment such as range cameras, this technique constructs reliable optical flows from a sequence of matching results between reference images. We also derive validity conditions for optical flows and show that the obtained flows satisfy those conditions. Since environment geometry is inferred from the optical flows, we are able to generate more accurate images with this additional geometric information. Our approach makes it possible to combine an image rendered from optical flows with an image generated by a conventional rendering technique through a simple Z-buffer algorithm.
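The final compositing step mentioned above reduces to a per-pixel depth comparison. Below is a minimal sketch of such a Z-buffer merge, assuming both renderers output an RGB image plus a depth buffer; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def zbuffer_composite(color_a, depth_a, color_b, depth_b):
    """Merge two rendered images by keeping, at each pixel, the color
    whose depth is closer to the camera (smaller depth = closer).

    color_*: (H, W, 3) float arrays; depth_*: (H, W) float arrays.
    """
    closer = depth_a <= depth_b                        # per-pixel boolean mask
    out_color = np.where(closer[..., None], color_a, color_b)
    out_depth = np.minimum(depth_a, depth_b)
    return out_color, out_depth

# Example: the flow-rendered image wins wherever its fragments are nearer.
h, w = 4, 4
flow_img, flow_z = np.ones((h, w, 3)), np.full((h, w), 0.3)
conv_img, conv_z = np.zeros((h, w, 3)), np.full((h, w), 0.7)
img, z = zbuffer_composite(flow_img, flow_z, conv_img, conv_z)
```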
Item Screen-Space Constraints for Camera Movements: the Virtual Cameraman (Blackwell Publishers Ltd and the Eurographics Association, 1998) Jardillier, Frank; Languenou, Eric
This article presents a virtual cameraman which obtains the whole set of camera movements satisfying user-defined constraints specified in image space and/or constraints on the objects of the scene. This research follows the "Declarative Modelling" approach, which is built on a three-phase modeller concept: description, generation, and result exploration. Our tool is based on a solver using interval arithmetic. The time dimension is treated as another variable, so constraints can be specified for the total duration of the animation or last only for a given amount of time. There is no keyframing and no interpolation; as a result, the solutions obtained are guaranteed to satisfy the specified constraints. Several ways to include the time dimension efficiently are discussed. We claim that the method is simple enough to be implemented easily without the need for any external solver.
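The interval-arithmetic solving that the Virtual Cameraman relies on can be illustrated with a small branch-and-prune sketch: a parameter box is accepted when interval evaluation proves the constraint holds everywhere in it, rejected when it provably fails everywhere, and bisected otherwise. This is a generic illustration of the technique, not the authors' solver; the toy constraint at the end is invented.

```python
# Intervals are (lo, hi) tuples.
def i_add(a, b): return (a[0] + b[0], a[1] + b[1])

def i_mul(a, b):
    ps = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(ps), max(ps))

def solve(f, box, lo, hi, eps=1e-3):
    """Return sub-intervals of `box` on which f is guaranteed in [lo, hi]."""
    fa, fb = f(box)
    if fa >= lo and fb <= hi:        # whole box provably satisfies constraint
        return [box]
    if fb < lo or fa > hi:           # whole box provably violates it
        return []
    if box[1] - box[0] < eps:        # too small to decide; drop conservatively
        return []
    mid = 0.5 * (box[0] + box[1])
    return solve(f, (box[0], mid), lo, hi, eps) + \
           solve(f, (mid, box[1]), lo, hi, eps)

# Toy constraint: keep f(x) = x*x inside [0.25, 1.0] for x in [0, 2].
accepted = solve(lambda b: i_mul(b, b), (0.0, 2.0), 0.25, 1.0)
```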
Item Using Wavefront Tracing for the Visualization and Optimization of Progressive Lenses (Blackwell Publishers Ltd and the Eurographics Association, 1998) Loos, J.; Slusallek, Ph.; Seidel, H.-P.
Progressive addition lenses are a relatively new approach to compensating for defects of the human visual system. While traditional spectacles use rotationally symmetric lenses, progressive lenses require the specification of free-form surfaces. This poses difficult problems for optimal design and its visual evaluation. This paper presents two new techniques for the visualization of optical systems and the optimization of progressive lenses. Both are based on the same wavefront tracing approach to accurately evaluate the refraction properties of complex optical systems. We use the results of wavefront tracing to continuously re-focus the eye during rendering. Together with distribution ray tracing, this yields high-quality images that accurately simulate the visual quality of an optical system. The design of progressive lenses is difficult due to the trade-off between the desired properties of the lens and unavoidable optical errors, such as astigmatism and distortion. We use wavefront tracing to derive an accurate error functional describing the desired properties and the optical error across a lens. Minimizing this error yields optimal free-form lens surfaces. While the basic approach is much more general, in this paper we describe its application to the particular problem of designing and evaluating progressive lenses, and we demonstrate the benefits of the new approach with several example images.

Item Preface (Blackwell Publishers Ltd and the Eurographics Association, 1998) Ferreira, F.; Goebel, Martin

Item Simulating Wood Using a Voxel Approach (Blackwell Publishers Ltd and the Eurographics Association, 1998) Buchanan, John W.
In this paper we present a technique for generating three-dimensional wood textures using a regular texture array. Currently, three-dimensional wood textures are generated using procedural textures. Procedural textures are flexible and require little memory; however, modeling local artifacts such as knots is difficult with the procedural approach. By representing the wood as a texture array and growing the wood in this array, we can easily simulate local phenomena such as knots. Our growth model is an approximation of the biological model and assumes that there are several similar wood cells per array element. This means that we can model artifacts that are defined by groups of similar cells. In particular, our model is well suited to the modeling of softwoods.

Item Space Discretization for Efficient Human Navigation (Blackwell Publishers Ltd and the Eurographics Association, 1998) Bandi, Srikanth; Thalmann, Daniel
There is a large body of research on motion control of legs in human models; however, these methods require the specification of global paths in which to move. A method for automatically computing a global motion path for a human in a 3D environment of obstacles is presented. Object space is discretized into a 3D grid of uniform cells, and an optimal path is generated between two points as a discrete cell path. The grid is treated as a graph with orthogonal links of uniform cost, and the A* search method is applied for path finding. By considering only the cells on the upper surfaces of objects on which a human walks, a large portion of the grid is discarded from the search space, thus boosting efficiency. This is expected to serve as a higher-level mechanism for various local foot placement methods in human animation.
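The grid search described above is standard A* with unit-cost orthogonal moves. A minimal sketch, assuming the walkable cells (for example, the upper-surface cells the abstract mentions) have already been collected into a set; the data layout is illustrative.

```python
import heapq

def astar(walkable, start, goal):
    """A* over a uniform cell grid. walkable: set of (x, y, z) cells."""
    def h(c):  # Manhattan distance is admissible for unit-cost moves
        return sum(abs(a - b) for a, b in zip(c, goal))
    open_heap = [(h(start), 0, start)]
    came_from, g = {}, {start: 0}
    while open_heap:
        _, cost, cell = heapq.heappop(open_heap)
        if cell == goal:
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        if cost > g[cell]:
            continue                         # stale heap entry
        x, y, z = cell
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0),
                           (0,-1,0), (0,0,1), (0,0,-1)):
            nxt = (x + dx, y + dy, z + dz)
            if nxt in walkable and cost + 1 < g.get(nxt, float('inf')):
                g[nxt] = cost + 1
                came_from[nxt] = cell
                heapq.heappush(open_heap, (g[nxt] + h(nxt), g[nxt], nxt))
    return None                              # no path exists

# Usage on a three-cell corridor:
cells = {(0, 0, 0), (1, 0, 0), (2, 0, 0)}
print(astar(cells, (0, 0, 0), (2, 0, 0)))   # [(0,0,0), (1,0,0), (2,0,0)]
```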
Item A Two-Pass Hardware-Based Method for Hierarchical Radiosity (Blackwell Publishers Ltd and the Eurographics Association, 1998) Martin, I.; Pueyo, X.; Tost, D.
Finite element methods for radiosity aim to compute global illumination solutions efficiently. However, these methods are not suitable for obtaining high-quality images due to their lack of error control. Two-pass methods achieve that level of quality by computing illumination at each pixel, thus introducing a high computational overhead. We present a two-pass method for radiosity that produces high-quality images while avoiding most of the per-pixel computations. The method computes a coarse hierarchical radiosity solution and then performs a second pass using current graphics hardware accelerators to generate illumination as high-definition textures.

Item Interactive 3D Morphing (Blackwell Publishers Ltd and the Eurographics Association, 1998) Bao, Hujun; Peng, Qunsheng
A new 3D morphing algorithm for polyhedral objects with the same genus is presented in this paper. Our main contribution is an efficient and general algorithm for setting up the vertex correspondence between the polyhedra. The proposed algorithm first interactively partitions the two original polyhedra into the same number of polygonal patches; the patch correspondence is established during partitioning. Each pair of corresponding patches is then parametrized and resampled using harmonic maps. A feature polyhedron is finally constructed for each original polyhedron, and the vertex correspondence between each original polyhedron and its feature polyhedron is established automatically by a clustering scheme. The shape transition between the original polyhedral models is accomplished by composing three successive transformations, using the feature polyhedra as bridges. Experimental results demonstrate that our algorithm is very robust and can deal with very general cases, including polyhedra of non-zero genus.

Item The Art of Knitted Fabrics, Realistic & Physically Based Modelling Of Knitted Patterns (Blackwell Publishers Ltd and the Eurographics Association, 1998) Meißner, M.; Eberhardt, B.
In this paper we present a system for using three-dimensional computer graphics in garment design. The system is capable of visualizing the "real", i.e. physically correct, appearance of a knitted fabric. Fast visualization of a physically correct micro-structure garment is of crucial importance in the textile industry, since it enables fast and less expensive product development. The system may be used either in the design of new products or in teaching the art of knitted fabrics. Our system directly uses the machine code produced by the design system for knitting machines. A physical model, a particle system, is used to calculate the dynamics of the micro-structure of the knitted garment.

Item Maximum Intensity Projection Using Splatting in Sheared Object Space (Blackwell Publishers Ltd and the Eurographics Association, 1998) Cai, Wenli; Sakas, Georgios
In this paper we present a new Maximum Intensity Projection (MIP) algorithm, implemented using splatting in a shear-warp context. The algorithm renders a MIP image by first splatting each voxel onto two intermediate spaces, called the "worksheet" and the "shear image". Then, the maximum value is evaluated between the worksheet and the shear image. Finally, the shear image is warped onto the screen to generate the result image. Different footprints implementing different quality modes are discussed. In addition, we introduce a line-encoded indexing speed-up method to obtain interactive speed. The algorithm allows for a quantitative, predictable trade-off between interactivity and image quality.

Item Perception Based Color Image Difference (Blackwell Publishers Ltd and the Eurographics Association, 1998) Neumann, Laszlo; Matkovic, Kresimir; Purgathofer, Werner
A good image metric is often needed in digital image synthesis. It can be used to check the convergence behavior of progressive methods, to compare images rendered using various rendering methods, and so on. Since images are rendered to be observed by humans, an image metric should correspond to human perception as well. We propose here a new algorithm which operates in the original image space; there is no need for Fourier or wavelet transforms. Furthermore, the new metric is viewing-distance dependent and uses the contrast sensitivity function. The main idea is to place a number of various rectangles in the images and to compute the CIE LUV average color difference between corresponding rectangles. Errors are then weighted according to the rectangle size and the contrast sensitivity function.
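The core of the metric above is easy to sketch: sample rectangles, average the colors inside each, and take the CIE LUV distance between the averages. The sketch below assumes the inputs are already converted to LUV space and uses a plain rectangle-area weight as a placeholder for the paper's contrast-sensitivity weighting.

```python
import random
import numpy as np

def rect_difference(luv_a, luv_b, n_rects=500, seed=0):
    """Rectangle-sampling image difference on two (H, W, 3) LUV images.

    The area weight below is a placeholder, not the authors' CSF formula.
    """
    rng = random.Random(seed)
    h, w, _ = luv_a.shape
    total, weight_sum = 0.0, 0.0
    for _ in range(n_rects):
        x0, y0 = rng.randrange(w), rng.randrange(h)
        x1, y1 = rng.randrange(x0, w) + 1, rng.randrange(y0, h) + 1
        mean_a = luv_a[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
        mean_b = luv_b[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
        d = float(np.linalg.norm(mean_a - mean_b))  # Delta E*uv of the means
        wgt = (x1 - x0) * (y1 - y0)                 # placeholder size weight
        total += wgt * d
        weight_sum += wgt
    return total / weight_sum
```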
Item A Bernstein-Bezier Based Approach to Soft Tissue Simulation (Blackwell Publishers Ltd and the Eurographics Association, 1998) Roth, S.H.; Gross, Markus H.; Turello, Silvio; Carls, Friedrich R.
This paper discusses a finite element approach for volumetric soft tissue modeling in the context of facial surgery simulation. We elaborate on the underlying physics and address some computational aspects of the finite element discretization. In contrast to existing approaches, speed is not our first concern; we strive for the highest possible accuracy of simulation. We therefore propose an extension of linear elasticity towards incompressibility and nonlinear material behavior, in order to describe the complex properties of human soft tissue more accurately. Furthermore, we incorporate higher-order interpolation functions using a Bernstein-Bezier formulation, which has various advantageous properties, such as its integral polynomial form of arbitrary degree, efficient subdivision schemes, and suitability for geometric modeling and rendering. In addition, the use of tetrahedral finite elements does not put any restriction on the geometry of the simulated volumes. Experimental results obtained from a synthetic block of soft tissue and from the Visible Human Data Set illustrate the performance of the envisioned model.
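The Bernstein-Bezier form used above is, on tetrahedra, the barycentric generalization of the familiar 1D Bernstein basis. As a minimal 1D illustration (not the paper's volumetric elements), here is de Casteljau's algorithm, whose repeated linear interpolation underlies the efficient subdivision schemes the abstract mentions.

```python
def de_casteljau(control_points, t):
    """Evaluate a Bezier curve of arbitrary degree at parameter t in [0, 1].

    control_points: list of (x, y) tuples in Bernstein-Bezier form.
    """
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        # One round of linear interpolation between consecutive points.
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# A cubic segment: the curve interpolates its first and last control points.
curve = [(0.0, 0.0), (1.0, 2.0), (2.0, 2.0), (3.0, 0.0)]
assert de_casteljau(curve, 0.0) == (0.0, 0.0)
assert de_casteljau(curve, 1.0) == (3.0, 0.0)
```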
Item Frontiers in User-Computer Interaction (Blackwell Publishers Ltd and the Eurographics Association, 1998) Van Dam, Andries
In this age of (near-)adequate computing power, the power and usability of the user interface are as key to an application's success as its functionality. Most of the code in modern desktop productivity applications resides in the user interface. But despite its centrality, the user interface field is currently in a rut: the WIMP paradigm (Windows, Icons, Menus, Point-and-click GUI based on keyboard and mouse) has evolved little since it was pioneered by Xerox PARC in the early '70s. Computer and display form factors will change dramatically in the near future, and new kinds of interaction devices will soon become available. Desktop environments will be enriched not only with PDAs such as the Newton and Palm Pilot, but also with wearable computers and large-screen displays produced by new projection technology, including office-based immersive virtual reality environments. On the input side, we will finally have speech recognition and force-feedback devices. Thus we can look forward to user interfaces that are dramatically more powerful and better matched to human sensory capabilities than those dependent solely on keyboard and mouse. 3D interaction widgets controlled by mice or other interaction devices with three or more degrees of freedom are a natural evolution from their two-dimensional WIMP counterparts, and they can decrease the cognitive distance between widget and task for many tasks that are intrinsically 3D, such as scientific visualization and MCAD. More radical post-WIMP UIs are needed for immersive virtual reality, where keyboard and mouse are absent. Immersive VR provides good driving applications for developing post-WIMP UIs based on multimodal interaction that involve more of our senses by combining the use of gesture, speech, and haptics.

Item A New Approach for Direct Manipulation of Free-Form Curve (Blackwell Publishers Ltd and the Eurographics Association, 1998) Zheng, J.M.; Chan, K.W.; Gibson, I.
There is an increasing demand for more intuitive methods for creating and modifying free-form curves and surfaces in CAD modeling systems. Such methods should be based not only on changes to the mathematical parameters, such as control points, knots, and weights, but also on the user's specified constraints and shapes. This paper presents a new approach for directly manipulating the shape of a free-form curve, leading to better control of the curve deformation and a more intuitive CAD modeling interface. The user's intended deformation of a curve is automatically converted into modifications of the corresponding NURBS control points and knot sequence of the curve. The algorithm for this approach includes curve elevation, knot refinement, control point repositioning, and knot removal. Several examples in this paper demonstrate that the proposed method can be used to deform a NURBS curve into the desired shape. Currently, the algorithm concentrates on purely geometric considerations; further work will include the effect of material properties.

Item Mass-Spring Simulation using Adaptive Non-Active Points (Blackwell Publishers Ltd and the Eurographics Association, 1998) Howlett, P.; Hewitt, W.T.
This paper introduces an adaptive component to a mass-spring system as used in the modelling of cloth for computer animation. The new method introduces non-active points to the model, which adapt the shape of the cloth where inaccuracies occur. This improves on conventional uniform mass-spring systems by producing more visually pleasing results when simulating the drape of cloth over irregular objects. The computational cost of simulation is decreased by reducing the complexity of collision handling and enabling the use of coarser mass-spring networks.
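For context, the uniform mass-spring model that the adaptive scheme above builds on can be sketched in a few lines. This shows one symplectic Euler step of a generic particle/spring network; it does not reproduce the paper's non-active-point logic, and the constants are illustrative.

```python
import numpy as np

def step(pos, vel, springs, rest, mass=1.0, k=50.0, damping=0.02,
         gravity=(0.0, -9.81, 0.0), dt=1e-3):
    """One symplectic Euler step of a particle/spring cloth network.

    pos, vel: (N, 3) arrays; springs: (M, 2) int array of particle pairs;
    rest: (M,) array of rest lengths.
    """
    force = np.tile(np.asarray(gravity) * mass, (len(pos), 1))
    d = pos[springs[:, 1]] - pos[springs[:, 0]]        # spring vectors
    length = np.linalg.norm(d, axis=1, keepdims=True)
    # Hooke force along each spring; guard against zero-length springs.
    f = k * (length - rest[:, None]) * d / np.maximum(length, 1e-9)
    np.add.at(force, springs[:, 0], f)                 # equal and opposite
    np.add.at(force, springs[:, 1], -f)
    vel = (vel + dt * force / mass) * (1.0 - damping)
    return pos + dt * vel, vel
```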
Item Conservative Visibility and Strong Occlusion for Viewspace Partitioning of Densely Occluded Scenes (Blackwell Publishers Ltd and the Eurographics Association, 1998) Cohen-Or, Daniel; Fibich, Gadi; Halperin, Dan; Zadicario, Eyal
Computing the visibility of outdoor scenes is often much harder than that of indoor scenes. A typical urban scene, for example, is densely occluded, and it is effective to precompute its visibility space, since from a given point only a small fraction of the scene is visible. The difficulty is that although the majority of objects are hidden, some parts might be visible at a distance in an arbitrary location, and it is not clear how to detect them quickly. In this paper we present a method to partition the viewspace into cells, each containing a conservative superset of the visible objects. For a given cell the method tests the visibility of all the objects in the scene. For each object it searches for a strong occluder, which guarantees that the object is not visible from any point within the cell. We show analytically that in a densely occluded scene the vast majority of objects are strongly occluded, and the overhead of using conservative visibility (rather than exact visibility) is small. These results are further supported by our experimental results. We also analyze the cost of the method and discuss its effectiveness.

Item Dithered Color Quantization (Blackwell Publishers Ltd and the Eurographics Association, 1998) Buhmann, J. M.; Fellner, Dieter W.; Held, M.; Ketterer, J.; Puzicha, J.
Image quantization and digital halftoning are fundamental problems in computer graphics which arise when displaying high-color images on non-truecolor devices. Both steps are generally performed sequentially and, in most cases, independently of each other. Color quantization with a pixel-wise defined distortion measure and the dithering process with its local neighborhood optimize different quality criteria or, frequently, follow a heuristic without reference to any quality measure. In this paper we propose a new method to simultaneously quantize and dither color images. The method is based on a rigorous cost-function approach which optimizes a quality criterion derived from a generic model of human perception. A highly efficient optimization algorithm based on a multiscale method is developed for the dithered color quantization cost function. The quality criterion and the optimization algorithms are evaluated on a representative set of artificial and real-world images as well as on a collection of icons. A significant improvement in image quality is observed compared to standard color reduction approaches.

Item Progressive Iso-Surface Extraction from Hierarchical 3D Meshes (Blackwell Publishers Ltd and the Eurographics Association, 1998) Grosso, Roberto; Ertl, Thomas
A multiresolution data decomposition offers a fundamental framework supporting compression, progressive transmission, and level-of-detail (LOD) control for large two- or three-dimensional data sets discretized on complex meshes. In this paper we extend a previously presented algorithm for 3D mesh reduction of volume data, based on multilevel finite element approximations, in two ways. First, we present efficient data structures which allow approximations of the volume data to be constructed incrementally at lower or higher resolutions at interactive rates. An abstract description of the mesh hierarchy in terms of a coarse base mesh and a set of integer records offers high compression potential, which is essential for efficient storage and progressive network transmission. Based on this mesh hierarchy, we then develop a new progressive iso-surface extraction algorithm. For a given iso-value, the corresponding iso-surface can be computed at different levels of resolution; changing to a finer or coarser resolution updates the surface only in those regions where the volume data is being refined or coarsened. Our approach allows very large scalar fields, such as medical data sets, to be visualized interactively, where conventional algorithms would require at least an order of magnitude more resources.
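The progressive update idea in the last abstract can be sketched as a per-cell cache: when the hierarchy refines or coarsens some cells, only those cells are re-extracted. The extract_cell callback below stands in for a marching-cubes-style per-cell extractor and is hypothetical, not the authors' interface.

```python
def refine_isosurface(cache, changed_cells, extract_cell, iso_value):
    """cache: dict cell_id -> list of triangles extracted from that cell."""
    for cell in changed_cells:
        cache[cell] = extract_cell(cell, iso_value)   # redo dirty cells only
    # The full surface is the union of all cached per-cell fragments.
    return [tri for tris in cache.values() for tri in tris]

# Usage: after the mesh hierarchy refines cells 12 and 13, only those two
# cells are re-extracted; every other cell keeps its cached triangles.
cache = {}
surface = refine_isosurface(cache, [12, 13],
                            lambda cell, iso: [("tri", cell, iso)], 0.5)
```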