35-Issue 7
Browsing 35-Issue 7 by Title
Now showing 1 - 20 of 52
Item: 3D Body Shapes Estimation from Dressed-Human Silhouettes (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Song, Dan; Tong, Ruofeng; Chang, Jian; Yang, Xiaosong; Tang, Min; Zhang, Jian Jun
Editors: Eitan Grinspun, Bernd Bickel, and Yoshinori Dobashi
Abstract: Estimation of 3D body shapes from dressed-human photos is an important but challenging problem in virtual fitting. We propose a novel automatic framework to efficiently estimate 3D body shapes under clothes. We construct a database of 3D naked and dressed body pairs, based on which we learn how to predict 3D positions of body landmarks (which further constrain a parametric human body model) automatically from dressed-human silhouettes. Critical vertices are selected on 3D registered human bodies as landmarks to represent body shapes, which avoids the time-consuming vertex-correspondence search required for parametric body reconstruction. Our method estimates 3D body shapes from dressed-human silhouettes within 4 seconds, while the fastest previously reported method needs 1 minute. In addition, our estimation error is within the size tolerance of the clothing industry. We dress 6042 naked bodies with 3 sets of common clothes using a physically based cloth simulation technique. To the best of our knowledge, we are the first to construct such a database of 3D naked and dressed body pairs, and it may contribute to the areas of human body shape estimation and cloth simulation.

Item: Adaptive Bas-relief Generation from 3D Object under Illumination (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Zhang, Yu-Wei; Zhang, Caiming; Wang, Wenping; Chen, Yanzhao
Editors: Eitan Grinspun, Bernd Bickel, and Yoshinori Dobashi
Abstract: Bas-relief is designed to provide 3D perception to viewers under illumination. For the problem of bas-relief generation from a 3D object, most existing methods ignore the influence of illumination on bas-relief appearance. In this paper, we propose a novel method that adaptively generates bas-reliefs with respect to illumination conditions. Given a 3D object and its target appearance, our method finds an adaptive surface that preserves the appearance of the input. We validate our approach through a variety of applications. Experimental results indicate that the proposed approach is effective in producing bas-reliefs with the desired appearance under illumination.
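To make the role of illumination in the entry above concrete, here is a minimal, hedged sketch (not the paper's algorithm) of the kind of appearance term an illumination-aware relief optimization might compare against a target image: Lambertian shading of a height-field relief under a directional light. All names and parameters are illustrative.

```python
# Minimal sketch (not the paper's method): evaluating the shaded appearance of a
# height-field relief under a directional light, the kind of appearance term an
# illumination-aware bas-relief optimization could compare against a target image.
import numpy as np

def lambertian_shading(height, light_dir, pixel_size=1.0):
    """Shade a height field h(x, y) with a directional light (Lambertian)."""
    # Surface normals from finite differences of the height field.
    gy, gx = np.gradient(height, pixel_size)
    normals = np.dstack([-gx, -gy, np.ones_like(height)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)

    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    return np.clip(normals @ light, 0.0, None)  # clamp back-facing shading to 0

# Toy usage: a bumpy relief lit from the upper left.
h = np.fromfunction(lambda y, x: 0.5 * np.sin(x / 8.0) * np.cos(y / 8.0), (64, 64))
image = lambertian_shading(h, light_dir=(-1.0, -1.0, 1.0))
print(image.shape, image.min(), image.max())
```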
Item: Aesthetic Rating and Color Suggestion for Color Palettes (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Kita, Naoki; Miyata, Kazunori
Editors: Eitan Grinspun, Bernd Bickel, and Yoshinori Dobashi
Abstract: A model to rate color combinations that considers human aesthetic preferences is proposed. The proposed method does not assume that a color palette has a specific number of colors, i.e., the input is not restricted to two-, three-, or five-color palettes. We extract features from a color palette whose size does not depend on the number of colors in the palette. The proposed rating prediction model is trained using a human color preference dataset. The model allows a user to extend a color palette, e.g., from three colors to five or seven colors, while retaining color harmony. In addition, we present a color search scheme for a given palette and a customized version of the proposed model for a specific color tone. We demonstrate that the proposed model can also be applied to various palette-based applications.

Item: Anaglyph Caustics with Motion Parallax (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Lancelle, Marcel; Martin, Tobias; Solenthaler, Barbara; Gross, Markus
Editors: Eitan Grinspun, Bernd Bickel, and Yoshinori Dobashi
Abstract: In this paper we present a method to model and simulate a lens such that its caustic reveals a stereoscopic 3D image when viewed through anaglyph glasses. By interpreting lens dispersion as stereoscopic disparity, our method optimizes the shape and arrangement of the prisms constituting the lens, such that the resulting anaglyph caustic corresponds to a given input image defined by intensities and disparities. In addition, a slight change of the lens's distance to the screen causes a 3D parallax effect that can also be perceived without glasses. Our proposed relaxation method carefully balances the resulting pixel intensity and disparity errors, while taking the subsequent physical fabrication process into account. We demonstrate our method on a representative set of input images and evaluate the anaglyph caustics using multi-spectral photon tracing. We further show the fabrication of prototype lenses with a laser cutter as a proof of concept.

Item: Anisotropic Superpixel Generation Based on Mahalanobis Distance (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Cai, Yiqi; Guo, Xiaohu
Editors: Eitan Grinspun, Bernd Bickel, and Yoshinori Dobashi
Abstract: Superpixels have been widely used as a preprocessing step in various computer vision tasks. Spatial compactness and color homogeneity are the two key factors determining the quality of the superpixel representation. In this paper, these two objectives are considered separately and anisotropic superpixels are generated to better adapt to local image content. We develop a unimodular Gaussian generative model to guide the color homogeneity within a superpixel by learning local pixel color variations. It turns out that maximizing the log-likelihood of our generative model is equivalent to solving a Centroidal Voronoi Tessellation (CVT) problem. Moreover, we provide a theoretical guarantee that the CVT result is invariant to affine illumination changes, which makes our anisotropic superpixel generation algorithm well suited for image/video analysis in varying illumination environments. The effectiveness of our method in image/video superpixel generation is demonstrated through comparison with other state-of-the-art methods.
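As a rough illustration of the Mahalanobis-distance idea in the entry above, the following sketch (illustrative only, not the authors' implementation) shows a single Lloyd/CVT-style assignment step in which each pixel feature is assigned to the cluster with the smallest Mahalanobis distance under that cluster's covariance, which is what makes the resulting regions anisotropic.

```python
# Minimal sketch: one anisotropic assignment step. Features could be (x, y, L, a, b)
# per pixel; here they are random stand-ins. Cluster covariances are assumed given.
import numpy as np

def mahalanobis_assign(features, means, covariances):
    """features: (N, D); means: (K, D); covariances: (K, D, D) -> labels (N,)."""
    dists = np.empty((features.shape[0], means.shape[0]))
    for k, (mu, cov) in enumerate(zip(means, covariances)):
        diff = features - mu                  # (N, D)
        inv_cov = np.linalg.inv(cov)          # the anisotropy lives in this matrix
        dists[:, k] = np.einsum('nd,de,ne->n', diff, inv_cov, diff)
    return np.argmin(dists, axis=1)

# Toy usage with random 5-D pixel features and two clusters.
rng = np.random.default_rng(0)
feats = rng.normal(size=(1000, 5))
means = rng.normal(size=(2, 5))
covs = np.stack([np.eye(5), 2.0 * np.eye(5)])
print(np.bincount(mahalanobis_assign(feats, means, covs)))
```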
Item: Appearance Harmonization for Single Image Shadow Removal (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Ma, Li-Qian; Wang, Jue; Shechtman, Eli; Sunkavalli, Kalyan; Hu, Shi-Min
Editors: Eitan Grinspun, Bernd Bickel, and Yoshinori Dobashi
Abstract: Shadow removal is a challenging problem, and previous approaches often produce de-shadowed regions that are visually inconsistent with the rest of the image. We propose an automatic shadow region harmonization approach that makes the appearance of a de-shadowed region (produced using any previous technique) compatible with the rest of the image. We use a shadow-guided patch-based image synthesis approach that reconstructs the shadow region using patches sampled from non-shadowed regions. This result is then refined based on the reconstruction confidence to handle unique textures. Qualitative comparisons over a wide range of images and a quantitative evaluation on a benchmark dataset show that our technique significantly improves upon the state of the art.

Item: Automatic Modeling of Urban Facades from Raw LiDAR Point Data (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Wang, Jun; Xu, Yabin; Remil, Oussama; Xie, Xingyu; Ye, Nan; Wei, Mingqiang
Editors: Eitan Grinspun, Bernd Bickel, and Yoshinori Dobashi
Abstract: Modeling of urban facades from raw LiDAR point data remains an active research topic due to its challenging nature. In this paper, we propose an automatic yet robust 3D modeling approach for urban facades from raw LiDAR point clouds. The key observation is that building facades often exhibit repetitions and regularities. We therefore formulate repetition detection as an energy optimization problem with a global energy function that balances geometric error, regularity, and the complexity of facade structures. As a result, repetitive structures are extracted robustly even in the presence of noise and missing data. By registering repetitive structures, missing regions are completed and the associated point data of the structures are consolidated. Subsequently, we detect the potential design intents (i.e., geometric constraints) within structures and perform constrained fitting to obtain precise structure models. Furthermore, we apply structure alignment optimization to enforce position regularities and employ repetitions to infer missing structures. We demonstrate how the quality of raw LiDAR data can be improved by exploiting data redundancy and discovering high-level structural information (regularity and symmetry). We evaluate our modeling method on a variety of raw LiDAR scans to verify its robustness and effectiveness.
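The structure-alignment idea in the facade entry above can be illustrated with a deliberately simplified 1-D stand-in (an assumption for illustration, not the paper's formulation): snapping detected element positions, such as window centers along one floor, onto a regular grid o + i*s by least squares, which enforces the kind of position regularity described.

```python
# Minimal sketch: fit a regular spacing to noisy 1-D positions of repeated elements.
import numpy as np

def snap_to_regular_spacing(positions):
    x = np.sort(np.asarray(positions, dtype=float))
    idx = np.arange(len(x))
    A = np.column_stack([np.ones_like(x), idx])       # model: x_i ~ o + i * s
    (offset, spacing), *_ = np.linalg.lstsq(A, x, rcond=None)
    return offset + idx * spacing, offset, spacing

# Toy usage: noisy window positions with one badly measured element.
measured = [1.02, 3.10, 4.95, 7.40, 9.01]
snapped, o, s = snap_to_regular_spacing(measured)
print(f"offset={o:.2f}, spacing={s:.2f}, snapped={np.round(snapped, 2)}")
```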
Item: Decoupled Space and Time Sampling of Motion and Defocus Blur for Unified Rendering of Transparent and Opaque Objects (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Widmer, Sven; Wodniok, Dominik; Thul, Daniel; Guthe, Stefan; Goesele, Michael
Editors: Eitan Grinspun, Bernd Bickel, and Yoshinori Dobashi
Abstract: We propose a unified rendering approach that jointly handles motion and defocus blur for transparent and opaque objects at interactive frame rates. Our key idea is to create, in an initial rasterization step, a sampled representation of all parts of the scene geometry that are potentially visible at any point in time for the duration of a frame. We store the resulting temporally-varying fragments (t-fragments) in a bounding volume hierarchy which is rebuilt every frame using a fast spatial median construction algorithm. This makes our approach suitable for interactive applications with dynamic scenes and animations. Next, we perform spatial sampling to determine all t-fragments that intersect a specific viewing ray at any point in time. Viewing rays are sampled according to the lens uv-sampling for depth-of-field effects. In a final temporal sampling step, we evaluate the predetermined viewing-ray/t-fragment intersections for one or multiple points in time. This allows us to incorporate all standard shading effects, including transparency. We describe the overall framework, present our GPU implementation, and evaluate our rendering approach with respect to scalability, quality, and performance.

Item: Direct Shape Optimization for Strengthening 3D Printable Objects (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Zhou, Yahan; Kalogerakis, Evangelos; Wang, Rui; Grosse, Ian R.
Editors: Eitan Grinspun, Bernd Bickel, and Yoshinori Dobashi
Abstract: Recently there has been an increasing demand for software that can help designers create functional 3D objects with the required physical strength. We introduce a generic and extensible method that directly optimizes a shape subject to physical and geometric constraints. Given an input shape, our method directly optimizes its input mesh representation until it can withstand specified external forces, while remaining similar to the original shape. Our method performs physics simulation and shape optimization together in a unified framework, where the physics simulator is an integral part of the optimizer. We employ geometric constraints to preserve surface details and shape symmetry, and adapt a second-order method with analytic gradients to improve convergence and computation time. Our method provides several advantages over previous work, including the ability to handle general shape deformations, preservation of surface details, and incorporation of user-defined constraints. We demonstrate the effectiveness of our method on a variety of printable 3D objects through detailed simulations as well as physical validations.

Item: Efficient Modeling of Entangled Details for Natural Scenes (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Guérin, Eric; Galin, Eric; Grosbellet, François; Peytavie, Adrien; Génevaux, Jean-David
Editors: Eitan Grinspun, Bernd Bickel, and Yoshinori Dobashi
Abstract: Digital landscape realism often comes from the multitude of details that are hard to model, such as fallen leaves, rock piles, or entangled fallen branches. In this article, we present a method for augmenting natural scenes with a huge amount of details such as grass tufts, stones, leaves, or twigs. Our approach takes advantage of the observation that those details can be approximated by replications of a few similar objects and therefore relies on mass instancing. We propose an original structure, the Ghost Tile, that stores a huge number of overlapping candidate objects in a tile, along with a pre-computed collision graph. Details are created by traversing the scene with the Ghost Tile and generating instances according to user-defined density fields, which allows sculpting layers and piles of entangled objects while providing control over their density and distribution.
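A minimal sketch of the mass-instancing idea described in the entry above, under assumed data structures (the candidate list, collision graph, and density callable are hypothetical stand-ins, not the paper's Ghost Tile format): candidates are accepted according to a density field, and a pre-computed collision graph blocks any candidate that overlaps an already accepted one.

```python
# Minimal sketch: select non-overlapping detail instances from precomputed candidates.
import numpy as np

def select_instances(candidates, collision_graph, density, rng):
    """candidates: list of (x, y); collision_graph: dict index -> set of indices that
    overlap it; density: callable (x, y) -> acceptance probability in [0, 1]."""
    accepted = []
    blocked = set()
    for i in rng.permutation(len(candidates)):
        if i in blocked:
            continue
        x, y = candidates[i]
        if rng.random() < density(x, y):
            accepted.append(int(i))
            blocked |= collision_graph.get(int(i), set())  # precomputed overlaps
    return accepted

# Toy usage: three candidates, the first two overlap each other.
cands = [(0.1, 0.1), (0.12, 0.11), (0.8, 0.7)]
graph = {0: {1}, 1: {0}, 2: set()}
print(select_instances(cands, graph, density=lambda x, y: 0.9,
                       rng=np.random.default_rng(1)))
```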
Item: Efficient Multi-image Correspondences for On-line Light Field Video Processing (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Dąbała, Łukasz; Ziegler, Matthias; Didyk, Piotr; Zilly, Frederik; Keinert, Joachim; Myszkowski, Karol; Seidel, Hans-Peter; Rokita, Przemysław; Ritschel, Tobias
Editors: Eitan Grinspun, Bernd Bickel, and Yoshinori Dobashi
Abstract: Light field videos express the entire visual information of an animated scene, but their sheer size typically makes capture, processing, and display an off-line process, i.e., the time between initial capture and final display is far from real-time. In this paper we propose a solution for one of the key bottlenecks in such a processing pipeline: reliable depth reconstruction, possibly for many views. This is enabled by a novel correspondence algorithm converting the video streams from a sparse array of off-the-shelf cameras into an array of animated depth maps. The algorithm is based on a generalization of the classic multi-resolution Lucas-Kanade correspondence algorithm from a pair of images to an entire array. A special inter-image confidence consolidation allows recovery from unreliable matching in some locations and some views. It can be implemented efficiently in massively parallel hardware, allowing for interactive computation. The resulting depth quality as well as the computational performance compares favorably to other state-of-the-art light-field-to-depth approaches, as well as to stereo matching techniques. Another outcome of this work is a dataset of light field videos captured with multiple variants of sparse camera arrays.

Item: An Efficient Structure-Aware Bilateral Texture Filtering for Image Smoothing (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Lin, Ting-Hao; Way, Der-Lor; Shih, Zen-Chung; Tai, Wen-Kai; Chang, Chin-Chen
Editors: Eitan Grinspun, Bernd Bickel, and Yoshinori Dobashi
Abstract: Photos contain well-structured and plentiful visual information, and edges are active and expressive stimuli for human visual perception. However, it is hard to separate structure from details because edge strength and object scale are entirely different concepts. This paper proposes a structure-aware bilateral texture filtering algorithm that removes texture patterns while preserving structures. Our proposed method is simple, fast, and effective in removing textures. Instead of patch shift, smaller patches represent pixels located at structure edges, while original patches represent texture regions. This paper also improves the joint bilateral filter to preserve small structures, and a windowed inherent variation is adapted to distinguish textures from structures when detecting structure edges. The proposed method produces excellent experimental results, which are compared against results of previous studies. Since structure-preserving filtering is a critical operation in many image processing applications, our filter is also demonstrated in many attractive applications, such as seam carving, detail enhancement, and artistic rendering.
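The joint bilateral filter that the entry above builds on can be sketched as follows (a plain grayscale reference implementation for illustration, not the authors' structure-aware variant): the range weight is computed from a guidance image G rather than from the input I itself, and a guidance that suppresses texture while keeping structure edges yields texture smoothing.

```python
# Minimal sketch: joint/cross bilateral filter on a grayscale image.
import numpy as np

def joint_bilateral(I, G, radius=3, sigma_s=2.0, sigma_r=0.1):
    H, W = I.shape
    out = np.zeros_like(I, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    Ipad = np.pad(I, radius, mode='edge')
    Gpad = np.pad(G, radius, mode='edge')
    for y in range(H):
        for x in range(W):
            Iwin = Ipad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            Gwin = Gpad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range weight from the guidance image, not the input image.
            w = spatial * np.exp(-(Gwin - G[y, x])**2 / (2.0 * sigma_r**2))
            out[y, x] = np.sum(w * Iwin) / np.sum(w)
    return out

# Toy usage: smooth a noisy step edge, guided by a clean version of it.
edge = np.tile(np.concatenate([np.zeros(16), np.ones(16)]), (32, 1))
noisy = edge + 0.05 * np.random.default_rng(0).normal(size=edge.shape)
print(np.abs(joint_bilateral(noisy, edge) - edge).mean())
```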
Item: Efficient Volumetric PolyCube-Map Construction (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Fu, Xiao-Ming; Bai, Chong-Yang; Liu, Yang
Editors: Eitan Grinspun, Bernd Bickel, and Yoshinori Dobashi
Abstract: PolyCubes provide compact representations for closed complex shapes and are essential to many computer graphics applications. Existing automatic PolyCube construction methods usually suffer from poor quality or time-consuming computation. In this paper, we provide a highly efficient method to compute volumetric PolyCube-maps. Given an input tetrahedral mesh, we utilize two novel normal-driven volumetric deformation schemes and a PolyCube-allowable mesh segmentation to drive the input to a volumetric PolyCube structure. Our method robustly generates foldover-free and low-distortion PolyCube-maps in practice, and provides flexible control over the number of corners of the PolyCube. Compared with state-of-the-art methods, our method is at least one order of magnitude faster and has better mapping quality. We demonstrate the efficiency and efficacy of our method in PolyCube construction and all-hexahedral meshing on various complex models.

Item: An Error Estimation Framework for Many-Light Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Nabata, Kosuke; Iwasaki, Kei; Dobashi, Yoshinori; Nishita, Tomoyuki
Editors: Eitan Grinspun, Bernd Bickel, and Yoshinori Dobashi
Abstract: The popularity of many-light rendering, which converts complex global illumination computations into a simple sum of the illumination from virtual point lights (VPLs), for predictive rendering has increased in recent years. A huge number of VPLs is usually required for predictive rendering, at the cost of extensive computation time. While previous methods achieve significant speedups by clustering VPLs, none of them can estimate the total error due to clustering. This drawback imposes tedious trial-and-error processes on users to obtain rendered images with reliable accuracy. In this paper, we propose an error estimation framework for many-light rendering. Our method transforms VPL clustering into stratified sampling combined with confidence intervals, which enables the user to estimate the error due to clustering without the costly computation required to sum the illumination from all the VPLs. Our estimation framework is capable of handling arbitrary BRDFs and is accelerated by visibility caching, both of which make our method more practical. The experimental results demonstrate that our method estimates the error much more accurately than the previous clustering method.

Item: An Eulerian Approach for Constructing a Map Between Surfaces With Different Topologies (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Park, Hangil; Cho, Youngjin; Bang, Seungbae; Lee, Sung-Hee
Editors: Eitan Grinspun, Bernd Bickel, and Yoshinori Dobashi
Abstract: 3D objects of the same kind often have different topologies, and finding correspondences between them is important for operations such as morphing, attribute transfer, and shape matching. This paper presents a novel method to find the surface correspondence between topologically different surfaces. The method is characterized by deforming the source polygonal mesh to match the target mesh using intermediate implicit surfaces, and by performing topological surgery at the appropriate locations on the mesh. In particular, we propose a mathematically well-defined way to detect the topology change of the surface by finding the non-degenerate saddle points of the velocity field that tracks the implicit surfaces. We show the effectiveness and possible applications of the proposed method through several experiments.
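As a small aside on the saddle-point criterion in the entry above: a non-degenerate critical point is a saddle when its Hessian is indefinite. The following 2-D toy (an illustration of that definition only, not the paper's 3-D pipeline) classifies the critical point of f(x, y) = x^2 - y^2 at the origin, the kind of point whose crossing of the zero level set signals a topology change.

```python
# Minimal sketch: classify a non-degenerate critical point by its Hessian eigenvalues.
import numpy as np

def classify_critical_point(hessian, tol=1e-9):
    eigvals = np.linalg.eigvalsh(hessian)
    if np.any(np.abs(eigvals) < tol):
        return "degenerate"
    if np.all(eigvals > 0):
        return "minimum"
    if np.all(eigvals < 0):
        return "maximum"
    return "saddle"   # mixed signs: indefinite Hessian

# Hessian of f(x, y) = x^2 - y^2 is constant: diag(2, -2).
print(classify_critical_point(np.diag([2.0, -2.0])))   # -> "saddle"
```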
Item: Facial Feature Exaggeration According to Social Psychology of Face Perception (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Tian, Lihui; Xiao, Shuangjiu
Editors: Eitan Grinspun, Bernd Bickel, and Yoshinori Dobashi
Abstract: We propose a personality-trait exaggeration system that emphasizes the impression of a human face in images, based on multi-level feature learning and exaggeration. These features form the Personality Trait Model (PTM). The abstract level of the PTM is a social-psychology trait of face perception such as amiable, mean, or cute; the concrete level consists of shape and texture features. A training phase learns the multi-level features of faces from different images, and a statistical survey is conducted to label sample images with people's first impressions. From images with the same labels, we capture not only shape features but also texture features to enhance the exaggeration effect. Texture features are expressed as matrices that reflect the depth of facial organs, wrinkles, and so on. In the application phase, input images are exaggerated iteratively using the PTM, and the exaggeration rate of each iteration is constrained to preserve likeness with the original face. Experimental results demonstrate that our system can emphasize chosen social-psychology traits effectively.

Item: Feature-Aware Pixel Art Animation (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Kuo, Ming-Hsun; Yang, Yong-Liang; Chu, Hung-Kuo
Editors: Eitan Grinspun, Bernd Bickel, and Yoshinori Dobashi
Abstract: Pixel art is a modern digital art form in which high-resolution images are abstracted into low-resolution pixelated outputs using concise outlines and reduced color palettes. Creating pixel art is a labor-intensive and skill-demanding process due to the challenge of using limited pixels to represent complicated shapes. Not surprisingly, generating pixel art animation is even harder, given the additional constraints imposed in the temporal domain. Although many powerful editors have been designed to facilitate the creation of still pixel art images, the extension to pixel art animation remains an unexplored direction. Existing systems typically request users to craft individual pixels frame by frame, which is a tedious and error-prone process. In this work, we present a novel animation framework tailored to pixel art images. Our system builds on a conventional key-frame animation framework and state-of-the-art image warping techniques to generate an initial animation sequence. The system then jointly optimizes the prominent feature lines of individual frames with respect to three metrics that capture the quality of the animation sequence in both the spatial and temporal domains. We demonstrate our system by generating visually pleasing animations on a variety of pixel art images, which would otherwise be difficult to achieve with state-of-the-art techniques due to severe artifacts.
Item: Flow Curves: an Intuitive Interface for Coherent Scene Deformation (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Ciccone, Loïc; Guay, Martin; Sumner, Robert W.
Editors: Eitan Grinspun, Bernd Bickel, and Yoshinori Dobashi
Abstract: Effective composition in visual arts relies on the principle of movement, where the viewer's eye is directed along subjective curves to a center of interest. We call these curves subjective because they may span the edges and/or center-lines of multiple objects, and may contain missing portions which are automatically filled by our visual system. By carefully coordinating the shapes of objects in a scene, skilled artists direct the viewer's attention via strong subjective curves. While traditional 2D sketching is a natural fit for this task, current 3D tools are object-centric and do not accommodate coherent deformation of multiple shapes into smooth flows. We address this shortcoming with a new sketch-based interface called Flow Curves which allows coordinating deformation across multiple objects. Core components of our method include an understanding of the principle of flow, algorithms to automatically identify subjective curve elements that may span multiple disconnected objects, and a deformation representation tailored to the view-dependent nature of scene movement. As demonstrated in our video, sketching flow curves requires significantly less time than traditional 3D editing workflows.

Item: Foveated Real-Time Ray Tracing for Head-Mounted Displays (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Weier, Martin; Roth, Thorsten; Kruijff, Ernst; Hinkenjann, André; Pérard-Gayot, Arsène; Slusallek, Philipp; Li, Yongmin
Editors: Eitan Grinspun, Bernd Bickel, and Yoshinori Dobashi
Abstract: Head-mounted displays with dense pixel arrays used for virtual reality applications require high frame rates and low-latency rendering. This forms a challenging use case for any rendering approach. In addition to its ability to generate realistic images, ray tracing offers a number of distinct advantages, but has been held back mainly by its performance. In this paper, we present an approach that significantly improves the image generation performance of ray tracing. This is done by combining foveated rendering based on eye tracking with reprojection rendering using previous frames, in order to drastically reduce the number of new image samples per frame. To reproject samples, a coarse geometry is reconstructed from a G-buffer. Possible errors introduced by this reprojection, as well as parts that are critical to perception, are scheduled for resampling. Additionally, a coarse color buffer is used to provide an initial image that is refined smoothly with more samples where needed. Evaluations and user tests show that our method achieves real-time frame rates, while visual differences compared to fully rendered images are hardly perceivable. As a result, we can ray trace non-trivial static scenes for the Oculus DK2 HMD at 1182x1464 per eye within the VSync limits without perceived visual differences.

Item: Geometrically Based Linear Iterative Clustering for Quantitative Feature Correspondence (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Yan, Qingan; Yang, Long; Liang, Chao; Liu, Huajun; Hu, Ruimin; Xiao, Chunxia
Editors: Eitan Grinspun, Bernd Bickel, and Yoshinori Dobashi
Abstract: A major challenge in feature matching is the lack of objective criteria to determine corresponding points. Recent methods first find match candidates by exploring proximity in descriptor space, and then rely on a ratio-test strategy to determine the final correspondences. However, these measures are heuristic and subjectively exclude many true-positive correspondences that should be matched. In this paper, we propose a novel feature matching algorithm for image collections, which is capable of quantitatively depicting the plausibility of feature matches. We achieve this by exploring the epipolar consistency between feature points and their potential correspondences, and reformulate feature matching as an optimization problem in which the overall geometric inconsistency across the entire image set is minimized. We derive the solution of the optimization problem in a simple linear iterative manner, where a k-means-type approach automatically generates consistent feature clusters. Experiments show that our method produces precise correspondences on a variety of image sets and retrieves many matches that are subjectively rejected by recent methods. We also demonstrate the usefulness of the framework in a structure-from-motion task for denser point cloud reconstruction.
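The epipolar-consistency measure underlying the last entry can be illustrated with the standard Sampson approximation of epipolar error (a textbook multi-view-geometry formula, not the paper's full optimization): a putative match (p, q) is geometrically plausible under a fundamental matrix F when this error is small, which is the kind of quantitative score that replaces a heuristic ratio test.

```python
# Minimal sketch: Sampson distance of a putative match under a fundamental matrix F.
import numpy as np

def sampson_distance(F, p, q):
    """p, q: 2-D points in homogeneous form (3,); F: 3x3 fundamental matrix."""
    Fp = F @ p
    Ftq = F.T @ q
    num = float(q @ F @ p) ** 2
    den = Fp[0]**2 + Fp[1]**2 + Ftq[0]**2 + Ftq[1]**2
    return num / den

# Toy usage: F for a pure horizontal translation, F = [t]_x with t = (1, 0, 0),
# so epipolar lines are horizontal image rows.
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
p = np.array([10.0, 5.0, 1.0])
good = np.array([40.0, 5.0, 1.0])   # same row -> consistent (error 0)
bad = np.array([40.0, 9.0, 1.0])    # different row -> inconsistent
print(sampson_distance(F, p, good), sampson_distance(F, p, bad))
```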