39-Issue 7
Browsing 39-Issue 7 by Title
Now showing 1 - 20 of 54
Item Adjustable Constrained Soft-Tissue Dynamics (The Eurographics Association and John Wiley & Sons Ltd., 2020) Wang, Bohan; Zheng, Mianlun; Barbic, Jernej; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
Physically based simulation is often combined with geometric mesh animation to add realistic soft-body dynamics to virtual characters. This is commonly done using constraint-based simulation, whereby a soft-tissue simulation is constrained to the geometric animation of a subpart (or other proxy representation) of the character. We observe that standard constraint-based simulation suffers from an important flaw that limits the expressiveness of soft-body dynamics. Namely, under correct physics, the frequency and amplitude of soft-tissue dynamics arising from constraints ("inertial amplitude") are coupled and cannot be adjusted independently merely by adjusting the material properties of the model. This means that the space of physically based simulations is inherently limited and cannot capture all effects typically expected by computer animators. For example, animators need the ability to adjust the frequency, inertial amplitude, gravity sag and damping properties of the virtual character independently of each other, as these are the primary visual characteristics of soft-tissue dynamics. We demonstrate that this independence can be achieved by transforming the equations of motion into a non-inertial reference coordinate frame, scaling the resulting inertial forces, and then converting the equations of motion back to the inertial frame. Such scaling of inertia makes it possible for the animator to set the character's inertial amplitude independently of the frequency. We also provide exact controls for the amount of the character's gravity sag and for the damping properties.
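The inertia-scaling idea above can be illustrated with a minimal one-dimensional sketch (the spring model, constants, and anchor animation below are illustrative assumptions, not the paper's FEM formulation): in the anchor's non-inertial frame, the constraint shows up as a fictitious force, and multiplying that force changes the response amplitude while leaving the natural frequency untouched.

```python
import math

def simulate(scale=1.0, steps=2000, dt=1e-3):
    """Mass on a spring whose anchor follows a prescribed animation.
    Working in the anchor's (non-inertial) frame, the fictitious force
    -m * a_anchor(t) drives the oscillation; 'scale' multiplies it,
    changing the induced amplitude without touching stiffness k
    (and hence without changing the natural frequency sqrt(k/m))."""
    m, k = 1.0, 100.0            # mass and spring stiffness (arbitrary)
    u, v = 0.0, 0.0              # displacement/velocity relative to the anchor
    peak = 0.0
    for i in range(steps):
        t = i * dt
        a_anchor = -4.0 * math.sin(2.0 * t)   # anchor acceleration (toy animation)
        f = -k * u - scale * m * a_anchor     # spring force + scaled inertial force
        v += dt * f / m                       # semi-implicit Euler step
        u += dt * v
        peak = max(peak, abs(u))
    return peak
```

Because the system is linear and starts at rest, doubling `scale` doubles the peak response exactly, which is the kind of independent amplitude control the abstract describes.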
In our examples, we use linear blend skinning and pose-space deformation for geometric mesh animation, and the Finite Element Method for soft-body constrained simulation; but our idea of scaling inertial forces is general and applicable to other animation and simulation methods. We demonstrate our technique on several character examples.

Item Automatic Band-Limited Approximation of Shaders Using Mean-Variance Statistics in Clamped Domain (The Eurographics Association and John Wiley & Sons Ltd., 2020) Li, Shi; Wang, Rui; Huo, Yuchi; Zheng, Wenting; Hua, Wei; Bao, Hujun; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
In this paper, we present a new shader smoothing method that improves the quality and generality of band-limited shader programs. Previous work [YB18] treats intermediate values in the program as random variables and uses their mean and variance statistics to smooth shader programs. In this work, we extend this band-limiting framework by exploiting the observation that an intermediate value in the program is usually computed by a complex composition of functions, whose domains and ranges heavily impact the statistics of the smoothed program. Accordingly, we propose three new shader smoothing rules for specific compositions of functions that take the domain and range into account, enabling better mean and variance statistics for the approximations. Aside from continuous functions, textures, such as color textures or normal maps, are treated as discrete functions with limited domain and range, and can therefore be processed similarly in the newly proposed framework.
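The per-function smoothing rules such a framework builds on can be sketched for a Gaussian pixel footprint; the sine rule below is exact for a normally distributed input, and the composition comment is a simplified reading of the approach, not the paper's exact rules.

```python
import math

def mean_var_sin(mu, sigma):
    """Exact mean and variance of sin(X) for X ~ Normal(mu, sigma^2).
    Per-function rules like this replace raw sin() calls when
    band-limiting a shader over a pixel footprint of width ~sigma."""
    m = math.sin(mu) * math.exp(-0.5 * sigma ** 2)
    # E[sin^2 X] = 0.5 * (1 - cos(2 mu) * exp(-2 sigma^2))
    second = 0.5 * (1.0 - math.cos(2.0 * mu) * math.exp(-2.0 * sigma ** 2))
    return m, second - m * m

# Composition: to smooth g(sin(x)), feed the mean/variance of sin(x)
# into g's own smoothing rule instead of evaluating g pointwise.
```

As the footprint shrinks, the smoothed value converges to the pointwise shader; as it grows, the sine averages out and its variance approaches that of a uniform phase.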
Experiments show that, compared with previous work, our method produces smoother shader programs and handles a broader set of shaders.

Item A Bayesian Inference Framework for Procedural Material Parameter Estimation (The Eurographics Association and John Wiley & Sons Ltd., 2020) Guo, Yu; Hasan, Milos; Yan, Lingqi; Zhao, Shuang; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
Procedural material models have been gaining traction in many applications thanks to their flexibility, compactness, and easy editability. We explore the inverse rendering problem of procedural material parameter estimation from photographs, presenting a unified view of the problem in a Bayesian framework. In addition to computing point estimates of the parameters by optimization, our framework uses a Markov chain Monte Carlo approach to sample the space of plausible material parameters, providing a collection of plausible matches that a user can choose from, and efficiently handling both discrete and continuous model parameters. To demonstrate the effectiveness of our framework, we fit procedural models of a range of materials (wall plaster, leather, wood, anisotropic brushed metals and layered metallic paints) to both synthetic and real target images.

Item CLA-GAN: A Context and Lightness Aware Generative Adversarial Network for Shadow Removal (The Eurographics Association and John Wiley & Sons Ltd., 2020) Zhang, Ling; Long, Chengjiang; Yan, Qingan; Zhang, Xiaolong; Xiao, Chunxia; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
In this paper, we propose a novel context and lightness aware Generative Adversarial Network (CLA-GAN) framework for shadow removal, which refines a coarse result into the final shadow-free result in a coarse-to-fine fashion. At the refinement stage, we first obtain a lightness map using an encoder-decoder structure.
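The MCMC sampling described above for procedural material parameter estimation can be sketched as a generic Metropolis loop (the Gaussian likelihood, scalar "renderer", and proposal width are illustrative assumptions, not the paper's setup):

```python
import math
import random

def metropolis(render, target, theta0, n=5000, step=0.1, sigma=0.05):
    """Sample material parameters whose renderings resemble `target`.
    Posterior ~ exp(-(render(theta) - target)^2 / (2 sigma^2));
    the accepted samples form a collection of plausible matches."""
    def log_post(t):
        return -(render(t) - target) ** 2 / (2.0 * sigma ** 2)
    theta = theta0
    lp = log_post(theta)
    samples = []
    for _ in range(n):
        prop = theta + random.gauss(0.0, step)     # symmetric random-walk proposal
        lp_prop = log_post(prop)
        if math.log(random.random()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples

# Toy "renderer": an image statistic depending quadratically on one parameter.
# The posterior has two modes at theta = +/-0.5; starting from 1.0 the chain
# settles around the nearby mode near 0.5.
samples = metropolis(lambda t: t * t, target=0.25, theta0=1.0)
```

Real procedural materials would replace the scalar renderer with image-space feature statistics, and the proposal with mixed moves over discrete and continuous parameters.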
With the lightness map and the coarse result as inputs, a second encoder-decoder refines the final result. Specifically, unlike current methods, which are restricted to pixel-based features from shadow images, we embed a context-aware module into the refinement stage that exploits patch-based features. The embedded module transfers features from non-shadow regions to shadow regions to ensure appearance consistency in the recovered shadow-free images. Since we consider patches, the module can additionally enhance the spatial association and continuity among neighboring pixels. To make the model pay more attention to shadow regions during training, we use dynamic weights in the loss function. Moreover, we augment the inputs of the discriminator by rotating images by different angles and use a rotation adversarial loss during training, which makes the discriminator more stable and robust. Extensive experiments demonstrate the validity of the components of our CLA-GAN framework. Quantitative evaluation on different shadow datasets clearly shows the advantages of our CLA-GAN over state-of-the-art methods.

Item Coarse to Fine: Weak Feature Boosting Network for Salient Object Detection (The Eurographics Association and John Wiley & Sons Ltd., 2020) Zhang, Chenhao; Gao, Shanshan; Pan, Xiao; Wang, Yuting; Zhou, Yuanfeng; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
Salient object detection aims to identify the objects or regions in an image that attract the most visual attention, which benefits many computer vision tasks. Although many methods have been proposed for salient object detection, the problem is still not fully solved, especially when the background scene is complex or the salient object is small. In this paper, we propose a novel Weak Feature Boosting Network (WFBNet) for the salient object detection task.
In the WFBNet, we extract the unpredictable (low-confidence) regions of the image via a polynomial function and enhance the features of these regions through a well-designed weak feature boosting module (WFBM). Starting from a coarse saliency map, we gradually refine it according to the boosted features to obtain the final saliency map, and our network does not need any post-processing step. We conduct extensive experiments on five benchmark datasets using comprehensive evaluation metrics. The results show that our algorithm has considerable advantages over existing state-of-the-art methods.

Item Colorization of Line Drawings with Empty Pupils (The Eurographics Association and John Wiley & Sons Ltd., 2020) Akita, Kenta; Morimoto, Yuki; Tsuruno, Reiji; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
Many studies have recently applied deep learning to the automatic colorization of line drawings. However, it is difficult to paint empty pupils using existing methods, because the convolutional neural networks are trained with pupils that have edges, which are generated from color images using image processing. Most actual line drawings have empty pupils that artists must paint in. In this paper, we propose a novel network model that transfers the pupil details in a reference color image to input line drawings with empty pupils. We also propose a method for accurately and automatically colorizing the eyes.
In this method, eye patches are extracted from a reference color image and automatically added to an input line drawing as color hints using our pupil position estimation network.

Item Computing the Bidirectional Scattering of a Microstructure Using Scalar Diffraction Theory and Path Tracing (The Eurographics Association and John Wiley & Sons Ltd., 2020) Falster, Viggo; Jarabo, Adrián; Frisvad, Jeppe Revall; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
Most models for bidirectional surface scattering by arbitrary explicitly defined microgeometry are either based on geometric optics, including multiple scattering but no diffraction effects, or based on wave optics, including diffraction but no multiple scattering effects. The few exceptions to this tendency are based on rigorous solution of Maxwell's equations and are computationally intractable for surface microgeometries that are tens or hundreds of microns wide. We set up a measurement equation for combining results from single-scattering scalar diffraction theory with multiple-scattering geometric optics using Monte Carlo integration. Since we consider arbitrary surface microgeometry, our method enables us to compute the expected bidirectional scattering of the metasurfaces with increasingly small details that are seen more and more often in production. In addition, we can take a measured microstructure as input and, for example, compute the difference in bidirectional scattering between a desired surface and a produced surface. In effect, our model can account both for diffraction colors due to wavelength-sized features in the microgeometry and for brightening due to multiple scattering.
We include scalar diffraction for refraction, and we verify that our model is reasonable by comparing with the rigorous solution for a microsurface with half ellipsoids.

Item Cosserat Rod with rh-Adaptive Discretization (The Eurographics Association and John Wiley & Sons Ltd., 2020) Wen, Jiahao; Chen, Jiong; Umetani, Nobuyuki; Bao, Hujun; Huang, Jin; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
Rod-like one-dimensional elastic objects often exhibit complex behaviors that pose great challenges to discretization methods pursuing a faithful simulation. By moving only a small portion of material points, the Eulerian-on-Lagrangian (EoL) method already shows great adaptivity in handling sharp contact, but it is still far from enough to reproduce the rich and complex geometric details arising in simulations. In this paper, we extend the discrete configuration space by unifying the representation of all Lagrangian and EoL nodes for even more adaptivity, assigning every sample a dynamic material coordinate. However, this extension immediately introduces much more redundancy into the dynamic system. Therefore, we propose an additional energy to control the spatial distribution of all material points, seeking to space them equally with respect to a curvature-based density field as a monitor. This flexible approach can effectively constrain the motion of material points to resolve numerical degeneracy, while simultaneously enabling them to slide notably inside the parametric domain to account for the shape parameterization. In addition, to respond accurately to sharp contact, our method can also insert or remove nodes online and adjust the energy stiffness to suppress possible jittering artifacts that could be excited in a stiff system.
As a result of this hybrid rh-adaptation, our proposed method is capable of reproducing many realistic rod dynamics, such as excessive bending, twisting and knotting, while using only a limited number of elements.

Item A Deep Residual Network for Geometric Decontouring (The Eurographics Association and John Wiley & Sons Ltd., 2020) Ji, Zhongping; Zhou, Chengqin; Zhang, Qiankan; Zhang, Yu-Wei; Wang, Wenping; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
Grayscale images are intensively used to construct or represent geometric details in the field of computer graphics. In practice, the displacement mapping technique often takes an 8-bit grayscale image as input to manipulate the positions of vertices. Human eyes are insensitive to the change of intensity between consecutive gray levels, so a grayscale image provides only 256 levels of luminance. However, when these luminances are converted into geometric elements, artifacts such as false contours become obvious. In this paper, we formulate geometric decontouring as a constrained optimization problem from a geometric perspective. Instead of directly solving this optimization problem, we propose a data-driven method to learn a residual mapping function. We design a Geometric DeContouring Network (GDCNet) to eliminate false contours effectively. To this end, we adopt a ResNet-based network structure and a normal-based loss function. Extensive experimental results demonstrate that accurate reconstructions can be achieved effectively.
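The false contours that motivate geometric decontouring come from 8-bit quantization of a smooth heightfield; a minimal numeric illustration (the ramp and resolution are arbitrary choices):

```python
import numpy as np

# A smooth, slowly varying heightfield quantized to 256 gray levels:
# the resulting "staircase" plateaus are what surface normals expose as
# false contours after displacement mapping.
x = np.linspace(0.0, 1.0, 4096)
height = 0.05 * x                       # gentle slope, 5% of the full range
gray = np.round(height * 255) / 255     # 8-bit quantization
steps = np.diff(gray)

# The quantized surface is flat almost everywhere, then jumps by a full
# gray level: the per-sample gradient is either 0 or ~0.0039 instead of
# the true constant slope (~0.000012), producing visible shading bands.
print(np.unique(np.round(steps, 6)))
```

A decontouring network learns to recover the smooth residual between the quantized and the true heightfield, which is exactly the signal discarded above.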
Our method can be used as a compressed relief representation and enhances the traditional displacement mapping technique, efficiently augmenting 3D models with high-quality geometric details from grayscale images.

Item Deep Separation of Direct and Global Components from a Single Photograph under Structured Lighting (The Eurographics Association and John Wiley & Sons Ltd., 2020) Duan, Zhaoliang; Bieron, James; Peers, Pieter; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
We present a deep learning based solution for separating the direct and global light transport components from a single photograph captured under high-frequency structured lighting with a co-axial projector-camera setup. We employ an architecture with one encoder and two decoders that shares information between the encoder and the decoders, as well as between both decoders, to ensure a consistent decomposition of the two light transport components. Furthermore, our deep learning separation approach does not require binary structured illumination, allowing us to utilize the full resolution capabilities of the projector. Consequently, our deep separation network is able to achieve high-fidelity decompositions for lighting-frequency-sensitive features such as subsurface scattering and specular reflections. We evaluate and demonstrate our direct and global separation method on a wide variety of synthetic and captured scenes.

Item Diversifying Semantic Image Synthesis and Editing via Class- and Layer-wise VAEs (The Eurographics Association and John Wiley & Sons Ltd., 2020) Endo, Yuki; Kanamori, Yoshihiro; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
Semantic image synthesis is a process for generating photorealistic images from a single semantic mask. To enrich the diversity of multimodal image synthesis, previous methods have controlled the global appearance of an output image by learning a single latent space.
However, a single latent code is often insufficient for capturing various object styles, because object appearance depends on multiple factors. To handle the individual factors that determine object styles, we propose a class- and layer-wise extension to the variational autoencoder (VAE) framework that allows flexible control over each object class at levels ranging from local to global by learning multiple latent spaces. Through extensive experiments with real and synthetic datasets in three different domains, we demonstrate that our method generates images that are both plausible and more diverse than those of state-of-the-art methods. We also show that our method enables a wide range of applications in image synthesis and editing tasks.

Item FAKIR: An Algorithm for Revealing the Anatomy and Pose of Statues from Raw Point Sets (The Eurographics Association and John Wiley & Sons Ltd., 2020) Fu, Tong; Chaine, Raphaelle; Digne, Julie; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
3D acquisition of archaeological artefacts has become an essential part of cultural heritage research for preservation and restoration purposes. Statues, in particular, have been at the center of many projects. In this paper, we introduce a way to improve the understanding of acquired statues representing real or imaginary creatures by registering a simple and pliable articulated model to the raw point set data. Our approach performs a Forward And bacKward Iterative Registration (FAKIR), which proceeds joint by joint and needs only a few iterations to converge. We are thus able to detect the pose and elementary anatomy of sculptures, even those with unrealistic body proportions.
By adapting our simple skeleton, our method can work on animals and imaginary creatures.

Item Fast Out-of-Core Octree Generation for Massive Point Clouds (The Eurographics Association and John Wiley & Sons Ltd., 2020) Schütz, Markus; Ohrhallinger, Stefan; Wimmer, Michael; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
We propose an efficient out-of-core octree generation method for arbitrarily large point clouds. It utilizes a hierarchical counting sort to quickly split the point cloud into small chunks, which are then processed in parallel. Levels of detail are generated by subsampling the full data set bottom-up using one of multiple exchangeable sampling strategies. We introduce a fast hierarchical approximate blue-noise strategy and compare it to a uniform random sampling strategy. The throughput, including out-of-core access to disk, generating the octree, and writing the final result to disk, is about an order of magnitude higher than the state of the art, reaching up to around 6 million points per second for the blue-noise approach and up to around 9 million points per second for the uniform random approach on modern SSDs.

Item Fracture Patterns Design for Anisotropic Models with the Material Point Method (The Eurographics Association and John Wiley & Sons Ltd., 2020) Cao, Wei; Lyu, Luan; Ren, Xiaohua; Zhang, Bob; Yang, Zhixin; Wu, Enhua; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
Physically plausible fracture animation is a challenging topic in computer graphics. Most existing approaches focus on the fracture of isotropic materials. We propose a frame-field method for the design of anisotropic brittle fracture patterns. Here, the material anisotropy is determined by two parts: anisotropic elastic deformation and anisotropic damage mechanics. For the elastic deformation, we reformulate the constitutive model of hyperelastic materials to achieve anisotropy by adding additional energy density functions in particular directions.
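A standard form for such a directional energy density term, borrowed from fiber-reinforced hyperelasticity (the specific expression is an assumption for illustration, not necessarily the paper's):

```latex
% Isotropic base energy plus a penalty on stretch along a material direction a:
\Psi(\mathbf{F}) = \Psi_{\mathrm{iso}}(\mathbf{F})
  + \frac{\kappa}{2}\,\bigl(\lVert\mathbf{F}\mathbf{a}\rVert^{2} - 1\bigr)^{2},
% where F is the deformation gradient, a is a unit direction (e.g. from the
% frame field), and kappa controls the strength of the elastic anisotropy.
```

Stiffening the response along `a` while leaving the isotropic part untouched is what lets elastic anisotropy be set independently of the damage anisotropy described next.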
For the damage evolution, we propose an improved phase-field fracture method to simulate the anisotropy by designing a deformation-aware second-order structural tensor. These two parts can represent elastic anisotropy and fracture anisotropy independently, or they can be coupled together to exhibit rich crack effects. To ensure the flexibility of the simulation, we further introduce a frame-field concept to assist in setting the local anisotropy, similar to the fiber orientation of textiles. For the discretization of the deformable object, we adopt the Material Point Method (MPM) owing to its fracture-friendly nature. We also give some design criteria for anisotropic models through comparative analysis. Experiments show that our anisotropic method integrates well with the MPM scheme for simulating the dynamic fracture behavior of anisotropic materials.

Item Generating High-quality Superpixels in Textured Images (The Eurographics Association and John Wiley & Sons Ltd., 2020) Zhang, Zhe; Xu, Panpan; Chang, Jian; Wang, Wencheng; Zhao, Chong; Zhang, Jian Jun; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
Superpixel segmentation is important for promoting various image processing tasks. However, existing methods still have difficulty generating high-quality superpixels in textured images, because they cannot separate textures from structures well. Though texture filtering can be adopted to smooth textures before superpixel segmentation, the filtering also smooths object boundaries and thus weakens the quality of the generated superpixels. In this paper, we propose to use adaptive-scale box smoothing instead of texture filtering to obtain higher-quality texture and boundary information. Based on this, we design a novel distance metric to measure the distance between different pixels, which considers boundary, color and Euclidean distance simultaneously.
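A combined metric of this kind can be sketched in the style of SLIC with an added boundary term (the features, weights, and boundary penalty below are assumptions for illustration, not the paper's definition):

```python
import math

def pixel_distance(p, q, w_color=1.0, w_space=0.5, w_boundary=2.0):
    """Combined distance between a pixel p and a cluster center q.
    Each is (x, y, l, a, b, edge), where `edge` is a hypothetical feature
    accumulating boundary strength crossed on the way to the center."""
    d_color = math.dist(p[2:5], q[2:5])      # CIELAB color difference
    d_space = math.dist(p[:2], q[:2])        # Euclidean distance in the image plane
    d_bound = abs(p[5] - q[5])               # boundary-crossing penalty
    return (w_color * d_color ** 2
            + w_space * d_space ** 2
            + w_boundary * d_bound ** 2) ** 0.5

# A pixel separated from the center by a strong boundary ends up farther
# away than a spatially more distant pixel on the same side, steering
# superpixel assignment to respect object edges.
a = pixel_distance((10, 10, 50, 0, 0, 0.0), (12, 10, 50, 0, 0, 0.0))
b = pixel_distance((11, 10, 50, 0, 0, 1.0), (12, 10, 50, 0, 0, 0.0))
```

The relative weights trade off compactness, color homogeneity, and boundary adherence, mirroring the three terms the abstract names.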
As a result, our method can achieve high-quality superpixel segmentation in textured images without texture filtering. The experimental results demonstrate the superiority of our method over existing methods, including learning-based ones. Benefiting from using boundaries to guide superpixel segmentation, our method can also suppress noise to generate high-quality superpixels in non-textured images.

Item A Graph-based One-Shot Learning Method for Point Cloud Recognition (The Eurographics Association and John Wiley & Sons Ltd., 2020) Fan, Zhaoxin; Liu, Hongyan; He, Jun; Sun, Qi; Du, Xiaoyong; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
Point cloud based 3D vision tasks, such as 3D object recognition, are critical to many real-world applications such as autonomous driving. Many point cloud processing models based on deep learning have been proposed recently. However, they are all large-sample dependent, meaning that a large amount of manually labelled training data is needed to train the model, resulting in high labeling costs. In this paper, to tackle this problem, we propose a One-Shot learning model for Point Cloud Recognition, namely OS-PCR. Different from previous methods, our method formulates a new setting, in which the model needs to see only one sample per class to memorize it at inference time when new classes must be recognized. To fulfill this task, we design three modules in the model: an Encoder Module, an Edge-conditioned Graph Convolutional Network Module, and a Query Module. To evaluate the performance of the proposed model, we build a one-shot learning benchmark dataset for 3D point cloud analysis.
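The one-shot protocol itself (embed one support sample per class, then classify queries by nearest embedding) can be sketched generically; the hand-crafted embedding below is a stand-in for the paper's encoder and graph modules, not their method:

```python
import math

def embed(point_cloud):
    """Stand-in embedding: simple global statistics of the point set.
    A learned encoder (e.g. an encoder plus graph-convolution modules)
    would replace this with a discriminative feature vector."""
    n = len(point_cloud)
    cx = sum(p[0] for p in point_cloud) / n
    cy = sum(p[1] for p in point_cloud) / n
    cz = sum(p[2] for p in point_cloud) / n
    spread = sum(math.dist(p, (cx, cy, cz)) for p in point_cloud) / n
    return (cx, cy, cz, spread)

class OneShotClassifier:
    def __init__(self):
        self.prototypes = {}     # class label -> single support embedding

    def memorize(self, label, support_cloud):
        """Seeing one sample per class once is all the 'training' done."""
        self.prototypes[label] = embed(support_cloud)

    def classify(self, query_cloud):
        q = embed(query_cloud)
        return min(self.prototypes, key=lambda c: math.dist(q, self.prototypes[c]))

clf = OneShotClassifier()
clf.memorize("flat", [(x * 0.1, y * 0.1, 0.0) for x in range(10) for y in range(10)])
clf.memorize("tall", [(0.0, 0.0, z * 0.1) for z in range(100)])
```

New classes can be added at inference time by one `memorize` call each, which is the setting the abstract describes.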
Then, comprehensive experiments are conducted on it to demonstrate the effectiveness of our proposed model.

Item Human Pose Transfer by Adaptive Hierarchical Deformation (The Eurographics Association and John Wiley & Sons Ltd., 2020) Zhang, Jinsong; Liu, Xingzi; Li, Kun; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
Human pose transfer, as a misaligned image generation task, is very challenging. Existing methods cannot effectively utilize the input information and often fail to preserve the style and shape of hair and clothes. In this paper, we propose an adaptive human pose transfer network with two hierarchical deformation levels. The first level generates human semantic parsing aligned with the target pose, and the second level generates the final textured person image in the target pose under this semantic guidance. To avoid the drawback of vanilla convolution, which treats all pixels as valid information, we use gated convolution in both levels to dynamically select the important features and adaptively deform the image layer by layer. Our model has very few parameters and is fast to converge. Experimental results demonstrate that our model achieves better performance, with more consistent hair, faces and clothes, using fewer parameters than state-of-the-art methods. Furthermore, our method can be applied to clothing texture transfer. The code is available for research purposes at https://github.com/Zhangjinso/PINet_PG.

Item Image-Driven Furniture Style for Interactive 3D Scene Modeling (The Eurographics Association and John Wiley & Sons Ltd., 2020) Weiss, Tomer; Yildiz, Ilkay; Agarwal, Nitin; Ataer-Cansizoglu, Esra; Choi, Jae-Woo; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
Creating realistically styled spaces is a complex task, which involves design know-how about which furniture pieces go well together. Interior style follows abstract rules involving color, geometry and other visual elements.
Following such rules, users manually select similar-style items from large repositories of 3D furniture models, a process that is both laborious and time-consuming. We propose a method for fast-tracking style-similarity tasks by learning furniture style compatibility from interior scene images. Such images contain more style information than images depicting individual furniture pieces. To understand style, we train a deep learning network on a classification task. Based on image embeddings extracted from our network, we measure the stylistic compatibility of furniture. We demonstrate our method with several 3D model style-compatibility results and with an interactive system for modeling style-consistent scenes.

Item InstanceFusion: Real-time Instance-level 3D Reconstruction Using a Single RGBD Camera (The Eurographics Association and John Wiley & Sons Ltd., 2020) Lu, Feixiang; Peng, Haotian; Wu, Hongyu; Yang, Jun; Yang, Xinhang; Cao, Ruizhi; Zhang, Liangjun; Yang, Ruigang; Zhou, Bin; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
We present InstanceFusion, a robust real-time system to detect, segment, and reconstruct instance-level 3D objects of indoor scenes with a hand-held RGBD camera. It combines the strengths of deep learning and traditional SLAM techniques to produce visually compelling 3D semantic models. The key to its success is our novel segmentation scheme and efficient instance-level data fusion, both implemented on the GPU. Specifically, for each incoming RGBD frame, we take advantage of the RGBD features, the 3D point cloud, and the reconstructed model to perform instance-level segmentation. The corresponding RGBD data, along with the instance ID, are then fused into the surfel-based models. To store and update these data efficiently, we design and implement a new data structure using the OpenGL Shading Language.
Experimental results show that our method advances the state-of-the-art (SOTA) methods in instance segmentation and data fusion by a large margin. In addition, our instance segmentation improves the precision of 3D reconstruction, especially at loop closure. The InstanceFusion system runs at 20.5 Hz on a consumer-level GPU, which supports a number of augmented reality (AR) applications (e.g., 3D model registration, virtual interaction, AR maps) and robot applications (e.g., navigation, manipulation, grasping). To facilitate future research and make our system easier to reproduce, the source code, data, and trained model are released on GitHub: https://github.com/Fancomi2017/InstanceFusion.

Item Interactive Design and Preview of Colored Snapshots of Indoor Scenes (The Eurographics Association and John Wiley & Sons Ltd., 2020) Fu, Qiang; Yan, Hai; Fu, Hongbo; Li, Xueming; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
This paper presents an interactive system for quickly designing and previewing colored snapshots of indoor scenes. Unlike high-quality 3D indoor scene rendering, which often takes several minutes to render a moderately complicated scene under a specific color theme even on high-performance computing devices, our system aims to improve the effectiveness of color theme design for indoor scenes and employs an image colorization approach to efficiently obtain high-resolution snapshots with editable colors. Given several pre-rendered, multi-layer, gray images of the same indoor scene snapshot, our system colorizes and merges them into a single colored snapshot. Our system also assists users in assigning colors to certain objects/components and infers more harmonious colors for the unassigned objects, based on pre-collected priors, to guide the colorization.
The quickly generated snapshots provide previews of interior design schemes under different color themes, making it easy to settle on a personalized indoor scene design. To demonstrate the usability and effectiveness of the system, we present a series of experimental results on indoor scenes of different types, and we compare our method with a state-of-the-art method for indoor scene material and color suggestion and with offline/online rendering software packages.