PG2022 Short Papers, Posters, and Work-in-Progress Papers
Browsing PG2022 Short Papers, Posters, and Work-in-Progress Papers by Title
Now showing 1 - 14 of 14
Item Adaptive and Dynamic Regularization for Rolling Guidance Image Filtering (The Eurographics Association, 2022) Fukatsu, Miku; Yoshizawa, Shin; Takemura, Hiroshi; Yokota, Hideo; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
Separating the shapes and textures of digital images at different scales is useful in computer graphics. The Rolling Guidance (RG) filter, which removes structures smaller than a specified scale while preserving salient edges, has attracted considerable attention. Conventional RG-based filters have some drawbacks, including smoothness/sharpness quality that depends on scale, and non-uniform convergence. This paper proposes a novel RG-based image filter with more stable filtering quality across varying scales. Our approach applies adaptive and dynamic regularization to a recursive regression model in the RG framework to produce greater edge saliency and appropriate scale convergence. Our numerical experiments demonstrate filtering results with uniform convergence and high accuracy across varying scales.

Item Aesthetic Enhancement via Color Area and Location Awareness (The Eurographics Association, 2022) Yang, Bailin; Wang, Qingxu; Li, Frederick W. B.; Liang, Xiaohui; Wei, Tianxiang; Zhu, Changrui; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
Choosing a suitable color palette can typically improve image aesthetics; a naive approach is to choose harmonious colors from pre-defined color combinations in color wheels. However, color palettes only consider which color types are used, without specifying their amount in an image. It also remains challenging to automatically assign individual palette colors to suitable image regions so as to maximize image aesthetic quality. Motivated by these observations, we propose to construct a contribution-aware color palette from images with high aesthetic quality, enabling color transfer that matches the coloring and regional characteristics of an input image.
We hence exploit public image datasets, extracting color composition and embedded color contribution features from aesthetic images to generate our proposed color palettes. We consider both image area ratio and image location as the color contribution features to extract. Our quantitative experiments demonstrate that our method outperforms existing methods on SSIM (Structural SIMilarity) and PSNR (Peak Signal-to-Noise Ratio) for objective image quality measurement, and on no-reference image assessment (NIMA) for image aesthetic scoring.

Item DARC: A Visual Analytics System for Multivariate Applicant Data Aggregation, Reasoning and Comparison (The Eurographics Association, 2022) Hou, Yihan; Liu, Yu; Wang, He; Zhang, Zhichao; Li, Yue; Liang, Hai-Ning; Yu, Lingyun; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
People often make decisions based on their comprehensive understanding of various materials, judgement of reasons, and comparison among choices. For instance, when hiring committees review multivariate applicant data, they need to consider and compare different aspects of the applicants' materials. However, the amount and complexity of multivariate data make it difficult to analyze the data, extract the most salient information, and then rapidly form opinions based on the extracted information. A fast and comprehensive understanding of multivariate data sets is therefore a pressing need in many fields, such as business and education. In this work, we conducted in-depth interviews with stakeholders and characterized the user requirements involved in data-driven decision making when reviewing school applications. Based on these requirements, we propose DARC, a visual analytics system for facilitating decision making on multivariate applicant data. Through the system, users can gain insights into the multivariate data, form an overview of all data cases, and retrieve the original data quickly and intuitively.
The effectiveness of DARC is validated through observational user evaluations and interviews.

Item DFGA: Digital Human Faces Generation and Animation from the RGB Video using Modern Deep Learning Technology (The Eurographics Association, 2022) Jiang, Diqiong; You, Lihua; Chang, Jian; Tong, Ruofeng; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
High-quality and personalized digital human faces have been widely used in media and entertainment, from film and game production to virtual reality. However, existing technology for generating digital faces requires extremely intensive labor, which prevents the large-scale popularization of digital face technology. To tackle this problem, the proposed research will investigate deep learning-based facial modeling and animation technologies to (1) create personalized face geometry from a single image, including a recognizable neutral face shape and believable personalized blendshapes; (2) generate personalized production-level facial skin textures from a video or image sequence; and (3) automatically drive and animate a 3D target avatar from an actor's 2D facial video or audio. Our innovation is to achieve these tasks both efficiently and precisely using an end-to-end framework built on modern deep learning technology (StyleGAN, Transformer, NeRF).

Item Human Face Modeling based on Deep Learning through Line-drawing (The Eurographics Association, 2022) Kawanaka, Yuta; Sato, Syuhei; Sakurai, Kaisei; Gao, Shangce; Tang, Zheng; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
This paper presents a deep learning-based method for creating 3D human face models. In recent years, several sketch-based shape modeling methods have been proposed. These methods allow the user to easily model various shapes, including animals, buildings, and vehicles. However, few such methods target human face models.
If we can create 3D human face models via line-drawing, models of cartoon or fantasy characters can be created easily. To achieve this, we propose a sketch-based face modeling method. When a single line-drawing image is input to our system, a corresponding 3D face model is generated. Our system is based on deep learning: many human face models and corresponding images rendered as line-drawings are prepared, and a network is trained on these datasets. For the network, we adopt a previous method for reconstructing human bodies from real images and propose some extensions to enhance learning accuracy. Several examples demonstrate the usefulness of our system.

Item Improving View Independent Rendering for Multiview Effects (The Eurographics Association, 2022) Gavane, Ajinkya; Watson, Benjamin; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
This paper describes improvements to view independent rendering (VIR) that make it much more useful for multiview effects. Improved VIR's (iVIR's) soft shadows are nearly identical in quality to VIR's and produced at comparable speed (several times faster than multipass rendering), even when using a simpler bufferless implementation that does not risk overflow. iVIR's omnidirectional shadow results are better still, often nearly twice as fast as VIR's, even when bufferless. Most impressively, iVIR enables complex environment mapping in real time, producing high-quality reflections up to an order of magnitude faster than VIR, and 2-4 times faster than multipass rendering.

Item Interactive Deformable Image Registration with Dual Cursor (The Eurographics Association, 2022) Igarashi, Takeo; Koike, Tsukasa; Kin, Taichi; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
Deformable image registration is the process of deforming a target image to match corresponding features of a reference image. Fully automatic registration remains difficult; thus, manual registration is dominant in practice.
In manual registration, an expert user specifies a set of paired landmarks on the two images; the system then deforms the target image to match each landmark with its counterpart as a batch process. However, the deformation results are difficult for the user to predict, and moving the cursor back and forth between the two images is time-consuming. To improve the efficiency of this manual process, we propose an interactive method wherein the deformation results are continuously displayed as the user clicks and drags each landmark. Additionally, the system displays two cursors, one on the target image and one on the reference image, to reduce the amount of mouse movement required. The results of a user study reveal that the proposed interactive method achieves higher accuracy and faster task completion than traditional batch landmark placement.

Item An Interactive Modeling System of Japanese Castles with Decorative Objects (The Eurographics Association, 2022) Umeyama, Shogo; Dobashi, Yoshinori; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
We present an interactive modeling system for Japanese castles. We develop a user interface that can generate the fundamental structure of the castle tower, consisting of stone walls, turrets, and roofs. By clicking on the screen with a mouse, the relevant parameters for the fundamental structure are automatically calculated to generate 3D models of Japanese-style castles. We use characteristic curves that often appear in ancient Japanese architecture for the realistic modeling of the castles.

Item Intersection Distance Field Collision for GPU (The Eurographics Association, 2022) Krayer, Bastian; Görge, Rebekka; Müller, Stefan; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
We present a framework for finding collision points between objects represented by signed distance fields. Particles are used to sample the region where intersections can occur.
The distance field representation is used to project the particles onto the surface of the intersection of both objects. From there, information such as collision normals and intersection depth can be extracted. This allows various types of objects to be handled in a unified way. Owing to the particle approach, the algorithm is well suited to the GPU.

Item Learning a Style Space for Interactive Line Drawing Synthesis from Animated 3D Models (The Eurographics Association, 2022) Wang, Zeyu; Wang, Tuanfeng Y.; Dorsey, Julie; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
Most non-photorealistic rendering (NPR) methods for line drawing synthesis operate on a static shape. They are not tailored to animated 3D models because of the extensive per-frame parameter tuning needed to achieve the intended look and natural transitions. This paper introduces a framework for interactive line drawing synthesis from animated 3D models based on a learned style space for drawing representation and interpolation. We refer to style as the relationship between stroke placement in a line drawing and its corresponding geometric properties. Starting from a given sequence of an animated 3D character, a user creates drawings for a set of keyframes. Our system embeds the raster drawings into a latent style space after disentangling them from the underlying geometry. By traversing the latent space, our system enables a smooth transition between the input keyframes. The user may also edit, add, or remove keyframes interactively, similar to a typical keyframe-based workflow. We implement our system with deep neural networks trained on synthetic line drawings produced by a combination of NPR methods. Our drawing-specific supervision and optimization-based embedding mechanism allow generalization from NPR line drawings to user-created drawings at run time.
Experiments show that our approach generates high-quality line drawing animations while allowing interactive control of the drawing style across frames.

Item Multi-instance Referring Image Segmentation of Scene Sketches based on Global Reference Mechanism (The Eurographics Association, 2022) Ling, Peng; Mo, Haoran; Gao, Chengying; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
Scene sketch segmentation based on referring expressions plays an important role in sketch editing for the anime industry. While most existing referring image segmentation approaches are designed for the standard task of generating a binary segmentation mask for a single target or a group of targets, we consider it necessary to equip these models with the ability to perform multi-instance segmentation. To this end, we propose GRM-Net, a one-stage framework tailored for multi-instance referring image segmentation of scene sketches. We extract language features from the expression and fuse them into a conventional instance segmentation pipeline, filtering out undesired instances in a coarse-to-fine manner while keeping the matched ones. To model the relative arrangement of the objects and the relationships among them from a global view, we propose a global reference mechanism (GRM) that assigns references to each detected candidate to identify its position.
We compare against existing methods designed for multi-instance referring image segmentation of scene sketches and for the standard referring image segmentation task, and the results demonstrate the effectiveness and superiority of our approach.

Item Pacific Graphics 2022 - Short Papers, Posters, and Work-in-Progress Papers: Frontmatter (The Eurographics Association, 2022) Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak

Item Reconstructing Bounding Volume Hierarchies from Memory Traces of Ray Tracers (The Eurographics Association, 2022) Buelow, Max von; Stensbeck, Tobias; Knauthe, Volker; Guthe, Stefan; Fellner, Dieter W.; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
The ongoing race to improve computer graphics leads to more complex GPU hardware and ray tracing techniques whose internal functionality is sometimes hidden from the user. Bounding volume hierarchies and their construction are an important performance aspect of such ray tracing implementations. We propose a novel approach that uses binary instrumentation to collect memory traces and then extracts the bounding volume hierarchy (BVH) by analyzing access patterns. Our reconstruction can combine memory traces captured independently from multiple ray tracing views, improving the reconstruction quality. It reaches accuracies of 30% to 45% against the ground-truth BVH when ray tracing a single view of a simple scene with one object. With multiple views it is even possible to reconstruct the whole BVH; we already achieve 98% with just seven views.
Because our approach is largely independent of the data structures used internally, these accurate reconstructions serve as a first step toward estimating the unknown construction techniques of ray tracing implementations.

Item Shadow Removal via Cascade Large Mask Inpainting (The Eurographics Association, 2022) Kim, Juwan; Kim, Seung-Heon; Jang, Insung; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
We present a novel shadow removal framework based on an image inpainting approach. The proposed method consists of two cascaded Large Mask Inpainting (LaMa) networks, one for shadow inpainting and one for edge inpainting. Experiments with the ISTD and adjusted ISTD datasets show that our method achieves shadow removal results competitive with state-of-the-art methods. We also show that shadows are well removed from images with complex and large shadows, such as urban aerial images.
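The shadow removal abstract above frames the problem as mask-guided inpainting followed by a second pass that cleans up the shadow boundary. As a rough illustration of that two-stage cascade idea only, the sketch below substitutes a classical diffusion (harmonic) fill for the learned LaMa networks used in the paper; the function names and the choice of a simple 4-neighbor Jacobi fill are this sketch's own assumptions, not the authors' method.

```python
import numpy as np

def diffusion_inpaint(img, mask, iters=200):
    """Fill masked pixels by repeated 4-neighbor averaging (Jacobi
    iteration for the Laplace equation), a classical stand-in for a
    learned inpainting network. `mask` is True where pixels are unknown."""
    out = img.astype(np.float64).copy()
    out[mask] = out[~mask].mean()  # crude initialization of the hole
    for _ in range(iters):
        p = np.pad(out, 1, mode="edge")
        avg = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        out[mask] = avg[mask]  # only unknown pixels are updated
    return out

def remove_shadow(img, shadow_mask):
    """Two-stage cascade in the spirit of the abstract: stage 1 fills the
    whole shadow region; stage 2 re-fills only the inner boundary ring of
    the shadow to smooth the seam against the unshadowed surroundings."""
    stage1 = diffusion_inpaint(img, shadow_mask)
    # Inner ring: shadow pixels with at least one non-shadow 4-neighbor.
    p = np.pad(shadow_mask, 1, mode="constant")
    eroded = (shadow_mask & p[:-2, 1:-1] & p[2:, 1:-1]
              & p[1:-1, :-2] & p[1:-1, 2:])
    band = shadow_mask & ~eroded
    return diffusion_inpaint(stage1, band, iters=100)
```

On a smooth image region, the harmonic fill reconstructs the shadow-free content almost exactly, which is why diffusion is a reasonable didactic proxy; the learned networks in the paper are what make this work on textured real-world imagery.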