PG2020 Short Papers, Posters, and Work-in-Progress Papers
Now showing 13 items, sorted by title.
Editors: Lee, Sung-hee; Zollmann, Stefanie; Okabe, Makoto; Wuensche, Burkhard

Item: Day-to-Night Road Scene Image Translation Using Semantic Segmentation (The Eurographics Association, 2020)
Authors: Baek, Seung Youp; Lee, Sungkil
We present a semi-automated framework that translates road scene images from the day-time domain to the night-time domain. Unlike recent studies based on Generative Adversarial Networks (GANs), we avoid learned translation and its random failures. Our framework uses semantic annotation to extract scene elements, perceives the scene structure and depth, and applies a per-element translation. Experimental results demonstrate that our framework can synthesize higher-resolution results without translation artifacts.

Item: A Deep Learning Based Interactive Sketching System for Fashion Images Design (The Eurographics Association, 2020)
Authors: Li, Yao; Yu, Xiang Gang; Han, Xiao Guang; Jiang, Nian Juan; Jia, Kui; Lu, Jiang Bo
In this work, we propose an interactive system to design diverse, high-quality garment images from fashion sketches and texture information. The major challenge behind this system is generating high-quality, detailed texture according to the user-provided texture information. Prior works mainly use a texture-patch representation and try to map a small texture patch to a whole garment image, and hence cannot generate high-quality details. In contrast, inspired by intrinsic image decomposition, we decompose this task into texture synthesis and shading enhancement. In particular, we propose a novel bi-colored edge texture representation to synthesize textured garment images and a shading enhancer to render shading based on the grayscale edges. The bi-colored edge representation provides simple but effective texture cues and color constraints, so that details can be better reconstructed. Moreover, with the rendered shading, the synthesized garment image becomes more vivid.

Item: An Energy-Conserving Hair Shading Model Based on Neural Style Transfer (The Eurographics Association, 2020)
Authors: Qiao, Zhi; Kanai, Takashi
We present a novel approach for shading photorealistic hair animation, an essential visual element for depicting realistic hair on virtual characters. Our model shades high-quality hair quickly by extending conditional Generative Adversarial Networks. Furthermore, our method is much faster than previous, onerous rendering algorithms and produces fewer artifacts than other neural image translation methods. In this work, we provide a novel energy-conserving hair shading model that retains the vast majority of the semi-transparent appearance and accurately reproduces the interaction with the lights in the scene. Our method is simple to implement, and is faster and more computationally efficient than previous algorithms.

Item: Illumination Space: A Feature Space for Radiance Maps (The Eurographics Association, 2020)
Authors: Chalmers, Andrew; Zickler, Todd; Rhee, Taehyun
Radiance maps (RMs) are used to capture the lighting properties of real-world environments. Databases of RMs are useful for various rendering applications such as look development, live-action compositing, mixed reality, and machine learning. Such databases are of little use, however, if they cannot be organized in a meaningful way. To address this, we introduce the illumination space, a feature space that arranges RM databases based on illumination properties. We avoid manual labeling by automatically extracting features from an RM that provide a concise and semantically meaningful representation of its typical lighting effects. This is made possible by the following contributions: a method to automatically extract a small set of dominant and ambient lighting properties from RMs, and a low-dimensional (5D) light feature vector summarizing these properties to form the illumination space. Our method is motivated by how the RM illuminates the scene rather than by the textural content of the RM.

Item: Interactive Video Completion with SiamMask (The Eurographics Association, 2020)
Authors: Tsubota, Satsuki; Okabe, Makoto
In this project, we are developing a method to perform video completion quickly and easily. Under the proposed method, the user specifies a target object by drawing a bounding box around it in the first frame of the video; this bounding box is taken as input by SiamMask. SiamMask then tracks the target object and produces its mask in each frame. The resulting masks are then taken as input by Interactive Video Completion, which produces the final video completion result. SiamMask and Interactive Video Completion take several seconds to process an 80-frame video at a pixel resolution of 854x480, i.e., these methods are computationally efficient. The goal of this project is that, after drawing the bounding box, the user immediately obtains the video completion result. However, the mask produced by our current method is not always perfect. When imperfections arise, the user still has to modify the mask manually using image-editing software. In the near future, we want to improve the quality of the automatically produced masks and further reduce the user's burden of manual modification.

Item: Monocular 3D Fluid Volume Reconstruction Based on a Multilayer External Force Guiding Model (The Eurographics Association, 2020)
Authors: Su, Zhiyuan; Nie, Xiaoying; Shen, Xukun; Hu, Yong
In this paper, we present a monocular 3D fluid volume reconstruction technique that alleviates challenging parameter tuning while vividly reproducing the inflow and outflow of the video scene. To reconstruct the geometric appearance and 3D motion of the fluid in the video, we propose a multilayer external-force guiding model that formulates the effect of target particles on fluid particles. This multilayer model makes the whole 3D fluid volume subject to the shape and motion of the water captured in the input video, so we can avoid tedious and laborious parameter tuning and easily balance the smoothness of the fluid volume against the details of the water surface. In addition, for the inflow and outflow of the 3D fluid volume, we construct a generation-and-extinction model that adds or deletes fluid particles according to the 3D velocity field of the target particles, calculated by a hybrid model coupling SfS with optical flow. Experiments show that our method compares favorably to the state of the art in terms of reconstruction quality and generalizes better to real captured fluid. Furthermore, the reconstructed 3D fluid volume can be effectively applied to any desired new scenario.

Item: Pacific Graphics 2020 - Short Papers, Posters, and Work-in-Progress Papers: Frontmatter (The Eurographics Association, 2020)
Authors: Lee, Sung-hee; Zollmann, Stefanie; Okabe, Makoto; Wuensche, Burkhard

Item: Reconstructing Monte Carlo Errors as a Blue-noise in Screen Space (The Eurographics Association, 2020)
Authors: Liu, Hongli; Han, Honglei
We present a novel method that reconstructs the Monte Carlo errors of renderings as blue noise in screen space. To this end, we conform the statistical result of per-pixel integration to a precomputed blue-noise mask. Thanks to the properties of blue noise, the reconstructed renderings achieve higher visual fidelity. The method has two key features. First, it is fast and straightforward to implement. Second, it produces stable blue-noise-error renderings regardless of the correlation of the integrands. Preliminary results show robust blue-noise spectra with promising visual improvements in the renderings.

Item: A Robust Feature-aware Sparse Mesh Representation (The Eurographics Association, 2020)
Authors: Fuentes Perez, Lizeth Joseline; Romero Calla, Luciano Arnaldo; Montenegro, Anselmo Antunes; Mura, Claudio; Pajarola, Renato
The sparse representation of signals defined on Euclidean domains has been successfully applied in signal processing. Bringing the power of sparse representations to non-regular domains is still a challenge, but promising approaches have started emerging recently. In this paper, we investigate the problem of sparsely representing discrete surfaces and propose a new representation capable of providing tools for solving different geometry-processing problems. The sparse discrete surface representation is obtained by combining innovative approaches into an integrated method. First, to deal with irregular mesh domains, we devise a new way to subdivide discrete meshes into a set of patches using feature-aware seed sampling. Second, we achieve good surface approximation with over-fitting control by combining the power of a continuous global dictionary representation with a modified Orthogonal Matching Pursuit. The resulting discrete surface approximations preserve shape features while being robust to over-fitting. Our results show that the method is promising for applications such as surface re-sampling and mesh compression.

Item: RTSDF: Generating Signed Distance Fields in Real Time for Soft Shadow Rendering (The Eurographics Association, 2020)
Authors: Tan, Yu Wei; Chua, Nicholas; Koh, Clarence; Bhojan, Anand
Signed Distance Fields (SDFs) for surface representation are commonly generated offline and subsequently loaded into interactive applications like games. Since they are not updated every frame, they provide only a rigid surface representation. While there are methods to generate them quickly on the GPU, the efficiency of these approaches is limited at high resolutions. This paper showcases a novel technique that combines jump flooding and ray tracing to generate approximate SDFs in real time for soft shadow approximation, achieving prominent shadow penumbras while maintaining interactive frame rates.

Item: Simple Simulation of Curved Folds Based on Ruling-aware Triangulation (The Eurographics Association, 2020)
Authors: Sasaki, Kosuke; Mitani, Jun
Folding a thin sheet material such as paper along curves creates a developable surface composed of ruled surface patches. When using such surfaces in design, designers often repeat a process of folding along curves drawn on a sheet and checking the folded shape. Although several methods for constructing such shapes on a computer have been proposed, it is still difficult to check the folded shapes instantly from the crease patterns. In this paper, we propose a simple method that approximately simulates curved folds with a triangular mesh generated from a crease pattern. The proposed method first approximates the curves in a crease pattern with polylines and then generates a triangular mesh. To construct the discretized developable surface, the edges in the mesh are rearranged so that they align with the estimated rulings. The proposed method is characterized by its simplicity and is implemented in an existing origami simulator that runs in a web browser.

Item: Stroke Synthesis for Inbetweening of Rough Line Animations (The Eurographics Association, 2020)
Authors: Chen, Jiazhou; Zhu, Xinding; Bénard, Pierre; Barla, Pascal
In this paper, we present a stroke synthesis approach for the inbetweening of rough line animations. In a pre-process, keyframe strokes are transformed by local perturbation and sliding to generate a number of candidate strokes, and adjacent keyframes are registered together. During inbetweening, candidate strokes are transferred to the intermediate frames and selected based on the desired spatial distribution and length constraints.

Item: Using Landmarks for Near-Optimal Pathfinding on the CPU and GPU (The Eurographics Association, 2020)
Authors: Reischl, Maximilian; Knauer, Christian; Guthe, Michael
We present a new approach to pathfinding in weighted graphs using precomputed minimal distance fields. By selecting the most promising minimal distance field at any given node and switching between fields, our algorithm tries to find the shortest path. As we show, this approach scales very well across different topologies, hardware, and graph sizes, and has a mean length error below 1% while using reasonable amounts of memory. By keeping the structure simple and backtracking minimal, we are able to use the same approach on the massively parallel GPU, reducing the run time even further.
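For context on the last abstract: a minimal distance field is simply the table of exact shortest distances from one landmark node, computable with Dijkstra's algorithm. The classic way such precomputed fields support near-optimal pathfinding is the ALT heuristic, where the triangle inequality turns any landmark field into an admissible A* lower bound. The sketch below shows that standard idea only; it is not the paper's greedy field-switching algorithm, and the graph format and function names are illustrative assumptions.

```python
import heapq

def dijkstra(graph, source):
    """Exact shortest distances from `source` to every node.
    graph: {node: [(neighbor, weight), ...]}, connected, non-negative weights."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def alt_shortest_path(graph, fields, start, goal):
    """A* guided by landmark distance fields (`fields` = list of dijkstra()
    results). h(u) = max_L |d_L(u) - d_L(goal)| is an admissible lower bound
    on dist(u, goal) by the triangle inequality."""
    def h(u):
        return max((abs(d[u] - d[goal]) for d in fields), default=0.0)

    g = {start: 0.0}
    pq = [(h(start), start)]
    while pq:
        f, u = heapq.heappop(pq)
        if u == goal:
            return g[u]  # length of the shortest start-goal path
        if f > g[u] + h(u):
            continue  # stale queue entry
        for v, w in graph[u]:
            ng = g[u] + w
            if ng < g.get(v, float("inf")):
                g[v] = ng
                heapq.heappush(pq, (ng + h(v), v))
    return float("inf")  # goal unreachable
```

With exact fields from well-placed landmarks the lower bound is often tight along the shortest path, so A* expands few nodes; the paper's contribution is instead to follow the most promising field greedily with minimal backtracking, a control flow that maps well onto the GPU.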