PG2021 Short Papers, Posters, and Work-in-Progress Papers

Pacific Graphics 2021 - Short Papers, Posters, and Work-in-Progress Papers
Wellington, New Zealand

(for Full Papers (CGF) see PG 2021 - CGF 40-7)
Fast Rendering and Movement
Fast and Lightweight Path Guiding Algorithm on GPU
Juhyeon Kim and Young Min Kim
Real-Time Antialiased Area Lighting Using Multi-Scale Linearly Transformed Cosines
Chengzhi Tao, Jie Guo, Chen Gong, Beibei Wang, and Yanwen Guo
CSLF: Cube Surface Light Field and Its Sampling, Compression, Real-Time Rendering
Xiaofei Ai, Yigang Wang, and Simin Kou
Maximum-Clearance Planar Motion Planning Based on Recent Developments in Computing Minkowski Sums and Voronoi Diagrams
Mingyu Jung and Myung-Soo Kim
Human Motion Synthesis and Control via Contextual Manifold Embedding
Rui Zeng, Ju Dai, Junxuan Bai, Junjun Pan, and Hong Qin
Neural Rendering and 3D Models
Neural Proxy: Empowering Neural Volume Rendering for Animation
Zackary P. T. Sin, Peter H. F. Ng, and Hong Va Leong
Neural Screen Space Rendering of Direct Illumination
Christian Suppan, Andrew Chalmers, Junhong Zhao, Alex Doronin, and Taehyun Rhee
Art-directing Appearance using an Environment Map Latent Space
Lohit Petikam, Andrew Chalmers, Ken Anjyo, and Taehyun Rhee
3D-CariNet: End-to-end 3D Caricature Generation from Natural Face Images with Differentiable Renderer
Meijia Huang, Ju Dai, Junjun Pan, Junxuan Bai, and Hong Qin
SM-NET: Reconstructing 3D Structured Mesh Models from Single Real-World Image
Yue Yu, Ying Li, Jing-Yu Zhang, and Yue Yang
Works-In-Progress and Posters
Cloud-Assisted Hybrid Rendering for Thin-Client Games and VR Applications
Yu Wei Tan, Louiz Kim-Chan, Anthony Halim, and Anand Bhojan
View-Dependent Impostors for Architectural Shape Grammars
Chao Jia, Moritz Roth, Bernhard Kerbl, and Michael Wimmer
Temporally Stable Content-Adaptive and Spatio-Temporal Shading Rate Assignment for Real-Time Applications
Stefan Stappen, Johannes Unterguggenberger, Bernhard Kerbl, and Michael Wimmer
Peripheral Vision in Simulated Driving: Comparing CAVE and Head-mounted Display
Tana Tanoi and Neil A. Dodgson
SDALIE-GAN: Structure and Detail Aware GAN for Low-light Image Enhancement
Youxin Pang, Mengke Yuan, Yuchun Chang, and Dong-Ming Yan
User-centred Depth Estimation Benchmarking for VR Content Creation from Single Images
Anthony Dickson, Alistair Knott, and Stefanie Zollmann
Volumetric Video Streaming Data Reduction Method Using Front-mesh 3D Data
Xiaotian Zhao and Takafumi Okuyama
Image Processing and Synthesis
Constraint Synthesis for Parametric CAD
Aman Mathur and Damien Zufferey
Hierarchical Link and Code: Efficient Similarity Search for Billion-Scale Image Sets
Kaixiang Yang, Hongya Wang, Ming Du, Zhizheng Wang, Zongyuan Tan, and Yingyuan Xiao
Real-time Content Projection onto a Tunnel from a Moving Subway Train
Jaedong Kim, Haegwang Eom, Jihwan Kim, Younghui Kim, and Junyong Noh
GANST: Gradient-aware Arbitrary Neural Style Transfer
Haichao Zhu

BibTeX (PG2021 Short Papers, Posters, and Work-in-Progress Papers)
@inproceedings{10.2312:pg.20211379,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor = {Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard},
  title = {{Fast and Lightweight Path Guiding Algorithm on GPU}},
  author = {Kim, Juhyeon and Kim, Young Min},
  year = {2021},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-162-5},
  DOI = {10.2312/pg.20211379}
}
@inproceedings{10.2312:pg.20211381,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor = {Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard},
  title = {{CSLF: Cube Surface Light Field and Its Sampling, Compression, Real-Time Rendering}},
  author = {Ai, Xiaofei and Wang, Yigang and Kou, Simin},
  year = {2021},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-162-5},
  DOI = {10.2312/pg.20211381}
}
@inproceedings{10.2312:pg.20211380,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor = {Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard},
  title = {{Real-Time Antialiased Area Lighting Using Multi-Scale Linearly Transformed Cosines}},
  author = {Tao, Chengzhi and Guo, Jie and Gong, Chen and Wang, Beibei and Guo, Yanwen},
  year = {2021},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-162-5},
  DOI = {10.2312/pg.20211380}
}
@inproceedings{10.2312:pg.20211382,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor = {Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard},
  title = {{Maximum-Clearance Planar Motion Planning Based on Recent Developments in Computing Minkowski Sums and Voronoi Diagrams}},
  author = {Jung, Mingyu and Kim, Myung-Soo},
  year = {2021},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-162-5},
  DOI = {10.2312/pg.20211382}
}
@inproceedings{10.2312:pg.20211383,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor = {Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard},
  title = {{Human Motion Synthesis and Control via Contextual Manifold Embedding}},
  author = {Zeng, Rui and Dai, Ju and Bai, Junxuan and Pan, Junjun and Qin, Hong},
  year = {2021},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-162-5},
  DOI = {10.2312/pg.20211383}
}
@inproceedings{10.2312:pg.20211384,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor = {Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard},
  title = {{Neural Proxy: Empowering Neural Volume Rendering for Animation}},
  author = {Sin, Zackary P. T. and Ng, Peter H. F. and Leong, Hong Va},
  year = {2021},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-162-5},
  DOI = {10.2312/pg.20211384}
}
@inproceedings{10.2312:pg.20211385,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor = {Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard},
  title = {{Neural Screen Space Rendering of Direct Illumination}},
  author = {Suppan, Christian and Chalmers, Andrew and Zhao, Junhong and Doronin, Alex and Rhee, Taehyun},
  year = {2021},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-162-5},
  DOI = {10.2312/pg.20211385}
}
@inproceedings{10.2312:pg.20211386,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor = {Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard},
  title = {{Art-directing Appearance using an Environment Map Latent Space}},
  author = {Petikam, Lohit and Chalmers, Andrew and Anjyo, Ken and Rhee, Taehyun},
  year = {2021},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-162-5},
  DOI = {10.2312/pg.20211386}
}
@inproceedings{10.2312:pg.20211387,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor = {Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard},
  title = {{3D-CariNet: End-to-end 3D Caricature Generation from Natural Face Images with Differentiable Renderer}},
  author = {Huang, Meijia and Dai, Ju and Pan, Junjun and Bai, Junxuan and Qin, Hong},
  year = {2021},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-162-5},
  DOI = {10.2312/pg.20211387}
}
@inproceedings{10.2312:pg.20211388,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor = {Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard},
  title = {{SM-NET: Reconstructing 3D Structured Mesh Models from Single Real-World Image}},
  author = {Yu, Yue and Li, Ying and Zhang, Jing-Yu and Yang, Yue},
  year = {2021},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-162-5},
  DOI = {10.2312/pg.20211388}
}
@inproceedings{10.2312:pg.20211389,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor = {Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard},
  title = {{Cloud-Assisted Hybrid Rendering for Thin-Client Games and VR Applications}},
  author = {Tan, Yu Wei and Kim-Chan, Louiz and Halim, Anthony and Bhojan, Anand},
  year = {2021},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-162-5},
  DOI = {10.2312/pg.20211389}
}
@inproceedings{10.2312:pg.20211390,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor = {Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard},
  title = {{View-Dependent Impostors for Architectural Shape Grammars}},
  author = {Jia, Chao and Roth, Moritz and Kerbl, Bernhard and Wimmer, Michael},
  year = {2021},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-162-5},
  DOI = {10.2312/pg.20211390}
}
@inproceedings{10.2312:pg.20211391,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor = {Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard},
  title = {{Temporally Stable Content-Adaptive and Spatio-Temporal Shading Rate Assignment for Real-Time Applications}},
  author = {Stappen, Stefan and Unterguggenberger, Johannes and Kerbl, Bernhard and Wimmer, Michael},
  year = {2021},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-162-5},
  DOI = {10.2312/pg.20211391}
}
@inproceedings{10.2312:pg.20211393,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor = {Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard},
  title = {{SDALIE-GAN: Structure and Detail Aware GAN for Low-light Image Enhancement}},
  author = {Pang, Youxin and Yuan, Mengke and Chang, Yuchun and Yan, Dong-Ming},
  year = {2021},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-162-5},
  DOI = {10.2312/pg.20211393}
}
@inproceedings{10.2312:pg.20211392,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor = {Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard},
  title = {{Peripheral Vision in Simulated Driving: Comparing CAVE and Head-mounted Display}},
  author = {Tanoi, Tana and Dodgson, Neil A.},
  year = {2021},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-162-5},
  DOI = {10.2312/pg.20211392}
}
@inproceedings{10.2312:pg.20211394,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor = {Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard},
  title = {{User-centred Depth Estimation Benchmarking for VR Content Creation from Single Images}},
  author = {Dickson, Anthony and Knott, Alistair and Zollmann, Stefanie},
  year = {2021},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-162-5},
  DOI = {10.2312/pg.20211394}
}
@inproceedings{10.2312:pg.20211395,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor = {Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard},
  title = {{Volumetric Video Streaming Data Reduction Method Using Front-mesh 3D Data}},
  author = {Zhao, Xiaotian and Okuyama, Takafumi},
  year = {2021},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-162-5},
  DOI = {10.2312/pg.20211395}
}
@inproceedings{10.2312:pg.20211396,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor = {Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard},
  title = {{Constraint Synthesis for Parametric CAD}},
  author = {Mathur, Aman and Zufferey, Damien},
  year = {2021},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-162-5},
  DOI = {10.2312/pg.20211396}
}
@inproceedings{10.2312:pg.20211398,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor = {Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard},
  title = {{Real-time Content Projection onto a Tunnel from a Moving Subway Train}},
  author = {Kim, Jaedong and Eom, Haegwang and Kim, Jihwan and Kim, Younghui and Noh, Junyong},
  year = {2021},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-162-5},
  DOI = {10.2312/pg.20211398}
}
@inproceedings{10.2312:pg.20211397,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor = {Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard},
  title = {{Hierarchical Link and Code: Efficient Similarity Search for Billion-Scale Image Sets}},
  author = {Yang, Kaixiang and Wang, Hongya and Du, Ming and Wang, Zhizheng and Tan, Zongyuan and Xiao, Yingyuan},
  year = {2021},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-162-5},
  DOI = {10.2312/pg.20211397}
}
@inproceedings{10.2312:pg.20211399,
  booktitle = {Pacific Graphics Short Papers, Posters, and Work-in-Progress Papers},
  editor = {Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard},
  title = {{GANST: Gradient-aware Arbitrary Neural Style Transfer}},
  author = {Zhu, Haichao},
  year = {2021},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-162-5},
  DOI = {10.2312/pg.20211399}
}


Recent Submissions

  • Pacific Graphics 2021 - Short Papers, Posters, and Work-in-Progress Papers: Frontmatter
    (The Eurographics Association, 2021) Lee, Sung-Hee; Zollmann, Stefanie; Okabe, Makoto; Wünsche, Burkhard
  • Fast and Lightweight Path Guiding Algorithm on GPU
    (The Eurographics Association, 2021) Kim, Juhyeon; Kim, Young Min
    We propose a simple yet practical path guiding algorithm that runs on the GPU. Path guiding renders photo-realistic images by simulating the iterative bounces of rays, which are sampled from the radiance distribution. The radiance distribution is often learned by serially updating a hierarchical data structure to represent complex scene geometry, which is not easily implemented on the GPU. In contrast, we employ a regular data structure and allow fast updates by processing a large number of rays on the GPU. We further increase the efficiency of radiance learning by employing SARSA [SB18] from reinforcement learning. SARSA requires neither aggregating incident radiance from all directions nor storing all previous paths. The learned distribution is then sampled with an optimized rejection sampling that adapts to the current surface normal to reflect geometry finer than the grid resolution. All of the algorithms are implemented on the GPU in a megakernel architecture with NVIDIA OptiX [PBD*10]. Through numerous experiments on complex scenes, we demonstrate that our path guiding algorithm works efficiently on the GPU, drastically reducing the number of wasted paths.
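The SARSA-style radiance learning this abstract describes can be illustrated with a toy update on a regular grid. Everything below (grid shape, direction binning, learning rate, function name) is a hypothetical sketch for intuition, not the authors' GPU implementation: the key property shown is that each update touches only the single sampled next state, with no aggregation over all incident directions and no stored path history.

```python
import numpy as np

def sarsa_radiance_update(L, cell, d, next_cell, next_d, emitted, alpha=0.2):
    """One SARSA-style update of cached incident radiance.

    The target uses only the single sampled next cell/direction, unlike an
    expected-value update that would integrate over all incident directions.
    """
    target = emitted + L[next_cell, next_d]   # radiance carried by the sampled bounce
    L[cell, d] = (1.0 - alpha) * L[cell, d] + alpha * target
    return L

# Toy scene: 4 grid cells x 8 quantized directions, one emissive hit.
L = np.zeros((4, 8))
L = sarsa_radiance_update(L, cell=0, d=3, next_cell=1, next_d=5, emitted=2.0)
print(L[0, 3])  # 0.4 after one update with alpha = 0.2
```

Because each ray performs such an independent, local update, many rays can apply it in parallel, which is what makes the regular-grid variant GPU-friendly.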
  • CSLF: Cube Surface Light Field and Its Sampling, Compression, Real-Time Rendering
    (The Eurographics Association, 2021) Ai, Xiaofei; Wang, Yigang; Kou, Simin
    Light fields are gaining both research and commercial interest, since they have the potential to produce view-dependent, photorealistic effects for virtual and augmented reality. In this paper, we further explore the light field and present a novel parameterization that permits 1) effective sampling of the light field of an object with unknown geometry, 2) efficient compression, and 3) real-time rendering from arbitrary viewpoints. The key element of our parameterization is that we use the intersections of light rays with a general cube surface to parameterize the four-dimensional light field, constructing the cube surface light field (CSLF). We address the large data volume of the CSLF by uniformly decimating the viewpoint space to form a set of key views, which are then converted into a pseudo video sequence and compressed with a High Efficiency Video Coding encoder. To render the CSLF, we employ a ray casting approach and draw a polygonal mesh, enabling real-time generation of arbitrary views from outside the cube surface. We build CSLF datasets and extensively evaluate our parameterization in terms of sampling, compression, and rendering. Results show that the cube surface parameterization achieves all three characteristics simultaneously, indicating its potential for practical virtual and augmented reality.
  • Real-Time Antialiased Area Lighting Using Multi-Scale Linearly Transformed Cosines
    (The Eurographics Association, 2021) Tao, Chengzhi; Guo, Jie; Gong, Chen; Wang, Beibei; Guo, Yanwen
    We present an anti-aliased real-time rendering method for local area lights based on Linearly Transformed Cosines (LTCs). It significantly reduces the aliasing artifacts in highlights reflected from area lights that arise when meso-scale roughness (induced by normal maps) is ignored. The proposed method separates surface roughness into different scales and represents them all by LTCs. Spherical convolution is then conducted between them to derive the overall normal distribution and the final Bidirectional Reflectance Distribution Function (BRDF). The overall surface roughness is further approximated by a polynomial function to guarantee high efficiency and avoid additional storage consumption. Experimental results show that our approach produces convincing multi-scale roughness across a range of viewing distances for local area lighting.
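For reference, the linearly transformed cosine representation that this method builds on (Heitz et al., 2016; the standard construction, not this short paper's multi-scale extension) distorts an original clamped-cosine distribution $D_o$ by a $3\times3$ matrix $M$ mapping $\omega_o \mapsto \omega = M\omega_o / \lVert M\omega_o\rVert$, giving the transformed distribution via the change-of-variables Jacobian:

```latex
D(\omega) = D_o(\omega_o)\,\frac{\partial \omega_o}{\partial \omega},
\qquad
\omega_o = \frac{M^{-1}\omega}{\lVert M^{-1}\omega\rVert},
\qquad
\frac{\partial \omega_o}{\partial \omega}
  = \frac{\lvert \det M^{-1}\rvert}{\lVert M^{-1}\omega\rVert^{3}}.
```

Because integrals of $D_o$ over a polygon are known in closed form, integrals of $D(\omega)$ over an area light reduce to integrating the cosine over the back-transformed polygon, which is what makes real-time area lighting tractable.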
  • Maximum-Clearance Planar Motion Planning Based on Recent Developments in Computing Minkowski Sums and Voronoi Diagrams
    (The Eurographics Association, 2021) Jung, Mingyu; Kim, Myung-Soo
    We present a maximum-clearance motion planning algorithm for planar geometric models with three degrees of freedom (translation and rotation). This work is based on recent developments in real-time algorithms for computing the Minkowski sums and Voronoi diagrams of planar geometric models bounded by G1-continuous sequences of circular arcs. Compared with their counterparts using polygons with no G1-continuity at vertices, the circle-based approach greatly simplifies the Voronoi structure of the collision-free space for motion planning in a plane with three degrees of freedom. We demonstrate the effectiveness of the proposed approach on test sets of maximum-clearance motion planning through narrow passages in a plane.
  • Human Motion Synthesis and Control via Contextual Manifold Embedding
    (The Eurographics Association, 2021) Zeng, Rui; Dai, Ju; Bai, Junxuan; Pan, Junjun; Qin, Hong
    Modeling motion dynamics for precise and rapid control by deterministic data-driven models is challenging due to the natural randomness of human motion. To address this, we propose a novel framework for continuous motion control based on probabilistic latent variable models. The control is implemented by recurrently querying historical and target motion states rather than exact motion data. Our model takes a conditional encoder-decoder form with two stages. First, we use a Gaussian Process Latent Variable Model (GPLVM) to project motion poses onto a compact latent manifold. Motion states, such as walking phase and forward velocity, can be clearly recognized by analysis on the manifold. Second, taking the manifold as a prior, a Recurrent Neural Network (RNN) encoder makes temporal latent predictions from the previous and control states. An attention module then morphs the prediction by measuring latent similarities to the control and predicted states, thus dynamically preserving contextual consistency. Finally, the GP decoder reconstructs motion states back into motion frames. Experiments on walking datasets show that our model maintains motion states autoregressively while performing rapid and smooth transitions under control.
  • Neural Proxy: Empowering Neural Volume Rendering for Animation
    (The Eurographics Association, 2021) Sin, Zackary P. T.; Ng, Peter H. F.; Leong, Hong Va
    Achieving photo-realistic results is an enticing proposition for the computer graphics community. Great progress has been made in the past decades, but the cost of human expertise has also grown. Neural rendering is a promising candidate for reducing this cost, as it relies on data to construct the scene representation. However, one key component for adapting neural rendering to practical use is currently missing: animation. There is little discussion of how to enable neural rendering to synthesize frames for unseen animations. To fill this research gap, we propose the neural proxy, a novel neural rendering model that uses animatable proxies to represent photo-realistic targets. Via a tactful combination of components from neural volume rendering and neural textures, our model is able to render unseen animations without any temporal learning. Experimental results show that the proposed model significantly outperforms current neural rendering methods.
  • Neural Screen Space Rendering of Direct Illumination
    (The Eurographics Association, 2021) Suppan, Christian; Chalmers, Andrew; Zhao, Junhong; Doronin, Alex; Rhee, Taehyun
    Neural rendering is a class of methods that use deep learning to produce novel images of scenes from more limited information than traditional rendering methods require. This is useful for information-scarce applications like mixed reality or semantic photo synthesis, but comes at the cost of control over the final appearance. We introduce the Neural Direct-illumination Renderer (NDR), a neural screen space renderer capable of rendering direct-illumination images of any geometry with opaque materials under a distant illuminant. The NDR uses screen space buffers describing material, geometry, and illumination as inputs to provide direct control over the output. We introduce the use of intrinsic image decomposition to allow a Convolutional Neural Network (CNN) to learn a mapping from a large number of pixel buffers to rendered images. The NDR predicts shading maps, which are subsequently combined with albedo maps to create a rendered image. We show that the NDR produces plausible images that can be edited by modifying the input maps, and marginally outperforms the state of the art while also providing more functionality.
  • Art-directing Appearance using an Environment Map Latent Space
    (The Eurographics Association, 2021) Petikam, Lohit; Chalmers, Andrew; Anjyo, Ken; Rhee, Taehyun
    In look development, environment maps (EMs) are used to verify 3D appearance under varied lighting (e.g., overcast, sunny, and indoor). Artists can only assign one fixed material, making it laborious to edit appearance uniquely for all EMs. Artists can art-direct material and lighting in film post-production. However, this is impossible in dynamic real-time games and live augmented reality (AR), where environment lighting is unpredictable. We present a new workflow to customize appearance variation across a wide range of EM lighting for live applications. Appearance edits can be predefined and then automatically adapted to environment lighting changes. We achieve this by learning a novel 2D latent space of varied EM lighting. The latent space lets artists browse EMs in a semantically meaningful 2D view. For different EMs, artists can paint different material and lighting parameter values directly on the latent space. We robustly encode new EMs into the same space for automatic look-up of the desired appearance. This solves a new problem of preserving art direction in live applications, without any artist intervention.
  • 3D-CariNet: End-to-end 3D Caricature Generation from Natural Face Images with Differentiable Renderer
    (The Eurographics Association, 2021) Huang, Meijia; Dai, Ju; Pan, Junjun; Bai, Junxuan; Qin, Hong
    Caricatures are artistic representations of human faces that express satire and humor. Caricature generation from human faces is a research hotspot in computer graphics. Previous work mainly focuses on generating 2D caricatures from face photos or reconstructing 3D caricatures from caricature images. In this paper, we propose a novel end-to-end method that directly generates personalized 3D caricatures from a single natural face image. It can create not only exaggerated geometric shapes but also heterogeneous texture styles. First, we construct a synthetic dataset containing matched data pairs composed of face photos, caricature images, and 3D caricatures. Then, we design a graph convolutional autoencoder to build a non-linear colored mesh model that learns the shape and texture of 3D caricatures. To make the network end-to-end trainable, we incorporate a differentiable renderer to render the 3D caricatures back into caricature images. Experiments demonstrate that our method achieves 3D caricature generation with various texture styles from face images while maintaining personality characteristics.
  • SM-NET: Reconstructing 3D Structured Mesh Models from Single Real-World Image
    (The Eurographics Association, 2021) Yu, Yue; Li, Ying; Zhang, Jing-Yu; Yang, Yue
    Image-based 3D structured model reconstruction enables a network to learn the information missing between dimensions and to understand the structure of the 3D model. In this paper, SM-NET is proposed to reconstruct 3D structured mesh models from a single real-world image. First, it treats the model as a sequence of parts and designs a shape autoencoder to autoencode the 3D model. Second, the network extracts 2.5D information from the real-world image and maps it to the latent space of the shape autoencoder. Finally, the two are connected to complete the reconstruction task. In addition, a more suitable 3D structured model dataset is built to improve reconstruction quality. Experimental results show that we achieve the reconstruction of 3D structured mesh models from a single real-world image, outperforming other approaches.
  • Cloud-Assisted Hybrid Rendering for Thin-Client Games and VR Applications
    (The Eurographics Association, 2021) Tan, Yu Wei; Kim-Chan, Louiz; Halim, Anthony; Bhojan, Anand
    We introduce a novel distributed rendering approach to generate high-quality graphics in thin-client games and VR applications. Many mobile devices have too little computational power to achieve real-time ray tracing. Hence, hardware-accelerated cloud servers can perform the ray tracing instead and stream their output to clients, as in remote rendering. Applying distributed hybrid rendering, we leverage the computational capabilities of both the thin client and the powerful server by performing rasterization locally while offloading ray tracing to the server. With advancements in 5G technology, the server and client can communicate effectively over the network and work together to produce high-quality output while maintaining interactive frame rates. Our approach achieves better visuals than local rendering and faster performance than remote rendering.
  • View-Dependent Impostors for Architectural Shape Grammars
    (The Eurographics Association, 2021) Jia, Chao; Roth, Moritz; Kerbl, Bernhard; Wimmer, Michael
    Procedural generation has become a key component in satisfying a growing demand for ever-larger, highly detailed geometry in realistic, open-world games and simulations. In this paper, we present our work towards a new level-of-detail mechanism for procedural geometry shape grammars. Our approach automatically identifies and adds suitable surrogate rules to a shape grammar's derivation tree. Opportunities for surrogates are detected in a dedicated pre-processing stage. Where suitable, textured impostors are then used for rendering based on the current viewpoint at runtime. Our proposed methods generate simplified geometry with visual quality superior to the state of the art at roughly the same rendering performance.
  • Temporally Stable Content-Adaptive and Spatio-Temporal Shading Rate Assignment for Real-Time Applications
    (The Eurographics Association, 2021) Stappen, Stefan; Unterguggenberger, Johannes; Kerbl, Bernhard; Wimmer, Michael
    We propose two novel methods to improve the efficiency and quality of real-time rendering applications: texel-differential-based content-adaptive shading (TDCAS) and spatio-temporally filtered adaptive shading (STeFAS). Utilizing Variable Rate Shading (VRS), a hardware feature introduced with NVIDIA's Turing micro-architecture, together with properties derived during rendering or Temporal Anti-Aliasing (TAA), our techniques adapt the shading resolution to improve the performance and quality of real-time applications. VRS enables different shading resolutions for different regions of the screen within a single render pass. In contrast to other techniques, TDCAS and STeFAS incur very little overhead for computing the shading rate. STeFAS enables up to 4× higher rendering resolutions at similar frame rates, or a performance increase of 4× at the same resolution.
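Content-adaptive shading-rate assignment of the kind TDCAS describes can be caricatured as thresholding a per-tile estimate of how fast shading content changes across the screen: flat regions get a coarser VRS rate, detailed regions keep full-rate shading. The thresholds and the rate set below are illustrative assumptions, not the paper's tuned heuristic.

```python
# Map a screen-space content-change estimate (e.g., a texel-differential
# magnitude per VRS tile) to a coarse/fine shading rate. Hypothetical
# thresholds; real VRS hardware exposes rates such as 1x1, 2x2, and 4x4.
VRS_RATES = [(1, 1), (2, 2), (4, 4)]  # full, quarter, sixteenth shading

def shading_rate(texel_diff, t_fine=0.05, t_coarse=0.01):
    """Pick a coarser rate where neighboring texels change little."""
    if texel_diff >= t_fine:      # high-frequency content: shade every pixel
        return VRS_RATES[0]
    if texel_diff >= t_coarse:    # moderate detail: 2x2 coarse shading
        return VRS_RATES[1]
    return VRS_RATES[2]           # flat region: 4x4 coarse shading

print(shading_rate(0.2))    # (1, 1)
print(shading_rate(0.003))  # (4, 4)
```

The spatio-temporal variant would additionally filter this estimate over previous frames (e.g., via the TAA history) so the chosen rate stays temporally stable.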
  • SDALIE-GAN: Structure and Detail Aware GAN for Low-light Image Enhancement
    (The Eurographics Association, 2021) Pang, Youxin; Yuan, Mengke; Chang, Yuchun; Yan, Dong-Ming
    We present a GAN-based network architecture for low-light image enhancement, called the Structure and Detail Aware Low-light Image Enhancement GAN (SDALIE-GAN), which is trained with unpaired low-/normal-light images. Specifically, a complementary Structure Aware Generator (SAG) and Detail Aware Generator (DAG) are designed to generate the enhanced low-light image. In addition, intermediate features from the SAG and DAG are integrated through a guided-map-supervised feature attention fusion module, and the generated samples are regularized with an appended intensity-adjusting module. We demonstrate the advantages of the proposed approach by comparing it with state-of-the-art low-light image enhancement methods.
  • Peripheral Vision in Simulated Driving: Comparing CAVE and Head-mounted Display
    (The Eurographics Association, 2021) Tanoi, Tana; Dodgson, Neil A.
    Peripheral vision is widely thought to be important but is not provided by the majority of head-mounted displays (HMDs). We investigate whether peripheral vision is important in a simulated driving task. Our hypothesis is that subjects will complete the task more quickly if they can use their peripheral vision. We compared subject performance in a CAVE environment with a 270° field of view (and thus automatic peripheral vision) and in an HMD with a 110° field of view (no peripheral vision, but the ability to turn the head). Our results show almost no statistically significant differences between the two conditions. This contrasts with the opinions of our subjects: our expert users, in early tests, commented that peripheral vision helped in the task, and the majority of our naïve subjects believed that the lack of peripheral vision in the HMD hindered them.
  • Item
    User-centred Depth Estimation Benchmarking for VR Content Creation from Single Images
    (The Eurographics Association, 2021) Dickson, Anthony; Knott, Alistair; Zollmann, Stefanie; Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard
    The capture and creation of 3D content from a device equipped with just a single RGB camera has a wide range of applications, ranging from 3D photographs and panoramas to 3D video. Many of these methods rely on depth estimation models, mainly neural networks, to provide the necessary 3D data. However, the metrics used to evaluate these models can be difficult to interpret and to relate to the quality of the 3D/VR content derived from them. In this work, we explore the relationship between the widely used depth estimation metrics, image similarity metrics applied to synthesised novel viewpoints, and user perception of the quality and similarity of these novel viewpoints. Our results indicate that the standard metrics are indeed a good indicator of 3D quality, and that they correlate with human judgements and with other metrics designed to follow human judgements.
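    The "widely used depth estimation metrics" referred to here are, in the common benchmarking convention, absolute relative error, RMSE, and the δ-threshold accuracies. A minimal sketch of these standard metrics (not the paper's evaluation code; function and key names are illustrative):

    ```python
    import numpy as np

    def depth_metrics(pred, gt):
        """Standard monocular depth metrics: AbsRel, RMSE, and delta accuracies.

        pred, gt: arrays of positive depth values at valid pixels.
        """
        pred, gt = np.asarray(pred, float), np.asarray(gt, float)
        abs_rel = np.mean(np.abs(pred - gt) / gt)           # mean absolute relative error
        rmse = np.sqrt(np.mean((pred - gt) ** 2))           # root-mean-square error
        ratio = np.maximum(pred / gt, gt / pred)            # per-pixel max ratio
        deltas = {f"d<1.25^{i}": float(np.mean(ratio < 1.25 ** i)) for i in (1, 2, 3)}
        return {"abs_rel": float(abs_rel), "rmse": float(rmse), **deltas}
    ```

    The δ accuracies count the fraction of pixels whose predicted/true depth ratio stays below a threshold, which is often easier to relate to perceived quality than raw error magnitudes.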
  • Item
    Volumetric Video Streaming Data Reduction Method Using Front-mesh 3D Data
    (The Eurographics Association, 2021) Zhao, Xiaotian; Okuyama, Takafumi; Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard
    Volumetric video content is attracting much attention across various industries for its six-degrees-of-freedom (6DoF) viewing experience. However, in terms of streaming, volumetric video still presents challenges such as high data volume and bandwidth consumption, which place high stress on the network. To address this issue, we propose a method using front-mesh 3D data to reduce the data size without noticeably affecting the visual quality from a user's perspective. The proposed method also reduces decoding and import time on the client side, which enables faster playback of 3D data. We evaluated our method in terms of data reduction and computational complexity, and conducted a qualitative analysis by comparing rendering results with reference data at different diagonal angles. Our method successfully reduces data volume and computational complexity with minimal loss of visual quality.
  • Item
    Constraint Synthesis for Parametric CAD
    (The Eurographics Association, 2021) Mathur, Aman; Zufferey, Damien; Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard
    Parametric CAD, in conjunction with 3D printing, is democratizing design and production pipelines. End-users can easily change the parameters of publicly available designs and 3D-print the customized objects. In research and industry, parametric designs are used to find optimal or unique final objects. Unfortunately, for most designs, many combinations of parameter values are invalid, and restricting the parameter space to only the valid configurations is a difficult problem; most publicly available designs do not contain this information. Using ideas from program analysis, we synthesize constraints on the parameters of parametric designs. Some constraints are synthesized statically, by exploiting implicit assumptions of the design process. Several others are inferred by evaluating the design on many different samples, and then constructing and solving hypotheses. Our approach is effective at finding constraints on parameter values for a wide variety of parametric designs, with a very small runtime cost on the order of seconds.
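    The sampling-based inference described above can be illustrated in miniature: draw parameter samples, test each against a validity check (standing in for evaluating the CAD design), and hypothesize an interval constraint from the valid samples. This is a generic sketch, not the paper's algorithm; all names are illustrative:

    ```python
    import random

    def infer_interval_constraint(is_valid, lo, hi, samples=2000, seed=0):
        """Hypothesize an approximate valid interval for one design parameter
        by sampling the range [lo, hi] and keeping values that pass is_valid."""
        rng = random.Random(seed)
        valid = [x for x in (rng.uniform(lo, hi) for _ in range(samples)) if is_valid(x)]
        if not valid:
            return None  # no hypothesis supported by the samples
        return min(valid), max(valid)

    # Example: a wall thickness that must stay between 0.5 and 3.0 mm.
    interval = infer_interval_constraint(lambda t: 0.5 < t < 3.0, 0.0, 10.0)
    ```

    Real constraints can of course be multi-parameter and non-interval; the point is only that repeated evaluation lets one construct and then test hypotheses about the valid region.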
  • Item
    Real-time Content Projection onto a Tunnel from a Moving Subway Train
    (The Eurographics Association, 2021) Kim, Jaedong; Eom, Haegwang; Kim, Jihwan; Kim, Younghui; Noh, Junyong; Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard
    In this study, we present the first actual working system that can project content onto a tunnel wall from a moving subway train, so that passengers can enjoy the display of digital content through a train window. To effectively estimate the position of the train in a tunnel, we propose counting sleepers, which are installed at regular intervals along the railway, using a distance sensor. The tunnel profile is constructed from point clouds captured by a depth camera installed next to the projector. The tunnel profile is used to identify projectable sections that do not contain too much interference from possible occluders, and to retrieve the depth at a specific location so that properly warped content can be projected for passengers to view through the window while the train is moving. We show that the proposed system can operate on an actual train.
  • Item
    Hierarchical Link and Code: Efficient Similarity Search for Billion-Scale Image Sets
    (The Eurographics Association, 2021) Yang, Kaixiang; Wang, Hongya; Du, Ming; Wang, Zhizheng; Tan, Zongyuan; Xiao, Yingyuan; Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard
    Similarity search is an indispensable component in many computer vision applications. To index billions of images on a single commodity server, Douze et al. introduced L&C, which operates at points of 64-128 bytes per vector. While the idea is inspiring, we observe that L&C still suffers from the accuracy saturation problem it aims to solve. To this end, we propose a simple yet effective two-layer graph index structure, together with dual residual encoding, to attain higher accuracy. In particular, we partition vectors into multiple clusters and build the top-layer graph from the corresponding centroids. For each cluster, a subgraph is created with compact codes of the first-level vector residuals. Such an index structure provides better graph search precision while saving quite a few bytes for compression. We employ second-level residual quantization to re-rank the candidates obtained through graph traversal, which is more efficient than the regression-from-neighbors scheme adopted by L&C. Comprehensive experiments show that our proposal obtains over 30% higher recall@1 than the state of the art, and achieves up to 7.7x and 6.1x speedup over L&C on Deep1B and Sift1B, respectively.
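    The dual (two-level) residual encoding idea is a standard quantization pattern: quantize a vector against first-level centroids, then quantize the remaining residual against a second codebook, so the reconstruction error shrinks at each level. A minimal sketch of the general technique (not the paper's index; centroid sets are illustrative):

    ```python
    import numpy as np

    def encode_two_level(x, c1, c2):
        """Two-level residual encoding: pick the nearest first-level centroid,
        then quantize the residual with the second-level codebook."""
        i1 = int(np.argmin(np.linalg.norm(c1 - x, axis=1)))
        residual = x - c1[i1]
        i2 = int(np.argmin(np.linalg.norm(c2 - residual, axis=1)))
        return i1, i2  # two small codes replace the full vector

    def decode_two_level(i1, i2, c1, c2):
        """Approximate reconstruction: centroid plus quantized residual."""
        return c1[i1] + c2[i2]
    ```

    In a real index the codebooks are learned (e.g. by k-means), and the second-level codes are used here only at re-ranking time, after graph traversal has produced candidates.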
  • Item
    GANST: Gradient-aware Arbitrary Neural Style Transfer
    (The Eurographics Association, 2021) Zhu, Haichao; Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard
    Artistic style transfer synthesizes a stylized image with content from a target image and style from an art image. The latest neural style transfer methods leverage texture distributions as style information and then apply the style to content images. These methods are promising; however, by disregarding the gradient information of the input images, they inevitably introduce semantic content loss into the synthesized results. To tackle this problem, we propose a novel gradient-aware technique, called GANST. First, GANST decomposes input images into intermediate steerable representations that capture gradient information at multiple scales, based on a Steerable Pyramid Neural Network (SPNN). With the extracted information, GANST preserves semantic content by integrating a novel loss representation of local gradients into the AdaIN architecture, which we call the Steerable Style Transfer Network (SSTN). Experimental results on various images demonstrate that our proposed GANST outperforms state-of-the-art methods in producing results with faithfully reflected style and preserved content detail.
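    The AdaIN architecture mentioned above is built around Adaptive Instance Normalization: the content features are normalized per channel and then re-scaled to match the channel-wise statistics of the style features. A minimal NumPy sketch of that core operation (the SPNN/SSTN additions are the paper's contribution and are not reproduced here):

    ```python
    import numpy as np

    def adain(content, style, eps=1e-5):
        """Adaptive Instance Normalization for feature maps of shape (C, H, W):
        align each content channel's mean/std to the style channel's mean/std."""
        c_mean = content.mean(axis=(1, 2), keepdims=True)
        c_std = content.std(axis=(1, 2), keepdims=True) + eps  # avoid divide-by-zero
        s_mean = style.mean(axis=(1, 2), keepdims=True)
        s_std = style.std(axis=(1, 2), keepdims=True)
        return s_std * (content - c_mean) / c_std + s_mean
    ```

    After this statistic transfer, a decoder maps the modified features back to image space; gradient-aware losses such as the one proposed here constrain that decoding to keep the content's local structure.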