Volume 43 (2024)
Browsing Volume 43 (2024) by Issue Date
Now showing 1 - 20 of 225
Item: Real-time Neural Rendering of Dynamic Light Fields
(The Eurographics Association and John Wiley & Sons Ltd., 2024) Coomans, Arno; Dominici, Edoardo Alberto; Döring, Christian; Mueller, Joerg H.; Hladky, Jozef; Steinberger, Markus; Bermano, Amit H.; Kalogerakis, Evangelos
Synthesising high-quality views of dynamic scenes via path tracing is prohibitively expensive. Although caching offline-quality global illumination in neural networks alleviates this issue, existing neural view-synthesis methods are mainly limited to static scenes, have low inference performance, or do not integrate well with existing rendering paradigms. We propose a novel neural method that captures a dynamic light field, renders at real-time frame rates at 1920x1080 resolution, and integrates seamlessly with Monte Carlo ray tracing frameworks. We demonstrate how spatial, temporal, and a novel surface-space encoding are each effective at capturing different kinds of spatio-temporal signals. Together with a compact fully-fused neural network and architectural improvements, we achieve a twenty-fold increase in network inference speed compared to related methods at equal or better quality. Our approach is suitable for providing offline-quality real-time rendering in a variety of scenarios, such as free-viewpoint video, interactive multi-view rendering, or streaming rendering. Finally, our work can be integrated into other rendering paradigms, e.g., providing a dynamic background for interactive scenarios where the foreground is rendered with traditional methods.

Item: Stylize My Wrinkles: Bridging the Gap from Simulation to Reality
(The Eurographics Association and John Wiley & Sons Ltd., 2024) Weiss, Sebastian; Stanhope, Jackson; Chandran, Prashanth; Zoss, Gaspard; Bradley, Derek; Bermano, Amit H.; Kalogerakis, Evangelos
Modeling realistic human skin with pores and wrinkles down to the milli- and micrometer resolution is a challenging task.
Prior work showed that such micro geometry can be efficiently generated through simulation methods, or in specialized cases via 3D scanning of real skin. Simulation methods allow the wrinkles on the face to be highly customized, but can lead to a synthetic look. Scanning methods can produce a more organic look for the micro details; however, they are only applicable to small skin patches due to the required image resolution. In this work we aim to bridge the gap between synthetic simulation and real skin scanning by proposing a method that can be applied to large skin regions (e.g., an entire face) with the controllability of simulation and the organic look of real micro details. Our method is based on style transfer at its core, where we use scanned displacement maps of real skin patches as style images and displacement maps from an artist-friendly simulation method as content images. We build a library of displacement maps as style images by employing a simplified scanning setup that can capture high-resolution patches of real skin. To create the content component for the style transfer and to facilitate parameter tuning for the simulation, we design a library of preset parameter values depicting different skin types, and present a new method to fit the simulation parameters to scanned skin patches. This allows fully automatic parameter generation, interpolation and stylization across entire faces.
We evaluate our method by generating realistic skin micro details for various subjects of different ages and genders, and demonstrate that our approach achieves a more organic and natural look than simulation alone.

Item: CharacterMixer: Rig-Aware Interpolation of 3D Characters
(The Eurographics Association and John Wiley & Sons Ltd., 2024) Zhan, Xiao; Fu, Rao; Ritchie, Daniel; Bermano, Amit H.; Kalogerakis, Evangelos
We present CharacterMixer, a system for blending two rigged 3D characters with different mesh and skeleton topologies while maintaining a rig throughout interpolation. CharacterMixer also enables interpolation during motion for such characters, a novel feature. Interpolation is an important shape editing operation, but prior methods have limitations when applied to rigged characters: they either ignore the rig (making interpolated characters no longer posable) or use a fixed rig and mesh topology. To handle different mesh topologies, CharacterMixer uses a signed distance field (SDF) representation of character shapes, with one SDF per bone. To handle different skeleton topologies, it computes a hierarchical correspondence between source and target character skeletons and interpolates the SDFs of corresponding bones. This correspondence also allows the creation of a single ''unified skeleton'' for posing and animating interpolated characters. We show that CharacterMixer produces qualitatively better interpolation results than two state-of-the-art methods while preserving a rig throughout interpolation. Project page: https://seanxzhan.github.io/projects/CharacterMixer

Item: Neural Histogram-Based Glint Rendering of Surfaces With Spatially Varying Roughness
(The Eurographics Association and John Wiley & Sons Ltd., 2024) Shah, Ishaan; Gamboa, Luis E.; Gruson, Adrien; Narayanan, P. J.; Garces, Elena; Haines, Eric
The complex, glinty appearance of detailed normal-mapped surfaces at different scales requires expensive per-pixel Normal Distribution Function (NDF) computations. Moreover, large light sources further compound this integration and increase the noise in the Monte Carlo renderer. Specialized rendering techniques that explicitly express the underlying normal distribution have been developed to improve performance for glinty surfaces controlled by a fixed material roughness. We present a new method that supports spatially varying roughness, based on a neural histogram that computes per-pixel NDFs with arbitrary positions and sizes. Our representation is both memory and compute efficient. Additionally, we fully integrate direct illumination for all light directions in constant time. Our approach decouples roughness and normal distribution, allowing live editing of the spatially varying roughness of complex normal-mapped objects. We demonstrate that our approach improves on previous work by achieving smaller footprints while offering GPU-friendly computation and a compact representation.

Item: Cut-Cell Microstructures for Two-scale Structural Optimization
(The Eurographics Association and John Wiley & Sons Ltd., 2024) Tozoni, Davi Colli; Huang, Zizhou; Panozzo, Daniele; Zorin, Denis; Hu, Ruizhen; Lefebvre, Sylvain
Two-scale topology optimization, combined with the design of microstructure families spanning a broad range of effective material parameters, is widely used in fabrication applications to achieve a target deformation behavior for a variety of objects. The main idea of this approach is to optimize the distribution of material properties in the object, partitioned into relatively coarse cells, and then replace each cell with microstructure geometry that mimics these material properties. In this paper, we focus on adapting this approach to complex shapes in situations when preserving the shape's surface is essential.
Our approach extends any regular (i.e., defined on a regular lattice grid) microstructure family to complex shapes by enriching it with tiles adapted to the geometry of the cut-cell. We propose a fully automated and robust pipeline based on this approach, and we show that the performance of the regular microstructure family is only minimally affected by our extension, while allowing its use on 2D and 3D shapes of high complexity.

Item: Strokes2Surface: Recovering Curve Networks From 4D Architectural Design Sketches
(The Eurographics Association and John Wiley & Sons Ltd., 2024) Rasoulzadeh, Shervin; Wimmer, Michael; Stauss, Philipp; Kovacic, Iva; Bermano, Amit H.; Kalogerakis, Evangelos
We present Strokes2Surface, an offline geometry reconstruction pipeline that recovers well-connected curve networks from imprecise 4D sketches to bridge the concept design and digital modeling stages in architectural design. The input to our pipeline consists of 3D strokes' polyline vertices and their timestamps as the 4th dimension, along with additional metadata recorded throughout sketching. Inspired by architectural sketching practices, our pipeline combines a classifier and two clustering models to achieve its goal. First, using a set of hand-engineered features extracted from the sketch, the classifier distinguishes between strokes depicting boundaries (Shape strokes) and strokes depicting enclosed areas (Scribble strokes). Next, the two clustering models parse the strokes of each type into distinct groups, each representing an individual edge or face of the intended architectural object. Curve networks are then formed through topology recovery of the consolidated Shape clusters and surfaced using the Scribble clusters to guide cycle discovery.
Our evaluation is threefold: we confirm the usability of the Strokes2Surface pipeline in architectural design use cases via a user study, we validate our choice of features via statistical analysis and ablation studies on our collected dataset, and we compare our outputs against a range of reconstructions computed using alternative methods.

Item: PossibleImpossibles: Exploratory Procedural Design of Impossible Structures
(The Eurographics Association and John Wiley & Sons Ltd., 2024) Li, Yuanbo; Ma, Tianyi; Aljumayaat, Zaineb; Ritchie, Daniel; Bermano, Amit H.; Kalogerakis, Evangelos
We present a method for generating structures in three-dimensional space that appear to be impossible when viewed from specific perspectives. Previous approaches focus on helping users edit specific structures and require users to understand the structural positioning that causes the impossibility. In contrast, our system is designed to help users without prior knowledge explore a wide range of potentially impossible structures. The essence of our method lies in features we call visual bridges, which confuse viewers regarding the depth of the resulting structure. We use these features as starting points and employ procedural modeling to systematically generate the result. We propose scoring functions for enforcing desirable spatial arrangements of the result and use Sequential Monte Carlo to sample outputs that score well under these functions. We also present a proof-of-concept user interface and demonstrate various results generated using our system.

Item: DivaTrack: Diverse Bodies and Motions from Acceleration-Enhanced 3-Point Trackers
(The Eurographics Association and John Wiley & Sons Ltd., 2024) Yang, Dongseok; Kang, Jiho; Ma, Lingni; Greer, Joseph; Ye, Yuting; Lee, Sung-Hee; Bermano, Amit H.; Kalogerakis, Evangelos
Full-body avatar presence is important for immersive social and environmental interactions in digital reality.
However, current devices provide only three six-degree-of-freedom (DOF) poses, from the headset and two controllers (i.e., three-point trackers). Because the problem is highly under-constrained, inferring full-body pose from these inputs is challenging, especially when supporting the full range of body proportions and use cases represented by the general population. In this paper, we propose a deep learning framework, DivaTrack, which outperforms existing methods when applied to diverse body sizes and activities. We augment the sparse three-point inputs with linear accelerations from inertial measurement units (IMUs) to improve foot contact prediction. We then condition the otherwise ambiguous lower-body pose on the predictions of foot contact and upper-body pose in a two-stage model. We further stabilize the inferred full-body pose across a wide range of configurations by learning to blend predictions computed in two reference frames, each designed for different types of motions. We demonstrate the effectiveness of our design on a large dataset that captures 22 subjects performing locomotion that is challenging for three-point tracking, including lunges, hula-hooping, and sitting. As shown in a live demo using a Meta VR headset and Xsens IMUs, our method runs in real time while accurately tracking a user's motion across a diverse set of movements.

Item: Cinematographic Camera Diffusion Model
(The Eurographics Association and John Wiley & Sons Ltd., 2024) Jiang, Hongda; Wang, Xi; Christie, Marc; Liu, Libin; Chen, Baoquan; Bermano, Amit H.; Kalogerakis, Evangelos
Designing effective camera trajectories in virtual 3D environments is a challenging task even for experienced animators.
Despite an elaborate film grammar, forged through years of experience, that enables camera motions to be specified through cinematographic properties (framing, shot sizes, angles, motions), there are endless possibilities in deciding how to place and move cameras with characters, and dealing with these possibilities is part of the complexity of the problem. While numerous techniques have been proposed in the literature (optimization-based solving, encoding of empirical rules, learning from real examples, ...), the results either lack variety or ease of control. In this paper, we propose a cinematographic camera diffusion model that uses a transformer-based architecture to handle temporality and exploits the stochasticity of diffusion models to generate diverse, high-quality trajectories conditioned on high-level textual descriptions. We extend the work by integrating keyframing constraints and the ability to blend naturally between motions using latent interpolation, augmenting the designers' degree of control. We demonstrate the strengths of this text-to-camera-motion approach through qualitative and quantitative experiments and gather feedback from professional artists. The code and data are available at https://github.com/jianghd1996/Camera-control

Item: Enhancing Image Quality Prediction with Self-supervised Visual Masking
(The Eurographics Association and John Wiley & Sons Ltd., 2024) Çogalan, Ugur; Bemana, Mojtaba; Seidel, Hans-Peter; Myszkowski, Karol; Bermano, Amit H.; Kalogerakis, Evangelos
Full-reference image quality metrics (FR-IQMs) aim to measure the visual differences between a pair of reference and distorted images, with the goal of accurately predicting human judgments. However, existing FR-IQMs, including traditional ones like PSNR and SSIM and even perceptual ones such as HDR-VDP, LPIPS, and DISTS, still fall short in capturing the complexities and nuances of human perception.
In this work, rather than devising a novel IQM model, we seek to improve the perceptual quality of existing FR-IQM methods. We achieve this by considering visual masking, an important characteristic of the human visual system that changes its sensitivity to distortions as a function of local image content. Specifically, for a given FR-IQM metric, we propose to predict a visual masking model that modulates reference and distorted images in a way that penalizes visual errors based on their visibility. Since ground-truth visual masks are difficult to obtain, we demonstrate how they can be derived in a self-supervised manner solely from mean opinion scores (MOS) collected from an FR-IQM dataset. Our approach results in enhanced FR-IQM metrics that are more in line with human judgments, both visually and quantitatively.

Item: OptFlowCam: A 3D-Image-Flow-Based Metric in Camera Space for Camera Paths in Scenes with Extreme Scale Variations
(The Eurographics Association and John Wiley & Sons Ltd., 2024) Piotrowski, Lisa; Motejat, Michael; Rössl, Christian; Theisel, Holger; Bermano, Amit H.; Kalogerakis, Evangelos
Interpolation between camera positions is a standard problem in computer graphics and can be considered the foundation of camera path planning. As the basis for a new interpolation method, we introduce a new Riemannian metric in camera space, which measures the 3D image flow under a small movement of the camera. Building on this, we define a linear interpolation between two cameras as the shortest geodesic in camera space, for which we provide a closed-form solution after a mild simplification of the metric. Furthermore, we propose a geodesic Catmull-Rom interpolant for keyframe camera animation.
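The paper's closed-form geodesics are specific to its image-flow metric, but the general idea of a geodesic Catmull-Rom interpolant can be sketched generically: replace every linear blend in the Barry-Goldman recursion for Catmull-Rom splines with a geodesic average. The sketch below is purely illustrative and not the authors' implementation; `geo` is a stand-in Euclidean geodesic, under which the scheme reduces to an ordinary uniform Catmull-Rom spline.

```python
def geo(p, q, t):
    # Stand-in geodesic: straight-line blend in Euclidean space.
    # In a curved camera space this would be the shortest geodesic under
    # the chosen metric; t outside [0, 1] extrapolates the geodesic.
    return tuple(a + t * (b - a) for a, b in zip(p, q))

def geodesic_catmull_rom(p0, p1, p2, p3, t, geo=geo):
    """Evaluate the spline segment between keyframes p1 and p2 at t in [0, 1].

    Barry-Goldman recursion with uniform knots; every blend is a geodesic
    average, so only `geo` needs replacing for a non-Euclidean camera space.
    """
    a1 = geo(p0, p1, t + 1.0)         # blend over knot span 0..1
    a2 = geo(p1, p2, t)               # blend over knot span 1..2
    a3 = geo(p2, p3, t - 1.0)         # blend over knot span 2..3
    b1 = geo(a1, a2, (t + 1.0) / 2.0)
    b2 = geo(a2, a3, t / 2.0)
    return geo(b1, b2, t)
```

The segment passes through p1 at t=0 and p2 at t=1, and for collinear, evenly spaced keyframes it degenerates to linear interpolation along the geodesic.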
We compare our approach with several standard camera interpolation methods and obtain consistently better camera paths, especially for cameras with extremely varying scales.

Item: Hierarchical Co-generation of Parcels and Streets in Urban Modeling
(The Eurographics Association and John Wiley & Sons Ltd., 2024) Chen, Zebin; Song, Peng; Ortner, F. Peter; Bermano, Amit H.; Kalogerakis, Evangelos
We present a computational framework for modeling land parcels and streets. In the real world, parcels and streets are highly coupled with each other, since a street network connects all the parcels in a given area. However, existing works model parcels and streets separately to simplify the problem, resulting in urban layouts with irregular parcels and/or suboptimal streets. In this paper, we propose a hierarchical approach to co-generate parcels and streets from a user-specified polygonal land shape, guided by a set of fundamental urban design requirements. At each hierarchical level, new parcels are generated by binary splitting of existing parcels, and new streets are subsequently generated by leveraging efficient graph search tools to ensure that each new parcel has street access. Finally, we optimize the geometry of the generated parcels and streets to further improve their geometric quality. Our computational framework outputs an urban layout with a desired number of regular parcels that are reachable via a connected street network, and users can control the modeling process both locally and globally.
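The binary-splitting idea behind such hierarchical parcel generation can be illustrated with a toy sketch. This is an assumption-laden simplification, not the paper's method: real parcels are general polygons and split positions are driven by urban design requirements, whereas here an axis-aligned block is split recursively across its longer side at a fixed area ratio.

```python
def split_parcels(rect, n):
    """Binary-split rect = (x, y, width, height) into n sub-parcels.

    Toy illustration only: each level cuts across the longer side so the
    resulting parcels stay close to regular in aspect ratio.
    """
    if n <= 1:
        return [rect]
    x, y, w, h = rect
    k = n // 2            # parcels assigned to the first half
    f = k / n             # area fraction for the first half
    if w >= h:            # cut across the longer side
        first = (x, y, w * f, h)
        second = (x + w * f, y, w * (1.0 - f), h)
    else:
        first = (x, y, w, h * f)
        second = (x, y + h * f, w, h * (1.0 - f))
    return split_parcels(first, k) + split_parcels(second, n - k)
```

Splitting a 4x3 block into five parcels, for example, yields five rectangles whose areas sum to the original 12 square units.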
Quantitative comparisons with state-of-the-art approaches show that our framework generates parcels and streets that are superior in some aspects.

Item: Neural Denoising for Deep-Z Monte Carlo Renderings
(The Eurographics Association and John Wiley & Sons Ltd., 2024) Zhang, Xianyao; Röthlin, Gerhard; Zhu, Shilin; Aydin, Tunç Ozan; Salehi, Farnood; Gross, Markus; Papas, Marios; Bermano, Amit H.; Kalogerakis, Evangelos
We present a kernel-predicting neural denoising method for path-traced deep-Z images that facilitates their use in animation and visual effects production. Deep-Z images provide enhanced flexibility during compositing, as they contain color, opacity, and other rendered data at multiple depth-resolved bins within each pixel. However, they are subject to noise, and rendering until convergence is prohibitively expensive. The current state of the art in deep-Z denoising yields objectionable artifacts, and current neural denoising methods are incapable of handling the variable number of depth bins in deep-Z images. Our method extends kernel-predicting convolutional neural networks to address the challenges of denoising deep-Z images. We propose a hybrid reconstruction architecture that combines depth-resolved reconstruction at each bin with flattened reconstruction at the pixel level. Moreover, we propose depth-aware neighbor indexing of the depth-resolved inputs to the convolution and denoising-kernel application operators, which reduces artifacts caused by depth misalignment in deep-Z images. We evaluate our method on a production-quality deep-Z dataset, demonstrating significant improvements in denoising quality and performance compared to the current state-of-the-art deep-Z denoiser.
By addressing the significant challenge of the cost of rendering path-traced deep-Z images, we believe that our approach will pave the way for broader adoption of deep-Z workflows in future productions.

Item: Issue Information
(© 2024 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)

Item: Auxiliary Features-Guided Super Resolution for Monte Carlo Rendering
(© 2024 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Hou, Qiqi; Liu, Feng; Alliez, Pierre; Wimmer, Michael
This paper investigates super-resolution as a way to reduce the number of pixels to render and thus speed up Monte Carlo rendering algorithms. While great progress has been made in super-resolution technology, it is essentially an ill-posed problem and cannot recover high-frequency details in renderings. To address this problem, we exploit high-resolution auxiliary features to guide super-resolution of low-resolution renderings. These high-resolution auxiliary features can be rendered quickly by a rendering engine and at the same time provide valuable high-frequency details to assist super-resolution. To this end, we develop a cross-modality transformer network that consists of an auxiliary-feature branch and a low-resolution rendering branch. The two branches are designed to fuse high-resolution auxiliary features with the corresponding low-resolution rendering. Furthermore, we design Residual Densely Connected Swin Transformer groups to learn to extract representative features for high-quality super-resolution.
Our experiments show that our auxiliary features-guided super-resolution method outperforms both super-resolution methods and Monte Carlo denoising methods in producing high-quality renderings.

Item: Identifying and Visualizing Terrestrial Magnetospheric Topology using Geodesic Level Set Method
(© 2024 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Xiong, Peikun; Fujita, Shigeru; Watanabe, Masakazu; Tanaka, Takashi; Cai, Dongsheng; Alliez, Pierre; Wimmer, Michael
This study introduces a novel numerical method for identifying and visualizing the terrestrial magnetic field topology in a large-scale three-dimensional global MHD (magnetohydrodynamic) simulation. The (un)stable two-dimensional manifolds are generated from critical points (CPs) located north and south of the magnetosphere using an improved geodesic level set method. A boundary value problem is solved numerically using a shooting method to advance a new geodesic level set from the previous set. These sets are generated starting from a small circle whose centre is a CP. The level sets are the sets of mesh points that form the magnetic manifold, which determines the magnetic field topology. In this study, a consistent method is proposed to determine the magnetospheric topology. Using this scheme, we successfully visualize a terrestrial magnetospheric field topology and identify its two neutral lines using the global MHD simulation. Our results present a terrestrial topology that agrees well with recent magnetospheric physics and can help us understand various nonlinear magnetospheric dynamics and phenomena.
Our visualization can, in the future, help fill the gap between current magnetospheric physics, as observed via satellites, and nonlinear dynamics, particularly bifurcation theory.

Item: State of the Art in Efficient Translucent Material Rendering with BSSRDF
(© 2024 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Liang, Shiyu; Gao, Yang; Hu, Chonghao; Zhou, Peng; Hao, Aimin; Wang, Lili; Qin, Hong; Alliez, Pierre; Wimmer, Michael
Sub-surface scattering is an important feature in translucent material rendering. When light travels through an optically thick medium, its transport within the medium can be approximated using diffusion theory and is appropriately described by the bidirectional scattering-surface reflectance distribution function (BSSRDF). BSSRDF methods rely on assumptions about object geometry and light distribution in the medium, which limits their applicability to general participating-media problems. Nevertheless, given the high computational cost of path tracing, BSSRDF methods are often favoured for their suitability for real-time applications. We review these methods and discuss the most recent breakthroughs in this field. We begin by summarizing various BSSRDF models and then implement most of them for a 2D searchlight problem to demonstrate their differences. We focus on acceleration methods using BSSRDF, which we categorize into two primary groups: pre-computation and texture methods. We then cover related topics, including applications and advanced areas where BSSRDF is used, as well as problems that are important yet often ignored in sub-surface scattering estimation.
At the end of this survey, we point out remaining constraints and challenges, which may motivate future work on sub-surface scattering.

Item: Guided Exploration of Industrial Sensor Data
(© 2024 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Langer, Tristan; Meyes, Richard; Meisen, Tobias; Alliez, Pierre; Wimmer, Michael
In recent years, digitization in the industrial sector has increased steadily. Digital data not only allow us to monitor the underlying production process using machine learning methods (anomaly detection, behaviour analysis) but also to understand the process itself. Insights from exploratory data analysis (EDA) play an important role in building data-driven processes, because data scientists learn essential characteristics of the data in the context of the domain. Due to the complexity of production processes, it is usually difficult for data scientists to acquire this knowledge by themselves; they have to rely on continuous close collaboration with domain experts and the domain expertise thus acquired. However, direct communication does not promote documentation of the knowledge transfer from domain experts to data scientists. Consequently, changing team constellations, for example due to a change in personnel, result in a renewed high level of effort for the same knowledge transfer. As a result, EDA is a cost-intensive iterative process. We therefore investigate a system to extract information from the interactions that domain experts perform during EDA. Our approach relies on recording the interactions and system states of an exploration tool and generating guided exploration sessions for domain novices. We implement our approach in a software tool and demonstrate its capabilities using two real-world use cases from the manufacturing industry.
We evaluate the system's feasibility in a user study investigating whether domain novices can reproduce the most important insights of domain experts about the use-case datasets from the generated EDA sessions. The results support the feasibility of our system: participants reproduce on average 86.5% of the experts' insights.

Item: Formation-Aware Planning and Navigation with Corridor Shortest Path Maps
(© 2024 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Sharma, Ritesh; Weiss, Tomer; Kallmann, Marcelo; Alliez, Pierre; Wimmer, Michael
The need to plan motions for agents under variable shape constraints, such as different formations, appears in several virtual and real-world applications of autonomous agents. In this work, we focus on the planning and execution of formation-aware paths for a group of agents traversing a cluttered environment. The proposed planning framework addresses the trade-off between enforcing a preferred formation when traversing the corridors of the environment and switching to alternative formations that require less clearance in order to use narrower corridors that can lead to a shorter overall path to the final destination. At the planning stage, this trade-off is addressed with a multi-layer graph annotated with per-layer navigation costs and formation transition costs, where each layer represents one formation together with its specific clearance requirement. At the navigation stage, we introduce Corridor Shortest Path Maps (CSPMs), which produce a vector field for guiding agents along the solution corridor, ensuring unobstructed in-formation navigation in cluttered environments, as well as group motion along lengthwise-optimal paths in the solution corridor.
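A multi-layer graph of this kind can be searched with Dijkstra's algorithm over (node, formation) states. The following is a hypothetical minimal sketch, not the paper's framework: it assumes agents may start in any formation at no cost and that a single scalar `switch_cost` covers every formation change, whereas the actual framework annotates per-layer navigation and transition costs more richly.

```python
import heapq

def plan_with_formations(edges, switch_cost, start, goal, formations):
    """Shortest path on a multi-layer graph where each layer is one formation.

    edges maps (u, v, formation) -> traversal cost; a missing key means the
    corridor is too narrow for that formation. Returns (cost, formation at
    arrival), or None if the goal is unreachable.
    """
    pq = [(0.0, start, f) for f in formations]  # free initial formation choice
    heapq.heapify(pq)
    done = set()
    while pq:
        cost, node, form = heapq.heappop(pq)
        if (node, form) in done:
            continue
        done.add((node, form))
        if node == goal:
            return cost, form
        for other in formations:                # switch layers at this node
            if other != form and (node, other) not in done:
                heapq.heappush(pq, (cost + switch_cost, node, other))
        for (u, v, f), c in edges.items():      # move within the current layer
            if u == node and f == form and (v, form) not in done:
                heapq.heappush(pq, (cost + c, v, form))
    return None
```

For example, given a wide corridor A to C of cost 10 and a narrow A-B-C route of total cost 4 usable only in a line formation, the planner takes the narrow route in line formation.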
We also present examples of how our multi-layer planning framework can be applied to other types of multi-modal planning problems.

Item: Editorial
(© 2024 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Alliez, Pierre; Wimmer, Michael