43-Issue 1
Browsing 43-Issue 1 by Title (showing 1-20 of 21)
Item Advances in Data‐Driven Analysis and Synthesis of 3D Indoor Scenes (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Patil, Akshay Gadi; Patil, Supriya Gadi; Li, Manyi; Fisher, Matthew; Savva, Manolis; Zhang, Hao; Alliez, Pierre; Wimmer, Michael
This report surveys advances in deep learning‐based modelling techniques that address four different 3D indoor scene analysis tasks, as well as synthesis of 3D indoor scenes. We describe different kinds of representations for indoor scenes and various indoor scene datasets available for research in the aforementioned areas, and discuss notable works employing machine learning models for such scene modelling tasks based on these representations. Specifically, we focus on the analysis and synthesis of 3D indoor scenes. With respect to analysis, we focus on four basic scene understanding tasks: 3D object detection, 3D scene segmentation, 3D scene reconstruction and 3D scene similarity. For synthesis, we mainly discuss neural scene synthesis works, while also highlighting model‐driven methods that allow for human‐centric, progressive scene synthesis. We identify the challenges involved in modelling scenes for these tasks and the kind of machinery that needs to be developed to adapt to the data representation and the task setting in general. For each of these tasks, we provide a comprehensive summary of the state‐of‐the‐art works across different axes such as the choice of data representation, backbone, evaluation metric, input, output and so on, providing an organized review of the literature.
Towards the end, we discuss some interesting research directions that have the potential to make a direct impact on the way users interact and engage with these virtual scene models, making them an integral part of the metaverse.

Item Auxiliary Features‐Guided Super Resolution for Monte Carlo Rendering (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Hou, Qiqi; Liu, Feng; Alliez, Pierre; Wimmer, Michael
This paper investigates super‐resolution to reduce the number of pixels to render and thus speed up Monte Carlo rendering algorithms. While great progress has been made in super‐resolution technologies, super‐resolution is essentially an ill‐posed problem and cannot recover high‐frequency details in renderings. To address this problem, we exploit high‐resolution auxiliary features to guide super‐resolution of low‐resolution renderings. These high‐resolution auxiliary features can be quickly rendered by a rendering engine and at the same time provide valuable high‐frequency details to assist super‐resolution. To this end, we develop a cross‐modality transformer network that consists of an auxiliary feature branch and a low‐resolution rendering branch. These two branches are designed to fuse high‐resolution auxiliary features with the corresponding low‐resolution rendering. Furthermore, we design Residual Densely Connected Swin Transformer groups to learn to extract representative features to enable high‐quality super‐resolution. Our experiments show that our auxiliary features‐guided super‐resolution method outperforms both super‐resolution methods and Monte Carlo denoising methods in producing high‐quality renderings.

Item Curvature‐driven Multi‐stream Network for Feature‐preserving Mesh Denoising (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Zhao, Zhibo; Tang, Wenming; Gong, Yuanhao; Alliez, Pierre; Wimmer, Michael
Mesh denoising is a fundamental yet challenging task.
Most of the existing data‐driven methods only consider zero‐order information (vertex locations) and first‐order information (face normals). However, higher‐order geometric information (such as curvature) is more descriptive of the shape of the mesh. Therefore, to impose such higher‐order information, this paper proposes a novel Curvature‐Driven Multi‐Stream Graph Convolutional Neural Network (CDMS‐Net) architecture. CDMS‐Net has three streams (a curvature stream, a face normal stream and a vertex stream), where the curvature stream focuses on higher‐order Gaussian curvature information. Moreover, CDMS‐Net proposes a novel block based on residual dense connections, which is used as the core component to extract geometric features from meshes. This innovative design improves the performance of feature‐preserving denoising, and the plug‐and‐play modular design makes CDMS‐Net easy to implement. Multiple ablation studies are carried out to verify the rationality of CDMS‐Net. Our method establishes new state‐of‐the‐art mesh denoising results on publicly available datasets.

Item Editorial (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Alliez, Pierre; Wimmer, Michael

Item End‐to‐End Compressed Meshlet Rendering (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Mlakar, D.; Steinberger, M.; Schmalstieg, D.; Alliez, Pierre; Wimmer, Michael
In this paper, we study rendering of end‐to‐end compressed triangle meshes using modern GPU techniques, in particular mesh shaders. Our approach allows us to keep unstructured triangle meshes in GPU memory in compressed form and decompress them in shader code just in time for rasterization. Typical previous approaches use a compressed mesh format only for persistent storage and streaming, but must decompress it into GPU memory before submitting it to rendering.
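Whichever point decompression happens at, such pipelines rest on a meshlet decomposition: grouping triangles into small clusters that each carry a local vertex table, so every cluster can be decoded on its own. A minimal greedy sketch (size limits are hypothetical, compression omitted):

```python
def build_meshlets(triangles, max_verts=64, max_tris=126):
    """Greedily split a triangle index list into meshlets.

    Each meshlet stores a small table of global vertex indices plus
    triangles expressed in local (table) indices, so it can be decoded
    independently of all other meshlets.
    """
    meshlets = []
    verts, local, tris = [], {}, []
    for tri in triangles:
        # Vertices of this triangle not yet in the current meshlet's table.
        new = [v for v in dict.fromkeys(tri) if v not in local]
        if len(verts) + len(new) > max_verts or len(tris) >= max_tris:
            # Flush the full meshlet and start a fresh one.
            meshlets.append({"vertices": verts, "triangles": tris})
            verts, local, tris = [], {}, []
            new = list(dict.fromkeys(tri))
        for v in new:
            local[v] = len(verts)
            verts.append(v)
        tris.append(tuple(local[v] for v in tri))  # small local indices
    if tris:
        meshlets.append({"vertices": verts, "triangles": tris})
    return meshlets
```

Real mesh-shader pipelines additionally optimize cluster locality and pack the local indices into a few bits each; this sketch only shows the grouping step.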
In contrast, our approach uses an identical compressed format in both storage and GPU memory. Hence, our compression method effectively reduces the in‐memory requirements of huge triangular meshes and avoids any waiting times on streaming geometry induced by the need for a decompression stage on the CPU. End‐to‐end compression also means that scenes with more geometric detail than previously possible can be made fully resident in GPU memory. Our approach is based on a novel decomposition of meshes into meshlets, i.e. disjoint primitive groups that are compressed individually. Decompression using a mesh shader allows de facto random access on the primitive level, which is important for applications such as selective streaming and fine‐grained visibility computation. We compare our approach to multiple commonly used compressed meshlet formats in terms of required memory and rendering times. The results imply that our approach reduces the required CPU–GPU memory bandwidth, a frequent bottleneck in out‐of‐core rendering.

Item Formation‐Aware Planning and Navigation with Corridor Shortest Path Maps (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Sharma, Ritesh; Weiss, Tomer; Kallmann, Marcelo; Alliez, Pierre; Wimmer, Michael
The need to plan motions for agents with variable shape constraints, such as under different formations, appears in several virtual and real‐world applications of autonomous agents. In this work, we focus on planning and execution of formation‐aware paths for a group of agents traversing a cluttered environment. The proposed planning framework addresses the trade‐off between enforcing a preferred formation when traversing the corridors of the environment and switching to alternative formations that require less clearance, in order to utilize narrower corridors that can lead to a shorter overall path to the final destination.
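A trade-off of this kind can be posed as a shortest-path search over a layered graph: one layer per formation, corridors present only in layers whose clearance they admit, and weighted edges for switching formation. A minimal Dijkstra sketch over a hypothetical two-formation corridor graph (not the paper's planner):

```python
import heapq

def formation_aware_path(layer_edges, switch_cost, start, goal_node):
    """Dijkstra over states (node, formation).

    layer_edges: {formation: {node: [(neighbor, traversal_cost), ...]}}
      (a corridor appears only in layers whose formation fits its clearance)
    switch_cost: cost of changing formation while staying at a node.
    start: (node, formation). Returns (total_cost, formation_at_goal).
    """
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (node, form) = heapq.heappop(pq)
        if node == goal_node:
            return d, form
        if d > dist[(node, form)]:
            continue  # stale queue entry
        # Traverse corridors in the current layer, or switch layers in place.
        moves = [((nbr, form), c) for nbr, c in layer_edges[form].get(node, [])]
        moves += [((node, f), switch_cost) for f in layer_edges if f != form]
        for state, c in moves:
            nd = d + c
            if nd < dist.get(state, float("inf")):
                dist[state] = nd
                heapq.heappush(pq, (nd, state))
    return None
```

With a wide "abreast" corridor costing 10 and a narrow "line" shortcut costing 2, the search pays the switch cost when that yields a shorter overall route, which is exactly the trade-off described above.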
At the planning stage, this trade‐off is addressed with a multi‐layer graph annotated with per‐layer navigation costs and formation transition costs, where each layer represents one formation together with its specific clearance requirement. At the navigation stage, we introduce Corridor Shortest Path Maps (CSPMs), which produce a vector field for guiding agents along the solution corridor, ensuring unobstructed in‐formation navigation in cluttered environments, as well as group motion along lengthwise‐optimal paths in the solution corridor. We also present examples of how our multi‐layer planning framework can be applied to other types of multi‐modal planning problems.

Item Guided Exploration of Industrial Sensor Data (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Langer, Tristan; Meyes, Richard; Meisen, Tobias; Alliez, Pierre; Wimmer, Michael
In recent years, digitization in the industrial sector has increased steadily. Digital data not only allow us to monitor the underlying production process using machine learning methods (anomaly detection, behaviour analysis) but also to understand the process itself. Insights from Exploratory Data Analysis (EDA) play an important role in building data‐driven processes because data scientists learn essential characteristics of the data in the context of the domain. Due to the complexity of production processes, it is usually difficult for data scientists to acquire this knowledge by themselves. Hence, they have to rely on continuous close collaboration with domain experts and the domain expertise those experts have acquired. However, direct communication does not promote documentation of the knowledge transfer from domain experts to data scientists. Consequently, changing team constellations, for example due to a change in personnel, result in a renewed high level of effort on the same knowledge transfer problem.
As a result, EDA is a cost‐intensive iterative process. We therefore investigate a system to extract information from the interactions that domain experts perform during EDA. Our approach relies on recording interactions and system states of an exploration tool and generating guided exploration sessions for domain novices. We implement our approach in a software tool and demonstrate its capabilities using two real‐world use cases from the manufacturing industry. We evaluate its feasibility in a user study that investigates whether domain novices can reproduce the most important insights of domain experts about the use-case datasets based on the generated EDA sessions. From the results of this study, we conclude that our system is feasible, as participants were able to reproduce on average 86.5% of the insights of domain experts.

Item Identifying and Visualizing Terrestrial Magnetospheric Topology using Geodesic Level Set Method (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Xiong, Peikun; Fujita, Shigeru; Watanabe, Masakazu; Tanaka, Takashi; Cai, Dongsheng; Alliez, Pierre; Wimmer, Michael
This study introduces a novel numerical method for identifying and visualizing the terrestrial magnetic field topology in a large‐scale three‐dimensional global MHD (magnetohydrodynamic) simulation. The (un)stable two‐dimensional manifolds are generated from critical points (CPs) located north and south of the magnetosphere using an improved geodesic level set method. A boundary value problem is solved numerically using a shooting method to advance a new geodesic level set from the previous set. These sets are generated starting from a small circle whose centre is a CP. The level sets are the sets of mesh points that form the magnetic manifold, which determines the magnetic field topology. In this study, a consistent method is proposed to determine the magnetospheric topology.
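The shooting method mentioned above turns a boundary value problem into repeated initial value problems: guess the unknown initial condition, integrate forward, and adjust the guess until the far boundary condition is met. A generic sketch on a toy BVP (y'' = -y with y(0) = 0 and y(pi/2) = 1, the initial slope unknown), not the authors' solver:

```python
import math

def integrate(slope0, n=10000):
    """Integrate y'' = -y from x = 0 with y(0) = 0, y'(0) = slope0
    (semi-implicit Euler), returning y at x = pi/2."""
    h = (math.pi / 2) / n
    y, v = 0.0, slope0
    for _ in range(n):
        v -= h * y
        y += h * v
    return y

def shoot(target=1.0, lo=0.0, hi=10.0, tol=1e-6):
    """Bisect on the unknown initial slope until the far boundary
    value y(pi/2) matches the target."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if integrate(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Here the exact solution is y = sin(x), so the recovered slope is close to 1; the geodesic level set setting replaces this scalar ODE with geodesics on the manifold, but the guess-integrate-correct loop is the same.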
Using this scheme, we successfully visualize a terrestrial magnetospheric field topology and identify its two neutral lines using the global MHD simulation. Our results present a terrestrial topology that agrees well with recent magnetospheric physics and can help us understand various nonlinear magnetospheric dynamics and phenomena. Our visualization enables us to fill the gaps between current magnetospheric physics, as observed via satellites, and nonlinear dynamics, particularly bifurcation theory, in the future.

Item Interactive Locomotion Style Control for a Human Character based on Gait Cycle Features (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Kim, Chaelin; Eom, Haekwang; Yoo, Jung Eun; Choi, Soojin; Noh, Junyong; Alliez, Pierre; Wimmer, Michael
This article introduces a data‐driven locomotion style controller for full‐body human characters based on gait cycle features. Based on gait analysis, we define a set of gait features that can represent various locomotion styles as spatio‐temporal patterns within a single gait cycle. We compute the gait features for every gait cycle in motion capture data and use them to search for the desired motion. Our real‐time style controller provides users with visual feedback for changing inputs, exploiting the Motion Matching algorithm. We also provide a graphical controller interface that visualizes our style representation to enable intuitive control. We show that the proposed method is capable of retrieving appropriate locomotion for various gait cycle features, from simple walking motions to single‐foot motions such as hopping and dragging. To validate the effectiveness of our method, we conducted a user study that compares the usability and performance of our system with those of an existing footstep animation tool.
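At its core, a Motion-Matching-style lookup of this kind is a nearest-neighbour search over per-cycle feature vectors. A minimal sketch with made-up features (real systems weight features and use accelerated search structures):

```python
def nearest_cycle(database, query):
    """Return the index of the gait cycle whose feature vector is closest
    to the user-controlled query, by plain Euclidean distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(database)), key=lambda i: dist2(database[i], query))

# Hypothetical per-cycle features: (cycle duration [s], step length [m],
# max foot height [m]) for a walk, a run and a hop cycle.
cycles = [(1.1, 0.7, 0.05), (0.7, 1.3, 0.12), (0.9, 0.4, 0.30)]
```

Dragging a controller slider then amounts to moving the query vector and replaying whichever captured cycle the search returns.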
The results show that our method is preferred over the baseline method for its intuitive control and fast visual feedback.

Item Issue Information (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)

Item Neural Path Sampling for Rendering Pure Specular Light Transport (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Yu, Rui; Dong, Yue; Kong, Youkang; Tong, Xin; Alliez, Pierre; Wimmer, Michael
Multi‐bounce, pure specular light paths produce complex lighting effects, such as caustics and sparkle highlights, which are challenging to render due to their sparse and diverse nature. We introduce a learning‐based method for the efficient rendering of pure specular light transport. The key idea is to train a neural network to model the distribution of all specular light paths between pairs of endpoints for one specular object. To achieve this, for each object, our method models the distribution of sparse and diverse specular light paths between two endpoints using smooth 2D maps of ray directions from one endpoint, and represents these maps with a 2D convolutional network. We design a training scheme to efficiently sample specular light paths from the scene and train the network. Once trained, our method predicts specular light paths for a given pair of endpoints using the network and employs root‐finding‐based algorithms for rendering the specular light transport.
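The root-finding step in such pipelines refines a predicted reflection point until the specular constraint holds exactly. As a toy stand-in (not the paper's algorithm): for a mirror line y = 0, the valid reflection point between two endpoints makes the path length stationary, which Newton iteration can solve from a rough initial guess:

```python
import math

def reflection_point(a, b, x0=0.0, iters=20):
    """Newton-refine the mirror (y = 0) reflection point between
    endpoints a and b (both above the mirror).

    Solves f(x) = sin(theta_in) - sin(theta_out) = 0, i.e. Fermat's
    stationary-path-length condition, starting from a guess x0 of the
    kind a learned predictor would supply."""
    ax, ay = a
    bx, by = b
    x = x0
    for _ in range(iters):
        d1 = math.hypot(x - ax, ay)
        d2 = math.hypot(bx - x, by)
        f = (x - ax) / d1 - (bx - x) / d2
        df = (ay * ay) / d1**3 + (by * by) / d2**3  # f'(x) > 0
        x -= f / df
    return x
```

For endpoints (0, 1) and (4, 1) the exact answer from the mirror-image construction is x = 2; multi-bounce specular chains replace this scalar constraint with one equation per bounce, solved the same way.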
Experimental results demonstrate that our method generates high‐quality results, supports dynamic lighting and moving objects within the scene, and significantly enhances the rendering speed of existing techniques.

Item Polygon Laplacian Made Robust (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)

Item PPSurf: Combining Patches and Point Convolutions for Detailed Surface Reconstruction (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Erler, Philipp; Fuentes‐Perez, Lizeth; Hermosilla, Pedro; Guerrero, Paul; Pajarola, Renato; Wimmer, Michael; Alliez, Pierre; Wimmer, Michael
3D surface reconstruction from point clouds is a key step in areas such as content creation, archaeology, digital cultural heritage and engineering. Current approaches either try to optimize a non‐data‐driven surface representation to fit the points, or learn a data‐driven prior over the distribution of commonly occurring surfaces and how they correlate with potentially noisy point clouds. Data‐driven methods enable robust handling of noise and typically focus on either a global or a local prior, which trade off between robustness to noise on the global end and surface detail preservation on the local end. We propose PPSurf as a method that combines a global prior based on point convolutions and a local prior based on processing local point cloud patches. We show that this approach is robust to noise while recovering surface details more accurately than the current state‐of‐the‐art.
Our source code, pre‐trained model and dataset are available at .

Item Quad Mesh Quantization Without a T‐Mesh (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Coudert‐Osmont, Yoann; Desobry, David; Heistermann, Martin; Bommes, David; Ray, Nicolas; Sokolov, Dmitry; Alliez, Pierre; Wimmer, Michael
Grid preserving maps of triangulated surfaces were introduced for quad meshing because the 2D unit grid in such maps corresponds to a sub‐division of the surface into quad‐shaped charts. These maps can be obtained by solving a mixed integer optimization problem: real variables define the geometry of the charts and integer variables define the combinatorial structure of the decomposition. To make this optimization problem tractable, a common strategy is to ignore the integer constraints at first, then to enforce them in a so‐called quantization step. Existing quantization algorithms exploit the geometric interpretation of the integer variables to solve an equivalent problem: they consider the final quad mesh to be a sub‐division of a T‐mesh embedded in the surface, and optimize the number of sub‐divisions for each edge of this T‐mesh. We propose to operate on a decimated version of the original surface instead of the T‐mesh. This is easier to implement and to adapt to constraints such as free boundaries and complex feature curve networks.

Item Real‐time Terrain Enhancement with Controlled Procedural Patterns (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Grenier, C.; Guérin, É.; Galin, É.; Sauvage, B.; Alliez, Pierre; Wimmer, Michael
Assisting the authoring of virtual terrains is a perennial challenge in the creation of convincing synthetic landscapes.
In particular, there is a need for augmenting artist-controlled low-resolution models with consistent relief details. We present a structured noise that procedurally enhances terrains in real time by adding spatially varying erosion patterns. The patterns can be cascaded, i.e. narrow ones are nested into large ones. Our model builds upon Phasor noise, which we adapt to the specific characteristics of terrains (water flow, slope orientation). Relief details correspond to the underlying terrain characteristics and align with the slope to preserve the coherence of the generated landforms. Moreover, our model allows for artist control through a palette of control maps, and can be efficiently implemented in graphics hardware, allowing for real-time synthesis and rendering and thus effective and intuitive authoring.

Item A Robust Grid‐Based Meshing Algorithm for Embedding Self‐Intersecting Surfaces (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Gagniere, S.; Han, Y.; Chen, Y.; Hyde, D.; Marquez‐Razon, A.; Teran, J.; Fedkiw, R.; Alliez, Pierre; Wimmer, Michael
The creation of a volumetric mesh representing the interior of an input polygonal mesh is a common requirement in graphics and computational mechanics applications. Most mesh creation techniques assume that the input surface is not self‐intersecting. However, due to numerical and/or user error, input surfaces are commonly self‐intersecting to some degree. The removal of self‐intersections is a burdensome task that complicates workflow and generally slows down the process of creating simulation‐ready digital assets. We present a method for the creation of a volumetric embedding hexahedron mesh from a self‐intersecting input triangle mesh. Our method is designed for efficiency by minimizing the use of computationally expensive exact/adaptive precision arithmetic.
Although our approach places nearly no limit on the degree of self‐intersection in the input surface, our focus is on efficiency in the most common case: many minimal self‐intersections. The embedding hexahedron mesh is created from a uniform background grid and consists of hexahedron elements that are geometric copies of grid cells. Multiple copies of a single grid cell are used to resolve regions of self‐intersection/overlap. Lastly, we develop a novel topology‐aware embedding mesh coarsening technique to allow for user‐specified mesh resolution, as well as a topology‐aware tetrahedralization of the hexahedron mesh.

Item Simplified Physical Model‐based Balance‐preserving Motion Re‐targeting for Physical Simulation (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Hwang, Jaepyung; Ishii, Shin; Alliez, Pierre; Wimmer, Michael
In this study, we propose a novel motion re‐targeting framework that produces natural motions for target robot character models, similar to given source motions from a different skeletal structure. A natural target motion must satisfy kinematic constraints so as to resemble the source motion, even though the kinematic structures of the source and target character models differ from each other. Simultaneously, the target motion should maintain physically plausible features, such as keeping the balance of the target character model. To handle this issue, we utilize a simple physics model (an inverted‐pendulum‐on‐a‐cart model) during the motion re‐targeting process. By interpreting the source motion's balancing property via the pendulum model, the target motion inherits the balancing property of the source motion. This inheritance is achieved by performing motion analysis to extract the parameters necessary for re‐targeting the pendulum model's motion pattern, and parameter learning to estimate suitable parameters for the target character model.
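The inverted-pendulum-on-a-cart abstraction itself is compact enough to sketch. A minimal linearized simulation (hypothetical gains and parameters, not the authors' re-targeting pipeline) in which the cart accelerates under the leaning pendulum to keep it balanced:

```python
def simulate_ipc(theta0, steps=2000, dt=0.002, g=9.81, length=1.0):
    """Linearized inverted pendulum on a cart: theta'' = (g*theta - a) / L,
    where a is the cart acceleration chosen by a simple PD balance law.

    Gains and parameters are hypothetical; the point is that driving the
    cart under the centre of mass returns the lean angle to zero."""
    theta, omega = theta0, 0.0
    for _ in range(steps):
        a = 40.0 * theta + 8.0 * omega       # PD feedback on the lean angle
        alpha = (g * theta - a) / length     # angular acceleration
        omega += alpha * dt                  # semi-implicit Euler step
        theta += omega * dt
    return theta
```

Starting from a 0.1 rad lean, the controller damps the angle to nearly zero within the simulated four seconds; a balancing property extracted from a source motion would instead drive the cart trajectory.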
Based on this simple physics inheritance, the proposed framework provides balance‐preserving target motions, and is even applicable to full‐body physics simulation or real robot control. We validate the framework by experimenting with motion re‐targeting from animal and human character source models to quadruped‐ and humanoid‐type target models with Muaythai punching, kicking and walking motions. We also present comparisons with existing methods to clarify the improvement.

Item SkyGAN: Realistic Cloud Imagery for Image‐based Lighting (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Mirbauer, Martin; Rittig, Tobias; Iser, Tomáš; Křivánek, Jaroslav; Šikudová, Elena; Alliez, Pierre; Wimmer, Michael
Achieving photorealism when rendering virtual scenes in movies or architecture visualizations often depends on providing realistic illumination and background. Typically, spherical environment maps serve both as a natural light source from the Sun and the sky, and as a background with clouds and a horizon. In practice, the input is either a static high‐resolution HDR photograph manually captured on location in real conditions, or an analytical clear sky model that is dynamic but cannot model clouds. Our approach bridges these two limited paradigms: a user can control the sun position and cloud coverage ratio, and generate a realistic‐looking environment map for these conditions. It is a hybrid data‐driven analytical model based on a modified state‐of‐the‐art GAN architecture, which is trained on matching pairs of physically accurate clear sky radiance and HDR fisheye photographs of clouds. We demonstrate our results on renders of outdoor scenes under varying times, dates and cloud covers.
Our source code and a dataset of 39 000 HDR sky images are publicly available at .

Item State of the Art in Efficient Translucent Material Rendering with BSSRDF (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Liang, Shiyu; Gao, Yang; Hu, Chonghao; Zhou, Peng; Hao, Aimin; Wang, Lili; Qin, Hong; Alliez, Pierre; Wimmer, Michael
Sub‐surface scattering is an important feature in translucent material rendering. When light travels through an optically thick medium, its transport within the medium can be approximated using diffusion theory, and is appropriately described by the bidirectional scattering‐surface reflectance distribution function (BSSRDF). BSSRDF methods rely on assumptions about object geometry and light distribution in the medium, which limits their applicability to general participating media problems. However, because path tracing is computationally expensive, BSSRDF methods are often favoured for their suitability for real‐time applications. We review these methods and discuss the most recent breakthroughs in this field. We begin by summarizing various BSSRDF models and then implement most of them on a 2D searchlight problem to demonstrate their differences. We focus on acceleration methods using BSSRDF, which we categorize into two primary groups: pre‐computation and texture methods. We then cover related topics, including applications and advanced areas where BSSRDF is used, as well as problems that are important yet sometimes ignored in sub‐surface scattering estimation.
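Among the BSSRDF models compared on the searchlight configuration, the classical diffusion dipole gives the diffuse reflectance profile R(r) in closed form. A sketch in the style of the standard dipole (formula reproduced from memory of the Jensen et al. model; parameter values hypothetical), not any specific implementation from the survey:

```python
import math

def dipole_reflectance(r, sigma_a, sigma_s_prime, eta=1.3):
    """Classical dipole approximation of diffuse reflectance R(r):
    a real source just below the surface plus a mirrored virtual source
    above it, which approximately enforces the boundary condition."""
    sigma_t_prime = sigma_a + sigma_s_prime              # reduced extinction
    alpha_prime = sigma_s_prime / sigma_t_prime          # reduced albedo
    sigma_tr = math.sqrt(3.0 * sigma_a * sigma_t_prime)  # effective transport
    fdr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta
    A = (1.0 + fdr) / (1.0 - fdr)                        # boundary mismatch
    z_r = 1.0 / sigma_t_prime                            # real source depth
    z_v = z_r * (1.0 + 4.0 / 3.0 * A)                    # virtual source height
    d_r = math.hypot(r, z_r)
    d_v = math.hypot(r, z_v)
    def term(z, d):
        return z * (sigma_tr * d + 1.0) * math.exp(-sigma_tr * d) / d**3
    return alpha_prime / (4.0 * math.pi) * (term(z_r, d_r) + term(z_v, d_v))
```

The profile is positive and falls off monotonically with the distance r from the illumination point, which is the qualitative behaviour the searchlight comparison probes.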
At the end of this survey, we point out remaining constraints and challenges, which may motivate future work to facilitate sub‐surface scattering.

Item A Survey of Procedural Modelling Methods for Layout Generation of Virtual Scenes (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Cogo, Emir; Krupalija, Ehlimana; Prazina, Irfan; Bećirović, Šeila; Okanović, Vensada; Rizvić, Selma; Mulahasanović, Razija Turčinhodžić; Alliez, Pierre; Wimmer, Michael
As virtual worlds continue to rise in popularity, so do users' expectations for the content of virtual scenes. Virtual worlds must be large in scope and offer enough freedom of movement to keep the audience occupied at all times. For content creators, it is difficult to keep up by manually producing the surrounding content; therefore, procedural modelling techniques are required. Virtual worlds often mimic the real world, which is composed of organized and connected outdoor and indoor layouts. It is expected that all content is present in the virtual scene and that a user can navigate streets, enter buildings, and interact with furniture within a single virtual world. While there are many procedural methods for generating different layout types, they mostly focus on only one layout type, whereas complete scene generation is greatly underrepresented. This paper aims to identify the coverage of layout types by different methods, because similar issues exist for the generation of content of different layout types. When creating a new method for layout generation, it is important to know whether the results of existing methods can be appended to other methods. This paper presents a survey of existing procedural modelling methods, organized into five categories based on the core approach: pure subdivision, grammar‐based, data‐driven, optimization, and simulation.
Information about the covered layout types, the possibility of user interaction during the generation process, and the input and output shape types of the generated content is provided for each surveyed method. The input and output shape types can be used to identify which methods can continue the generation by using the output of other methods as their input. We conclude that all surveyed methods work for only a few different layout types simultaneously. Moreover, only 35% of the surveyed methods offer interaction with the user after completing the initial process of space generation. Most existing approaches do not perform transformations of shape types. A significant number of methods use the irregular shape type as input and generate the same shape type as output, which is sufficient for coverage of all layout types when generating a complete virtual world.
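Of the five categories, pure subdivision is the simplest to illustrate: recursively split a rectangular region until every room reaches a minimum size. A minimal sketch (hypothetical thresholds, nothing like a production layout generator):

```python
import random

def subdivide(x, y, w, h, min_size=3, rng=None):
    """Recursively split a rectangle into a flat list of room rectangles.

    A region is split along its longer axis at a random position until a
    further split would produce rooms narrower than min_size."""
    rng = rng or random.Random(0)  # fixed seed for a repeatable layout
    if w < 2 * min_size and h < 2 * min_size:
        return [(x, y, w, h)]  # leaf: a finished room
    if w >= h:
        cut = rng.randint(min_size, w - min_size)
        return (subdivide(x, y, cut, h, min_size, rng)
                + subdivide(x + cut, y, w - cut, h, min_size, rng))
    cut = rng.randint(min_size, h - min_size)
    return (subdivide(x, y, w, cut, min_size, rng)
            + subdivide(x, y + cut, w, h - cut, min_size, rng))
```

The resulting rooms tile the input rectangle exactly; grammar-based, data-driven, optimization and simulation methods replace the random cut with rules, learned models, objective functions or agent behaviour.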