Eurographics Digital Library

This is the DSpace 7 platform of the Eurographics Digital Library.
  • The contents of the Eurographics Digital Library Archive are freely accessible. Only access to the full-text documents of the journal Computer Graphics Forum (jointly owned by Wiley and Eurographics) is restricted to Eurographics members, members of institutions holding an Institutional Membership with Eurographics, and users of the TIB Hannover. On the item pages you will find purchase links to the TIB Hannover.
  • As a Eurographics member, you can log in with your email address and password from https://services.eg.org. If your institution is an institutional member and you are on a computer within an IP range registered with Eurographics, you can proceed immediately.
  • From 2022 onward, all new publications by Eurographics are licensed under Creative Commons. Publishing with Eurographics is Plan-S compliant. Please visit the Eurographics Licensing and Open Access Policy for more details.
 

Recent Submissions

Item
Reshadable Impostors with Level-of-Detail for Real-Time Distant Objects Rendering
(The Eurographics Association and John Wiley & Sons Ltd., 2025) Wu, Xiaoloong; Zeng, Zheng; Zhu, Junqiu; Wang, Lu; Wang, Beibei; Wilkie, Alexander
We propose a new image-based representation for real-time distant objects rendering: Reshadable Impostors with Level-of-Detail (RiLoD). By storing compact geometric and material information captured from a few reference views, RiLoD enables reliable forward mapping to generate target views under dynamic lighting and edited material attributes. In addition, it supports seamless transitions across different levels of detail. To support reshading and LoD simultaneously while maintaining a minimal memory footprint and bandwidth requirement, our key design is a compact yet efficient representation that encodes and compresses the necessary material and geometric information in each reference view. To further improve the visual fidelity, we use a reliable forward mapping technique combined with a hole-filling filtering strategy to ensure geometric completeness and shading consistency. We demonstrate the practicality of RiLoD by integrating it into a modern real-time renderer. RiLoD delivers fast performance across a variety of test scenes, supports smooth transitions between levels of detail as the camera moves closer or farther, and avoids the typical artifacts of impostor techniques that result from neglecting the underlying geometry.
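As a rough illustration of the hole-filling step mentioned in the abstract, the sketch below fills target-view pixels that received no forward-mapped samples with the average of their valid neighbours. This is a generic hole filler, not the exact filter used by RiLoD; the window radius and uniform weighting are assumptions.

    # Illustrative hole filler for an impostor forward-mapping pass.
    # Not the paper's exact filter; radius and uniform weights are assumptions.
    import numpy as np

    def fill_holes(color, valid, radius=2):
        """color: (H, W, 3) buffer produced by forward mapping.
        valid: (H, W) bool mask, False where no reference-view sample landed."""
        H, W, _ = color.shape
        out = color.copy()
        for y, x in zip(*np.where(~valid)):
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            patch_c = color[y0:y1, x0:x1]
            patch_v = valid[y0:y1, x0:x1]
            if patch_v.any():
                out[y, x] = patch_c[patch_v].mean(axis=0)  # average of valid neighbours
        return out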
Item
Artist-Inator: Text-based, Gloss-aware Non-photorealistic Stylization
(The Eurographics Association and John Wiley & Sons Ltd., 2025) Subias, Jose Daniel; Daniel-Soriano, Saúl; Gutierrez, Diego; Serrano, Ana; Wang, Beibei; Wilkie, Alexander
Large diffusion models have made a remarkable leap in synthesizing high-quality artistic images from text descriptions. However, these powerful pre-trained models still lack control to guide key material appearance properties, such as gloss. In this work, we present a threefold contribution: (1) we analyze how gloss is perceived across different artistic styles (i.e., oil painting, watercolor, ink pen, charcoal, and soft crayon); (2) we leverage our findings to create a dataset with 1,336,272 stylized images of many different geometries in all five styles, including automatically computed text descriptions of their appearance (e.g., "A glossy bunny hand painted with an orange soft crayon"); and (3) we train a ControlNet to condition Stable Diffusion XL to synthesize novel painterly depictions of new objects, using simple inputs such as edge maps, hand-drawn sketches, or clip art. Compared to previous approaches, our framework yields more accurate results despite the simplified input, as we show both quantitatively and qualitatively.
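To make the pipeline concrete, here is a hedged sketch of edge-map-conditioned SDXL generation with a ControlNet via the diffusers library. The checkpoints below are public off-the-shelf models, not the authors' gloss-aware fine-tuned ones, the file names are placeholders, and a CUDA GPU is assumed; the prompt follows the caption style quoted in the abstract.

    # Hedged sketch: conditioning SDXL on an edge map with a ControlNet via diffusers.
    # Public checkpoints stand in for the authors' fine-tuned, gloss-aware models.
    import torch
    from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
    from diffusers.utils import load_image

    controlnet = ControlNetModel.from_pretrained(
        "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
    )
    pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    edge_map = load_image("bunny_edges.png")  # placeholder: pre-computed edge map or sketch
    image = pipe(
        prompt="A glossy bunny hand painted with an orange soft crayon",
        image=edge_map,                        # ControlNet conditioning input
        num_inference_steps=30,
    ).images[0]
    image.save("stylized_bunny.png")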
Item
Real-time Level-of-detail Strand-based Rendering
(The Eurographics Association and John Wiley & Sons Ltd., 2025) Huang, Tao; Zhou, Yang; Lin, Daqi; Zhu, Junqiu; Yan, Ling-Qi; Wu, Kui; Wang, Beibei; Wilkie, Alexander
We present a real-time strand-based rendering framework that ensures seamless transitions between different levels of detail (LoD) while maintaining a consistent appearance. We first introduce an aggregated BCSDF model to accurately capture both single and multiple scattering within a cluster of hairs and fibers. Building upon this, we further introduce a LoD framework for hair rendering that dynamically, adaptively, and independently replaces clusters of individual hairs with thick strands based on their projected screen widths. Through tests on diverse hairstyles with various hair colors and animations, as well as knit patches, our framework closely replicates the appearance of multiple-scattered full geometries at various viewing distances, achieving up to a 13× speedup.
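The LoD switch in such a framework is driven by how wide a cluster appears on screen. The following minimal sketch, under simple pinhole-projection assumptions, shows one way to make that decision; the threshold value and example cluster width are illustrative, not the paper's exact criterion.

    # Minimal screen-width LoD test for a strand cluster (pinhole camera assumed).
    def projected_width_px(world_width, depth, focal_px):
        """Approximate on-screen width (pixels) of a cluster of width world_width
        at camera-space depth, for a pinhole camera with focal length focal_px."""
        return world_width * focal_px / max(depth, 1e-6)

    def select_lod(world_width, depth, focal_px, threshold_px=1.5):
        """Render the full hair geometry while the cluster still covers more than
        threshold_px on screen; otherwise use an aggregated thick strand."""
        width = projected_width_px(world_width, depth, focal_px)
        return "full_strands" if width > threshold_px else "thick_strand"

    # Example: a 2 mm-wide cluster seen 3 m away with a 1500 px focal length
    print(select_lod(world_width=0.002, depth=3.0, focal_px=1500.0))  # -> "thick_strand"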
Item
VideoMat: Extracting PBR Materials from Video Diffusion Models
(The Eurographics Association and John Wiley & Sons Ltd., 2025) Munkberg, Jacob; Wang, Zian; Liang, Ruofan; Shen, Tianchang; Hasselgren, Jon; Wang, Beibei; Wilkie, Alexander
We leverage finetuned video diffusion models, intrinsic decomposition of videos, and physically based differentiable rendering to generate high-quality materials for 3D models given a text prompt or a single image. First, we condition a video diffusion model to respect the input geometry and lighting conditions; this model produces multiple views of a given 3D model with coherent material properties. Second, we use a recent model to extract intrinsics (base color, roughness, metallic) from the generated video. Finally, we use the intrinsics alongside the generated video in a differentiable path tracer to robustly extract PBR materials directly compatible with common content creation tools.
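The final stage amounts to an inverse-rendering optimization: fit PBR maps so that a differentiable renderer reproduces the generated video, with the extracted intrinsics as additional supervision. The toy PyTorch loop below sketches this idea; the shading function is a deliberately simplified stand-in for a real differentiable path tracer, and the data, loss weights, and resolution are assumptions.

    # Toy sketch of the material-fitting loop; toy_render is NOT a path tracer,
    # just a differentiable placeholder so the optimization is runnable.
    import torch

    H, W = 64, 64
    base_color = torch.full((H, W, 3), 0.5, requires_grad=True)
    roughness  = torch.full((H, W, 1), 0.5, requires_grad=True)
    metallic   = torch.full((H, W, 1), 0.0, requires_grad=True)
    opt = torch.optim.Adam([base_color, roughness, metallic], lr=1e-2)

    def toy_render(bc, rough, metal, n_dot_l):
        """Simplified shading stand-in: diffuse plus a crude roughness-damped
        specular term, differentiable with respect to the material maps."""
        diffuse = bc * n_dot_l
        specular = metal * (1.0 - rough) * n_dot_l ** 8
        return diffuse + specular

    # Fake supervision for the sketch: one "video frame" and its extracted intrinsics.
    n_dot_l = torch.rand(H, W, 1)
    target_frame = toy_render(torch.rand(H, W, 3), torch.rand(H, W, 1),
                              torch.rand(H, W, 1), n_dot_l).detach()
    target_intrinsics = {"base_color": torch.rand(H, W, 3), "roughness": torch.rand(H, W, 1)}

    for step in range(200):
        pred = toy_render(base_color, roughness, metallic, n_dot_l)
        loss = (pred - target_frame).abs().mean()  # match the generated video
        loss = loss + 0.1 * (base_color - target_intrinsics["base_color"]).abs().mean()
        loss = loss + 0.1 * (roughness - target_intrinsics["roughness"]).abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()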
Item
Multiview Geometric Regularization of Gaussian Splatting for Accurate Radiance Fields
(The Eurographics Association and John Wiley & Sons Ltd., 2025) Kim, Jungeon; Park, Geonsoo; Lee, Seungyong; Wang, Beibei; Wilkie, Alexander
Recent methods, such as 2D Gaussian Splatting and Gaussian Opacity Fields, have aimed to address the geometric inaccuracies of 3D Gaussian Splatting while retaining its superior rendering quality. However, these approaches still struggle to reconstruct smooth and reliable geometry, particularly in scenes with significant color variation across viewpoints, due to their per-point appearance modeling and single-view optimization constraints. In this paper, we propose an effective multiview geometric regularization strategy that integrates multiview stereo (MVS) depth, RGB, and normal constraints into Gaussian Splatting initialization and optimization. Our key insight is the complementary relationship between MVS-derived depth points and Gaussian Splatting-optimized positions: MVS robustly estimates geometry in regions of high color variation through local patch-based matching and epipolar constraints, whereas Gaussian Splatting provides more reliable and less noisy depth estimates near object boundaries and regions with lower color variation. To leverage this insight, we introduce a median depth-based multiview relative depth loss with uncertainty estimation, effectively integrating MVS depth information into Gaussian Splatting optimization. We also propose an MVS-guided Gaussian Splatting initialization to avoid Gaussians falling into suboptimal positions. Extensive experiments validate that our approach successfully combines these strengths, enhancing both geometric accuracy and rendering quality across diverse indoor and outdoor scenes.
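As one hedged reading of the median depth-based multiview relative depth loss, the sketch below takes MVS depths from several views aligned to a reference view, uses the per-pixel median as a robust consensus depth and the median absolute deviation as an uncertainty proxy, and penalizes the relative deviation of the depth rendered from the Gaussians. The paper's exact normalization and uncertainty model are not reproduced; the epsilon and inverse-spread weighting are assumptions.

    # Hedged interpretation of a median-based multiview depth loss with
    # uncertainty weighting; epsilon and weighting choices are assumptions.
    import torch

    def median_depth_loss(rendered_depth, mvs_depths, eps=1e-4):
        """rendered_depth: (H, W) depth rendered from the Gaussians.
        mvs_depths: (V, H, W) MVS depth estimates from V views, aligned to the
        reference view, with invalid pixels set to NaN."""
        median = torch.nanmedian(mvs_depths, dim=0).values                    # robust multiview consensus
        spread = torch.nanmedian((mvs_depths - median).abs(), dim=0).values   # MAD as uncertainty proxy
        weight = 1.0 / (spread + eps)                                         # trust low-spread pixels more
        valid = ~torch.isnan(median)
        rel_err = (rendered_depth - median).abs() / (median + eps)            # relative depth error
        return (weight[valid] * rel_err[valid]).mean()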