44-Issue 7
Browsing 44-Issue 7 by Issue Date
Now showing 1 - 20 of 49
Item: Procedural Multiscale Geometry Modeling using Implicit Functions (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Venu, Bojja; Bosak, Adam; Padrón-Griffe, Juan Raúl; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Materials exhibit geometric structures across mesoscopic to microscopic scales, influencing macroscale properties such as appearance, mechanical strength, and thermal behavior. Capturing and modeling these multiscale structures is challenging but essential for computer graphics, engineering, and materials science. We present a framework inspired by hypertexture methods, using implicit surfaces and sphere tracing to synthesize multiscale structures on the fly without precomputation. This framework models volumetric materials with particulate, fibrous, porous, and laminar structures, allowing control over size, shape, density, distribution, and orientation. We enhance structural diversity by superimposing implicit periodic functions while improving computational efficiency. The framework also supports spatially varying particulate media, particle agglomeration, and piling on convex and concave structures, such as rock formations (mesoscale), without explicit simulation. We demonstrate its potential for appearance modeling of volumetric materials and investigate how spatially varying properties affect the perceived macroscale appearance. As a proof of concept, we show that microstructures created by our framework can be reconstructed from image and distance values defined by implicit surfaces, using both first-order and gradient-free optimization methods.
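The sphere-tracing loop underlying this framework is compact enough to sketch. Below is a minimal generic sphere tracer in Python, assuming a scalar `sdf` callable; it illustrates the technique, not the authors' implementation:

```python
import numpy as np

def sphere_trace(origin, direction, sdf, t_max=100.0, eps=1e-4, max_steps=256):
    """March along a ray, stepping by the SDF value (the largest safe step)."""
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * direction
        d = sdf(p)              # distance to the nearest implicit surface
        if d < eps:             # close enough: report a hit at parameter t
            return t
        t += d                  # sphere-trace step
        if t > t_max:
            break
    return None                 # no intersection within t_max

# Example: a unit sphere as the implicit surface
hit = sphere_trace(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]),
                   lambda p: np.linalg.norm(p) - 1.0)
```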
Item: Multimodal 3D Few-Shot Classification via Gaussian Mixture Discriminant Analysis (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Wu, Yiqi; Wu, Huachao; Hu, Ronglei; Chen, Yilin; Zhang, Dejun; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
While pre-trained 3D vision-language models are becoming increasingly available, there remains a lack of frameworks that can effectively harness their capabilities for few-shot classification. In this work, we propose PointGMDA, a training-free framework that combines Gaussian Mixture Models (GMMs) with Gaussian Discriminant Analysis (GDA) to perform robust classification using only a few labeled point cloud samples. Our method estimates GMM parameters per class from support data and computes mixture-weighted prototypes, which are then used in GDA with a shared covariance matrix to construct decision boundaries. This formulation allows us to model intra-class variability more expressively than traditional single-prototype approaches, while maintaining analytical tractability. To incorporate semantic priors, we integrate CLIP-style textual prompts and fuse predictions from geometric and textual modalities through a hybrid scoring strategy. We further introduce PointGMDA-T, a lightweight attention-guided refinement module that learns residuals for fast feature adaptation, improving robustness under distribution shift. Extensive experiments on ModelNet40 and ScanObjectNN demonstrate that PointGMDA outperforms strong baselines across a variety of few-shot settings, with consistent gains under both training-free and fine-tuned conditions. These results highlight the effectiveness and generality of our probabilistic modeling and multimodal adaptation framework. Our code is publicly available at https://github.com/djzgroup/PointGMDA.
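For readers unfamiliar with the recipe the abstract describes, here is a minimal numpy sketch of mixture-weighted prototypes scored under a shared covariance; the names (`mixture_prototype`, `gda_classify`) are illustrative and do not come from the released code:

```python
import numpy as np

def mixture_prototype(component_means, weights):
    """Collapse a per-class GMM into one mixture-weighted prototype."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return (w[:, None] * np.asarray(component_means)).sum(axis=0)

def gda_classify(x, prototypes, shared_cov):
    """Assign x to the class whose prototype is nearest in Mahalanobis
    distance under a single covariance matrix shared by all classes."""
    prec = np.linalg.inv(shared_cov)
    dists = [(x - mu) @ prec @ (x - mu) for mu in prototypes]
    return int(np.argmin(dists))
```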
Item: ClothingTwin: Reconstructing Inner and Outer Layers of Clothing Using 3D Gaussian Splatting (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Jung, Munkyung; Lee, Dohae; Lee, In-Kwon; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
We introduce ClothingTwin, a novel end-to-end framework for reconstructing 3D digital twins of clothing that capture both the outer and inner fabric, without the need for manual mannequin removal. Traditional 2D "ghost mannequin" photography techniques remove the mannequin and composite partial inner textures to create images in which the garment appears as if it were worn by a transparent model. However, extending such methods to photorealistic 3D Gaussian Splatting (3DGS) is far more challenging: achieving consistent inner-layer compositing across the large sets of images used for 3DGS optimization quickly becomes impractical if done manually. To address these issues, ClothingTwin introduces three key innovations. First, a specialized image acquisition protocol captures two sets of images for each garment: one worn normally on the mannequin (outer layer exposed) and one worn inside-out (inner layer exposed). This eliminates the need to painstakingly edit out mannequins in thousands of images and provides full coverage of all fabric surfaces. Second, we employ a mesh-guided 3DGS reconstruction for each layer and leverage Non-Rigid Iterative Closest Point (ICP) to align the outer and inner point clouds despite their distinct geometries. Third, our enhanced rendering pipeline, featuring mesh-guided back-face culling, back-to-front alpha blending, and recalculated spherical harmonic angles, ensures photorealistic visualization of the combined outer and inner layers without inter-layer artifacts. Experimental evaluations on various garments show that ClothingTwin outperforms conventional 3DGS-based methods, and our ablation study validates the effectiveness of each proposed component.

Item: FlowCapX: Physics-Grounded Flow Capture with Long-Term Consistency (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Tao, Ningxiao; Zhang, Liru; Ni, Xingyu; Chu, Mengyu; Chen, Baoquan; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
We present FlowCapX, a physics-enhanced framework for flow reconstruction from sparse video inputs, addressing the challenge of jointly optimizing complex physical constraints and sparse observational data over long time horizons. Existing methods often struggle to capture turbulent motion while maintaining physical consistency, limiting reconstruction quality and downstream tasks. Focusing on velocity inference, our approach introduces a hybrid framework that strategically separates representation and supervision across spatial scales. At the coarse level, we resolve sparse-view ambiguities via a novel optimization strategy that aligns long-term observations with physics-grounded velocity fields. By emphasizing vorticity-based physical constraints, our method enhances physical fidelity and improves optimization stability. At the fine level, we prioritize observational fidelity to preserve critical turbulent structures. Extensive experiments demonstrate state-of-the-art velocity reconstruction, enabling velocity-aware downstream tasks such as accurate flow analysis, scene augmentation with tracer visualization, and re-simulation. Our implementation is released at https://github.com/taoningxiao/FlowCapX.git.
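The abstract does not spell out its vorticity constraints, but a typical building block is a finite-difference curl of the velocity field. The sketch below assumes a 2D collocated grid with rows indexing y and columns indexing x; the paper's actual discretization may differ:

```python
import numpy as np

def vorticity_2d(u, v, dx=1.0, dy=1.0):
    """Curl of a 2D velocity field (u, v) sampled on a regular grid:
    omega = dv/dx - du/dy, via central differences."""
    dv_dx = np.gradient(v, dx, axis=1)   # derivative along x (columns)
    du_dy = np.gradient(u, dy, axis=0)   # derivative along y (rows)
    return dv_dx - du_dy

# A vorticity-based penalty could then compare against a target field:
# loss = np.mean((vorticity_2d(u, v) - omega_target) ** 2)
```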
Item: View-Independent Wire Art Modeling via Manifold Fitting (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Huang, HuiGuang; Wu, Dong-Yi; Wang, Yulin; Cao, Yu; Lee, Tong-Yee; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
This paper presents a novel, fully automated method for generating view-independent abstract wire art from 3D models. The main challenge in creating line art is to strike a balance among abstraction, structural clarity, 3D perception, and consistent aesthetics from different viewpoints. Existing approaches, such as extracting wire art from meshes or reconstructing it from pictures, tend to produce unorganized, cumbersome wires and can typically guarantee the intended appearance only from specific viewpoints. To overcome these problems, we propose a paradigm shift: instead of predicting line segments directly, we treat the generation of wire art as an optimization-driven manifold-fitting problem. This lets us abstract the 3D model while retaining the key properties necessary for appealing line art, including structural topology and connectivity, and maintain the three-dimensionality of the line art across multiple perspectives. Experimental results show that our view-independent method outperforms previous methods in terms of line simplicity, shape fidelity, and visual consistency.

Item: FlatCAD: Fast Curvature Regularization of Neural SDFs for CAD Models (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Yin, Haotian; Plocharski, Aleksander; Wlodarczyk, Michal Jan; Kida, Mikolaj; Musialski, Przemyslaw; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Neural signed-distance fields (SDFs) are a versatile backbone for neural geometry representation, but enforcing CAD-style developability usually requires Gaussian-curvature penalties with full Hessian evaluation and second-order differentiation, which are costly in memory and time. We introduce an off-diagonal Weingarten loss that regularizes only the mixed shape-operator term, which captures the gap between principal curvatures and flattens the surface. We present two variants: a finite-difference version using six SDF evaluations plus one gradient, and an auto-diff version using a single Hessian-vector product. Both converge to the exact mixed term and preserve the intended geometric properties without assembling the full Hessian. On the ABC benchmarks the losses match or exceed Hessian-based baselines while cutting GPU memory and training time by roughly a factor of two. The method is drop-in and framework-agnostic, enabling scalable curvature-aware SDF learning for engineering-grade shape reconstruction. Our code is available at https://flatcad.github.io/.
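The mixed shape-operator term is essentially a mixed second derivative of the SDF along two tangent directions. A standard 4-point central-difference stencil for that quantity is sketched below; this is the textbook stencil, not the paper's exact six-evaluation scheme:

```python
def mixed_second_derivative(sdf, x, u, v, h=1e-3):
    """Central-difference estimate of the mixed derivative f_uv of a
    scalar field `sdf` at point x, along two orthonormal tangent
    directions u and v (numpy arrays)."""
    return (sdf(x + h*u + h*v) - sdf(x + h*u - h*v)
            - sdf(x - h*u + h*v) + sdf(x - h*u - h*v)) / (4.0 * h * h)

# An off-diagonal curvature penalty could average |f_uv|^2 over surface
# samples, with u, v spanning the tangent plane of the zero level set.
```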
Item: Gaussians on their Way: Wasserstein-Constrained 4D Gaussian Splatting with State-Space Modeling (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Deng, Junli; Shi, Ping; Luo, Yihao; Li, Qipei; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Dynamic scene rendering has taken a leap forward with the rise of 4D Gaussian Splatting, but one elusive challenge remains: how to make 3D Gaussians move through time as naturally as they would in the real world, all while keeping the motion smooth and consistent. In this paper, we present an approach that blends state-space modeling with Wasserstein geometry, enabling a more fluid and coherent representation of dynamic scenes. We introduce a State Consistency Filter that merges prior predictions with current observations, enabling Gaussians to maintain coherent trajectories over time. We also employ a Wasserstein Consistency Constraint to ensure smooth, consistent updates of Gaussian parameters, reducing motion artifacts. Lastly, we leverage Wasserstein geometry to capture both translational motion and shape deformations, creating a more geometrically consistent model for dynamic scenes. Our approach models the evolution of Gaussians along geodesics on the manifold of Gaussian distributions, achieving smoother, more realistic motion and stronger temporal coherence. Experimental results show consistent improvements in rendering quality and efficiency.
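The closed form for the 2-Wasserstein distance between Gaussians, which underlies such Wasserstein constraints, is easy to state in code. A generic sketch using scipy (not the paper's implementation):

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2_squared(m1, S1, m2, S2):
    """Squared 2-Wasserstein distance between N(m1, S1) and N(m2, S2):
    ||m1 - m2||^2 + tr(S1 + S2 - 2 (S2^1/2 S1 S2^1/2)^1/2)."""
    s2h = sqrtm(S2)
    cross = sqrtm(s2h @ S1 @ s2h)
    bures = np.trace(S1 + S2 - 2.0 * np.real(cross))  # Bures metric term
    return float(np.sum((m1 - m2) ** 2) + bures)
```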
Item: GS-Share: Enabling High-fidelity Map Sharing with Incremental Gaussian Splatting (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Zhang, Xinran; Zhu, Hanqi; Duan, Yifan; Zhang, Yanyong; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Constructing and sharing 3D maps is essential for many applications, including autonomous driving and augmented reality. Recently, 3D Gaussian splatting has emerged as a promising approach for accurate 3D reconstruction. However, a practical map-sharing system that features high fidelity, continuous updates, and network efficiency remains elusive. To address these challenges, we introduce GS-Share, a photorealistic map-sharing system with a compact representation. The core of GS-Share includes anchor-based global map construction, virtual-image-based map enhancement, and incremental map update. We evaluate GS-Share against state-of-the-art methods, demonstrating that our system achieves higher fidelity, particularly for extrapolated views, with improvements of 11%, 22%, and 74% in PSNR, LPIPS, and Depth L1, respectively. Furthermore, GS-Share is significantly more compact, reducing map transmission overhead by 36%.

Item: Joint Deblurring and 3D Reconstruction for Macrophotography (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Zhao, Yifan; Li, Liangchen; Zhou, Yuqi; Wang, Kai; Liang, Yan; Zhang, Juyong; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Macro lenses offer high resolution and large magnification, and 3D modeling of small, detailed objects can provide richer information. However, defocus blur in macrophotography is a long-standing problem that heavily hinders clear imaging of captured objects and their high-quality 3D reconstruction. Traditional image deblurring methods require a large number of images and annotations, and there is currently no multi-view 3D reconstruction method for macrophotography. In this work, we propose a joint deblurring and 3D reconstruction method for macrophotography. Starting from captured multi-view blurry images, we jointly optimize the clear 3D model of the object and the defocus blur kernel of each pixel. The entire framework adopts differentiable rendering to self-supervise the optimization of the 3D model and the defocus blur kernels. Extensive experiments show that from a small number of multi-view images, our proposed method not only achieves high-quality image deblurring but also recovers a high-fidelity 3D appearance.

Item: High-Performance Elliptical Cone Tracing (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Emre, Umut; Kanak, Aryan; Steinberg, Shlomi; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
In this work, we discuss elliptical cone traversal in scenes that employ typical triangular meshes. We derive accurate and numerically stable intersection tests for an elliptical conic frustum against an AABB, a plane, an edge, and a triangle, and analyze the performance of elliptical cone tracing when using different acceleration data structures: SAH-based k-d trees, BVHs, and a modern 8-wide BVH variant adapted for cone tracing, comparing against ray tracing. In addition, several cone traversal algorithms are analyzed, and we develop novel heuristics and optimizations that give better performance than previous traversal approaches. The results highlight the difference in performance characteristics between rays and cones and serve to guide the design of acceleration data structures for applications that employ cone tracing.

Item: GNF: Gaussian Neural Fields for Multidimensional Signal Representation and Reconstruction (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Bouzidi, Abelaziz; Laga, Hamid; Wannous, Hazem; Sohel, Ferdous; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Neural fields have emerged as a powerful framework for representing continuous multidimensional signals such as images and videos, 3D and 4D objects and scenes, and radiance fields. However, achieving a high-quality representation requires wide and deep neural networks, which are slow to train and evaluate. Although several acceleration techniques have been proposed, they either trade memory for faster training and/or inference, rely on thousands of fitted primitives with considerable optimization time, or compromise the smooth, continuous nature of neural fields. In this paper, we introduce Gaussian Neural Fields (GNF), a novel compact neural decoder that maps learned feature grids into continuous non-linear signals, such as RGB images, Signed Distance Functions (SDFs), and radiance fields, using a single compact layer of Gaussian kernels defined in a high-dimensional feature space. Our key observation is that neurons in traditional MLPs perform simple computations, usually a dot product followed by an activation function, necessitating wide and deep MLPs or high-resolution feature grids to model complex functions. We show that replacing MLP-based decoders with Gaussian kernels whose centers are learned features yields highly accurate representations of 2D (RGB), 3D (geometry), and 5D (radiance field) signals with just a single layer of such kernels. This representation is highly parallelizable, operates on low-resolution grids, and trains in under 15 seconds for 3D geometry and under 11 minutes for view synthesis. GNF matches the accuracy of deep MLP-based decoders with far fewer parameters and significantly higher inference throughput. The source code is publicly available at https://grbfnet.github.io/.
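A single layer of Gaussian kernels over feature space, as the GNF abstract describes, can be sketched in a few lines of numpy; the parameter layout here (`centers`, `sigmas`, `weights`) is illustrative, not GNF's actual API:

```python
import numpy as np

def gaussian_decoder(features, centers, sigmas, weights):
    """One layer of Gaussian kernels in feature space:
    out(x) = sum_i w_i * exp(-||phi(x) - c_i||^2 / (2 sigma_i^2)).
    features: (N, D) interpolated grid features phi(x)
    centers: (K, D); sigmas: (K,); weights: (K, C) output weights."""
    d2 = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (N, K)
    k = np.exp(-d2 / (2.0 * sigmas[None, :] ** 2))                    # (N, K)
    return k @ weights                                                # (N, C)
```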
Item: Computational Design of Body-Supporting Assemblies (The Eurographics Association and John Wiley & Sons Ltd., 2025)
He, Yixuan; Chen, Rulin; Deng, Bailin; Song, Peng; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
A body-supporting assembly is an assembly of parts that physically supports a human body during activities like sitting, lying, or leaning. Such an assembly has a complex global shape to support a specific human body posture, yet each component part has a relatively simple geometry to facilitate fabrication, storage, and maintenance. In this paper, we aim to model and design a personalized body-supporting assembly that fits a given human body posture and is comfortable to use. We choose to model a body-supporting assembly from scratch to offer high flexibility for fitting a given body posture, which, however, makes it challenging to determine the assembly's topology and geometry. To address this problem, we classify parts in the assembly into two categories according to their functionality: supporting parts that fit different portions of the body, and connecting parts that connect all the supporting parts to form a stable structure. We also propose a geometric representation of supporting parts such that they can take a variety of shapes controlled by a few parameters. Given a body posture as input, we present a computational approach for designing a body-supporting assembly that fits the posture, in which the supporting parts are initialized and optimized to minimize a discomfort measure, and the connecting parts are then generated using a procedural approach. We demonstrate the effectiveness of our approach by designing body-supporting assemblies that accommodate a variety of body postures and by 3D printing two of them for physical validation.

Item: Swept Volume Computation with Enhanced Geometric Detail Preservation (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Wang, Pengfei; Yang, Yuexin; Chen, Shuangmin; Xin, Shiqing; Tu, Changhe; Wang, Wenping; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Swept volume computation, the determination of regions occupied by moving objects, is essential in graphics, robotics, and manufacturing. Existing approaches either explicitly track surfaces, suffering from robustness issues under complex interactions, or employ implicit representations that trade off geometric fidelity and face optimization difficulties. We propose a novel inversion of the motion perspective: rather than tracking object motion, we fix the object and trace spatial points backward in time, reducing complex trajectories to efficiently linearizable point motions. Based on this, we introduce a multi-field tetrahedral framework that maintains multiple distance fields per element, preserving fine geometric details at trajectory intersections where single-field methods fail. Our method robustly computes swept volumes for diverse motions, including translations and screw motions, and enables practical applications in path planning and collision detection.
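The backward-tracing idea can be illustrated with a brute-force variant: fix a query point, pull it through the inverse motion, and take the minimum object distance over time. The paper linearizes these point motions per tetrahedral element; the dense time sampling below is only a conceptual sketch:

```python
import numpy as np

def swept_sdf(x, sdf_object, motion_inv, n_samples=64):
    """Distance of point x to the swept volume: trace x backward through
    the inverse motion and take the minimum object distance over time.
    motion_inv(t) returns a 4x4 matrix mapping world space into the
    object's local frame at time t in [0, 1] (hypothetical interface)."""
    xh = np.append(x, 1.0)                       # homogeneous coordinates
    ts = np.linspace(0.0, 1.0, n_samples)
    return min(sdf_object((motion_inv(t) @ xh)[:3]) for t in ts)
```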
Item: TopoGen: Topology-Aware 3D Generation with Persistence Points (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Hu, Jiangbei; Fei, Ben; Xu, Baixin; Hou, Fei; Wang, Shengfa; Lei, Na; Yang, Weidong; Qian, Chen; He, Ying; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Topological properties play a crucial role in the analysis, reconstruction, and generation of 3D shapes. Yet most existing research focuses primarily on geometric features, due to the lack of effective representations for topology. In this paper, we introduce TopoGen, a method that extracts both discrete and continuous topological descriptors, Betti numbers and persistence points, using persistent homology. These features provide robust characterizations of 3D shapes in terms of their topology. We incorporate them as conditional guidance in generative models for 3D shape synthesis, enabling topology-aware generation from diverse inputs such as sparse and partial point clouds, as well as sketches. Furthermore, by modifying persistence points, we can explicitly control and alter the topology of generated shapes. Experimental results demonstrate that TopoGen enhances both diversity and controllability in 3D generation by embedding global topological structure into the synthesis process.

Item: LucidFusion: Reconstructing 3D Gaussians with Arbitrary Unposed Images (The Eurographics Association and John Wiley & Sons Ltd., 2025)
He, Hao; Liang, Yixun; Wang, Luozhou; Cai, Yuanhao; Xu, Xinli; Guo, Haoxiang; Wen, Xiang; Chen, Yingcong; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Recent large reconstruction models have made notable progress in generating high-quality 3D objects from single images. However, current reconstruction methods often rely on explicit camera pose estimation or fixed viewpoints, restricting their flexibility and practical applicability. We reformulate 3D reconstruction as image-to-image translation and introduce the Relative Coordinate Map (RCM), which aligns multiple unposed images to a "main" view without pose estimation. While RCM simplifies the process, its lack of global 3D supervision can yield noisy outputs. To address this, we propose Relative Coordinate Gaussians (RCG) as an extension of RCM, which treats each pixel's coordinates as a Gaussian center and employs differentiable rasterization for consistent geometry and pose recovery. Our LucidFusion framework handles an arbitrary number of unposed inputs, producing robust 3D reconstructions within seconds and paving the way for more flexible, pose-free 3D pipelines.

Item: A Solver-Aided Hierarchical Language for LLM-Driven CAD Design (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Jones, Ben T.; Zhang, Zihan; Hähnlein, Felix; Matusik, Wojciech; Ahmad, Maaz; Kim, Vladimir; Schulz, Adriana; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Parametric CAD systems use domain-specific languages (DSLs) to represent geometry as programs, enabling both flexible modeling and structured editing. With the rise of large language models (LLMs), there is growing interest in generating such programs from natural language. This raises a key question: what kind of DSL best supports both CAD generation and editing, whether performed by a human or an AI? In this work, we introduce AIDL, a hierarchical, solver-aided DSL designed to align with the strengths of LLMs while remaining interpretable and editable by humans. AIDL enables high-level reasoning by breaking problems into abstract components and structural relationships, while offloading low-level geometric reasoning to a constraint solver. We evaluate AIDL in a 2D text-to-CAD setting using a zero-shot prompt-based interface and compare it to OpenSCAD, a widely used CAD DSL that appears in LLM training data. AIDL produces results that are visually competitive and significantly easier to edit. Our findings suggest that language design is a powerful complement to model training and prompt engineering for building collaborative AI-human tools in CAD. Code is available at https://github.com/deGravity/aidl.
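BoxFusion's association step (listed after this entry group) relies on 3D IoU and non-maximum suppression. A minimal axis-aligned sketch follows; the actual method fuses oriented boxes with particle-filter optimization, which this does not attempt:

```python
import numpy as np

def iou_aabb_3d(a, b):
    """IoU of two axis-aligned 3D boxes, each given as (min_xyz, max_xyz)."""
    lo = np.maximum(a[0], b[0])
    hi = np.minimum(a[1], b[1])
    inter = np.prod(np.clip(hi - lo, 0.0, None))   # overlap volume
    vol = lambda box: np.prod(box[1] - box[0])
    return inter / (vol(a) + vol(b) - inter + 1e-9)

def nms_3d(boxes, scores, iou_thresh=0.5):
    """Greedy 3D non-maximum suppression; returns indices of kept boxes."""
    order = np.argsort(scores)[::-1]               # highest score first
    keep = []
    for i in order:
        if all(iou_aabb_3d(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep
```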
Item: PARC: A Two-Stage Multi-Modal Framework for Point Cloud Completion (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Cai, Yujiao; Su, Yuhao; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Point cloud completion is vital for accurate 3D reconstruction, yet real-world scans frequently exhibit large structural gaps that compromise recovery. Meanwhile, in 2D vision, VAR (Visual Auto-Regression) has demonstrated that coarse-to-fine "next-scale prediction" can significantly improve generation quality, inference speed, and generalization. Because this coarse-to-fine approach closely aligns with the progressive nature of filling missing geometry in point clouds, we were inspired to develop PARC (Patch-Aware Coarse-to-Fine Refinement Completion), a two-stage multimodal framework specifically designed for handling missing structures. In the pretraining stage, PARC leverages complete point clouds alongside a Patch-Aware Coarse-to-Fine Refinement (PAR) strategy and a Mixture-of-Experts (MoE) architecture to generate high-quality local fragments, thereby improving geometric structure understanding and feature representation quality. During finetuning, the model is adapted to partial scans, further enhancing its resilience to incomplete inputs. To address remaining uncertainties in areas with missing structure, we introduce a dual-branch architecture that incorporates image cues: point cloud and image features are extracted independently and then fused via the MoE with an alignment loss, allowing complementary modalities to guide reconstruction in occluded or missing regions. Experiments conducted on the ShapeNet-ViPC dataset show that PARC achieves highly competitive performance. Code is available at https://github.com/caiyujiaocyj/PARC.

Item: BoxFusion: Reconstruction-Free Open-Vocabulary 3D Object Detection via Real-Time Multi-View Box Fusion (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Lan, Yuqing; Zhu, Chenyang; Gao, Zhirui; Zhang, Jiazhao; Cao, Yihan; Yi, Renjiao; Wang, Yijie; Xu, Kai; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Open-vocabulary 3D object detection has gained significant interest due to its critical applications in autonomous driving and embodied AI. Existing detection methods, whether offline or online, typically rely on dense point cloud reconstruction, which imposes substantial computational overhead and memory constraints, hindering real-time deployment in downstream tasks. To address this, we propose a novel reconstruction-free online framework tailored for memory-efficient and real-time 3D detection. Specifically, given streaming posed RGB-D video input, we leverage Cubify Anything as a pre-trained visual foundation model (VFM) for single-view 3D object detection, coupled with CLIP to capture open-vocabulary semantics of detected objects. To fuse the bounding boxes detected across different views into unified instances, we employ an association module to establish multi-view correspondences and an optimization module to fuse the 3D bounding boxes of the same instance. The association module utilizes 3D Non-Maximum Suppression (NMS) and a box correspondence matching module. The optimization module uses an IoU-guided efficient random optimization technique based on particle filtering to enforce multi-view consistency of the 3D bounding boxes while minimizing computational complexity. Extensive experiments on the CA-1M and ScanNetV2 datasets demonstrate that our method achieves state-of-the-art performance among online methods. Benefiting from this novel reconstruction-free paradigm for 3D object detection, our method exhibits strong generalization across various scenarios, enabling real-time perception even in environments exceeding 1000 square meters.

Item: Self-Supervised Humidity-Controllable Garment Simulation via Capillary Bridge Modeling (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Shi, Min; Wang, Xinran; Zhang, Jia-Qi; Gao, Lin; Zhu, Dengming; Zhang, Hongyan; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Simulating wet clothing remains a significant challenge due to the complex physical interactions between moist fabric and the human body, compounded by the lack of dedicated datasets for training data-driven models. Existing self-supervised approaches struggle to capture moisture-induced dynamics such as skin adhesion, anisotropic surface resistance, and non-linear wrinkling, leading to limited accuracy and efficiency. To address this, we present SHGS, a novel self-supervised framework for humidity-controllable clothing simulation grounded in the physical modeling of the capillary bridges that form between fabric and skin. We abstract the forces induced by wetness into two physically motivated components: a normal adhesive force derived from Laplace pressure and a tangential shear-resistance force that opposes relative motion along the fabric surface. By formulating these forces as potential energy for conservative effects and as mechanical work for non-conservative effects, we construct a physics-consistent wetness loss. This enables self-supervised training without requiring labeled data of wet clothing. Our humidity-sensitive dynamics are driven by a multi-layer graph neural network, which facilitates a smooth and physically realistic transition between different moisture levels. This architecture decouples the garment's dynamics in wet and dry states through a local weight interpolation mechanism, adjusting the fabric's behavior in response to varying humidity conditions. Experiments demonstrate that SHGS outperforms existing methods in both visual fidelity and computational efficiency, marking a significant advancement in realistic wet-cloth simulation.
Item: Preconditioned Deformation Grids (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Kaltheuner, Julian; Oebel, Alexander; Droege, Hannah; Stotko, Patrick; Klein, Reinhard; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Dynamic surface reconstruction of objects from point cloud sequences is a challenging field in computer graphics. Existing approaches either require multiple regularization terms or extensive training data, which lead to compromises in reconstruction accuracy, over-smoothing, or poor generalization to unseen objects and motions. To address these limitations, we introduce Preconditioned Deformation Grids, a novel technique for estimating coherent deformation fields directly from unstructured point cloud sequences without requiring or forming explicit correspondences. Key to our approach is the use of multi-resolution voxel grids that capture the overall motion at varying spatial scales, enabling a more flexible deformation representation. In conjunction with grid-based Sobolev preconditioning of the gradient-based optimization, we show that applying a Chamfer loss between the input point clouds and an evolving template mesh is sufficient to obtain accurate deformations. To ensure temporal consistency along the object surface, we include a weak isometry loss on mesh edges, which complements the main objective without constraining deformation fidelity. Extensive evaluations demonstrate that our method achieves superior results, particularly for long sequences, compared to state-of-the-art techniques.
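The Chamfer loss that drives this optimization is simple to state. Below is a dense numpy sketch suitable for small point sets; the paper pairs it with Sobolev preconditioning, which is omitted here:

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer distance between point sets P (N, 3) and Q (M, 3):
    mean squared nearest-neighbor distance, accumulated in both directions."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)   # (N, M) pairwise
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```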