43-Issue 8
Browsing 43-Issue 8 by Issue Date
Now showing 1 - 20 of 23

Item: PartwiseMPC: Interactive Control of Contact-Guided Motions (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Khoshsiyar, Niloofar; Gou, Ruiyu; Zhou, Tianhong; Andrews, Sheldon; Panne, Michiel van de; Skouras, Melina; Wang, He
Physics-based character motions remain difficult to create and control. We make two contributions towards simpler specification and faster generation of physics-based control. First, we introduce a novel partwise model predictive control (MPC) method that exploits independent planning for body parts when this proves beneficial, while defaulting to whole-body motion planning when that proves to be more effective. Second, we introduce a new approach to motion specification, based on specifying an ordered set of contact keyframes. These each specify a small number of pairwise contacts between the body and the environment, and serve as loose specifications of motion strategies. Unlike regular keyframes or traditional trajectory optimization constraints, they are heavily under-constrained and have flexible timing. We demonstrate a range of challenging contact-rich motions that can be generated online at interactive rates using this framework. We further show the generalization capabilities of the method. (A sketch of this contact-keyframe structure appears below, after the next two entries.)

Item: Learning to Move Like Professional Counter-Strike Players (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Durst, David; Xie, Feng; Sarukkai, Vishnu; Shacklett, Brennan; Frosio, Iuri; Tessler, Chen; Kim, Joohwan; Taylor, Carly; Bernstein, Gilbert; Choudhury, Sanjiban; Hanrahan, Pat; Fatahalian, Kayvon; Skouras, Melina; Wang, He
In multiplayer, first-person shooter games like Counter-Strike: Global Offensive (CS:GO), coordinated movement is a critical component of high-level strategic play. However, the complexity of team coordination and the variety of conditions present in popular game maps make it impractical to author hand-crafted movement policies for every scenario. We show that it is possible to take a data-driven approach to creating human-like movement controllers for CS:GO. We curate a team movement dataset comprising 123 hours of professional game play traces, and use this dataset to train a transformer-based movement model that generates human-like team movement for all players in a ''Retakes'' round of the game. Importantly, the movement prediction model is efficient. Performing inference for all players takes less than 0.5 ms per game step (amortized cost) on a single CPU core, making it plausible for use in commercial games today. Human evaluators assess that our model behaves more like humans than both commercially-available bots and procedural movement controllers scripted by experts (16% to 59% higher by TrueSkill rating of ''human-like''). Using experiments involving in-game bot vs. bot self-play, we demonstrate that our model performs simple forms of teamwork, makes fewer common movement mistakes, and yields movement distributions, player lifetimes, and kill locations similar to those observed in professional CS:GO match play.

Item: Generalized eXtended Finite Element Method for Deformable Cutting via Boolean Operations (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Ton-That, Quoc-Minh; Kry, Paul G.; Andrews, Sheldon; Skouras, Melina; Wang, He
Traditional mesh-based methods for cutting deformable bodies rely on modifying the simulation mesh by deleting, duplicating, deforming or subdividing its elements. Unfortunately, such topological changes eventually lead to instability, reduced accuracy, or computational efficiency challenges. Hence, state-of-the-art algorithms favor the extended finite element method (XFEM), which decouples the cut geometry from the simulation mesh, allowing for stable and accurate cuts at an additional computational cost that is local to the cut region. However, in the 3-dimensional setting, current XFEM frameworks are limited by the cutting configurations that they support. In particular, intersecting cuts are either prohibited or require sophisticated special treatment. Our work presents a general XFEM formulation that is applicable to the 1-, 2-, and 3-dimensional setting without sacrificing the desirable properties of the method. In particular, we propose a generalized enrichment which supports multiple intersecting cuts of various degrees of non-linearity by leveraging recent advances in robust mesh-Boolean technology. This novel strategy additionally enables the analytic discontinuous integration schemes required to compute mass, force and elastic energy. We highlight the simplicity, expressivity and accuracy of our XFEM implementation across various scenarios in which intersecting cutting patterns are featured.
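
A minimal sketch of how the contact keyframes from the PartwiseMPC entry above could be represented. All names here (ContactPair, ContactKeyframe, MotionSpec) are our own illustrative assumptions, not the authors' API: each keyframe holds a few body-part/environment contact pairs plus a flexible timing window rather than an exact time.

```python
# Hypothetical data structure for under-constrained contact keyframes.
from dataclasses import dataclass, field


@dataclass
class ContactPair:
    body_part: str                         # e.g. "left_foot", "right_hand"
    env_point: tuple[float, float, float]  # world-space contact location


@dataclass
class ContactKeyframe:
    contacts: list[ContactPair]   # a small number of pairwise contacts
    t_min: float                  # earliest admissible contact time (seconds)
    t_max: float                  # latest admissible time (flexible timing)


@dataclass
class MotionSpec:
    """An ordered, heavily under-constrained motion strategy."""
    keyframes: list[ContactKeyframe] = field(default_factory=list)


# A cartwheel-like strategy sketched as two loose contact keyframes.
spec = MotionSpec(keyframes=[
    ContactKeyframe([ContactPair("left_hand", (0.5, 0.0, 0.0))], t_min=0.2, t_max=0.6),
    ContactKeyframe([ContactPair("right_foot", (1.2, 0.0, 0.0))], t_min=0.8, t_max=1.4),
])
print(len(spec.keyframes), "keyframes,",
      sum(len(k.contacts) for k in spec.keyframes), "contacts")
```

An MPC planner would treat each keyframe as a loose objective, free to realize the listed contacts anywhere inside its timing window.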

Item: Creating a 3D Mesh in A-pose from a Single Image for Character Rigging (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Lee, Seunghwan; Liu, C. Karen; Skouras, Melina; Wang, He
Learning-based methods for 3D content generation have shown great potential to create 3D characters from text prompts, videos, and images. However, current methods primarily focus on generating static 3D meshes, overlooking the crucial aspect of creating animatable 3D meshes. Directly using 3D meshes generated by existing methods to create underlying skeletons for animation presents many challenges, because the generated mesh might exhibit geometry artifacts or assume arbitrary poses that complicate the subsequent rigging process. This work proposes a new framework for generating a 3D animatable mesh from a single 2D image depicting the character. We do so by enforcing the generated 3D mesh to assume an A-pose, which can mitigate the geometry artifacts and facilitate the use of existing automatic rigging methods. Our approach aims to leverage the generative power of existing models across modalities without the need for new data or large-scale training. We evaluate the effectiveness of our framework with qualitative results, as well as ablation studies and quantitative comparisons with existing 3D mesh generation models.

Item: Pose-to-Motion: Cross-Domain Motion Retargeting with Pose Prior (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Zhao, Qingqing; Li, Peizhuo; Yifan, Wang; Sorkine-Hornung, Olga; Wetzstein, Gordon; Skouras, Melina; Wang, He
Creating plausible motions for a diverse range of characters is a long-standing goal in computer graphics. Current learning-based motion synthesis methods rely on large-scale motion datasets, which are often difficult if not impossible to acquire. On the other hand, pose data is more accessible, since static posed characters are easier to create and can even be extracted from images using recent advancements in computer vision. In this paper, we tap into this alternative data source and introduce a neural motion synthesis approach through retargeting, which generates plausible motion for characters that only have pose data by transferring motion from a single existing motion capture dataset of a drastically different character. Our experiments show that our method effectively combines the motion features of the source character with the pose features of the target character, and performs robustly with small or noisy pose datasets, ranging from a few artist-created poses to noisy poses estimated directly from images. Additionally, a user study indicated that a majority of participants found our retargeted motion to be more enjoyable to watch, more lifelike in appearance, and exhibiting fewer artifacts. Our code and dataset can be accessed here.

Item: Reactive Gaze during Locomotion in Natural Environments (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Melgaré, Julia K.; Rohmer, Damien; Musse, Soraia R.; Cani, Marie-Paule; Skouras, Melina; Wang, He
Animating gaze behavior is crucial for creating believable virtual characters, providing insights into their perception and interaction with the environment. In this paper, we present an efficient yet natural-looking gaze animation model applicable to real-time walking characters exploring natural environments. We address the challenge of dynamic gaze adaptation by combining findings from neuroscience with a data-driven saliency model. Specifically, our model determines gaze focus by considering the character's locomotion, environment stimuli, and terrain conditions. Our model is compatible with both automatic navigation through pre-defined character trajectories and user-guided interactive locomotion, and can be configured according to the desired degree of visual exploration of the environment. Our perceptual evaluation shows that our solution significantly improves on state-of-the-art saliency-based gaze animation with respect to the character's apparent awareness of the environment, the naturalness of the motion, and the elements to which it pays attention.

Item: A Multi-layer Solver for XPBD (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Mercier-Aubin, Alexandre; Kry, Paul G.; Skouras, Melina; Wang, He
We present a novel multi-layer method for extended position-based dynamics that exploits a sequence of reduced models consisting of rigid and elastic parts to speed up convergence. Taking inspiration from concepts like adaptive rigidification and long-range constraints, we automatically generate different rigid bodies at each layer based on the current strain rate. During the solve, the rigid bodies provide coupling between progressively less distant vertices during layer iterations, and therefore the fully elastic iterations at the final layer start from a lower residual error. Our layered approach likewise helps with the treatment of contact, where the mixed solves of both rigid and elastic parts in the layers permit fast propagation of impacts. We show several experiments that guide the selection of the solver's parameters, including the number of layers, the iterations per layer, and the choice of rigid patterns. Overall, our results show lower compute times for achieving a desired residual reduction across a variety of simulation models and scenarios.
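
To make the layer idea from the entry above concrete, here is a minimal, self-contained toy: a pinned hanging particle chain solved coarse-to-fine, where each coarse layer lumps contiguous groups of particles into translation-only rigid proxies before a final fully elastic pass. This is a simplified PBD-style sketch under our own assumptions (fixed group sizes instead of strain-rate-driven rigidification, no proxy rotation, no XPBD compliance bookkeeping), not the paper's solver.

```python
import numpy as np

N, rest, dt = 16, 0.1, 1.0 / 60.0
x = np.stack([np.zeros(N), -rest * np.arange(N)], axis=1)  # vertical chain
v = np.zeros_like(x)
w = np.ones(N); w[0] = 0.0                 # particle 0 is pinned (infinite mass)
gravity = np.array([0.0, -9.81])

def solve_layer(p, group):
    """One pass of distance constraints between adjacent groups of size
    `group`; each group translates as a unit (a simplification of true
    rigidification, which would also rotate)."""
    for g in range(N // group - 1):
        a = slice(g * group, (g + 1) * group)
        b = slice((g + 1) * group, (g + 2) * group)
        wa = 0.0 if np.any(w[a] == 0.0) else 1.0 / group   # lumped inverse mass
        wb = 0.0 if np.any(w[b] == 0.0) else 1.0 / group
        if wa + wb == 0.0:
            continue
        d = p[b].mean(axis=0) - p[a].mean(axis=0)          # centroid offset
        L = np.linalg.norm(d)
        if L == 0.0:
            continue
        C = L - group * rest               # centroids rest `group * rest` apart
        corr = (C / (wa + wb)) * (d / L)
        p[a] += wa * corr                  # move whole group a toward b
        p[b] -= wb * corr

for step in range(120):
    p = x + dt * v + dt * dt * gravity * w[:, None]  # predict (pinned: no gravity)
    for group in (8, 4, 2, 1):             # coarse-to-fine layers; 1 = fully elastic
        for _ in range(4):
            solve_layer(p, group)
    v = (p - x) / dt
    x = p
print("chain tip:", x[-1])
```

The coarse layers propagate the pin's influence down the chain in a few iterations, so the final elastic layer starts from a lower residual, which is the effect the paper exploits.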

Item: Long-term Motion In-betweening via Keyframe Prediction (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Hong, Seokhyeon; Kim, Haemin; Cho, Kyungmin; Noh, Junyong; Skouras, Melina; Wang, He
Motion in-betweening has emerged as a promising approach to enhance the efficiency of motion creation due to its flexibility and time performance. However, previous in-betweening methods are limited to generating short transitions, because pose ambiguity grows as the number of missing frames increases. This length-related constraint makes the optimization difficult, and it imposes a further constraint on the target pose, limiting the degrees of freedom available to artists. In this paper, we introduce a keyframe-driven approach that effectively solves the pose ambiguity problem, allowing robust in-betweening performance on various lengths of missing frames. To incorporate keyframe-driven motion synthesis, we introduce a keyframe score that measures the likelihood of a frame being used as a keyframe, as well as an adaptive keyframe selection method that maintains appropriate temporal distances between resulting keyframes. Additionally, we employ phase manifolds to further resolve the pose ambiguity and incorporate trajectory conditions to guide the approximate movement of the character. Comprehensive evaluations, encompassing both quantitative and qualitative analyses, were conducted to compare our method with state-of-the-art in-betweening approaches across various transition lengths. The code for the paper is available at https://github.com/seokhyeonhong/long-mib

Item: Garment Animation NeRF with Color Editing (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Wang, Renke; Zhang, Meng; Li, Jun; Yang, Jian; Skouras, Melina; Wang, He
Generating high-fidelity garment animations through traditional workflows, from modeling to rendering, is both tedious and expensive. These workflows often require repetitive steps in response to updates in character motion, rendering viewpoint changes, or appearance edits. Although recent neural rendering offers an efficient solution for computationally intensive processes, it struggles to render complex garment animations containing fine wrinkle details and realistic garment-and-body occlusions, while maintaining structural consistency across frames and dense view rendering. In this paper, we propose a novel approach to directly synthesize garment animations from body motion sequences without the need for an explicit garment proxy. Our approach infers garment dynamic features from body motion, providing a preliminary overview of garment structure. Simultaneously, we capture detailed features from synthesized reference images of the garment's front and back, generated by a pre-trained image model. These features are then used to construct a neural radiance field that renders the garment animation video. Additionally, our technique enables garment recoloring by decomposing its visual elements. We demonstrate the generalizability of our method across unseen body motions and camera views, ensuring detailed structural consistency. Furthermore, we showcase its applicability to color editing on both real and synthetic garment data. Compared to existing neural rendering techniques, our method exhibits qualitative and quantitative improvements in garment dynamics and wrinkle detail modeling. Code is available at https://github.com/wrk226/GarmentAnimationNeRF.
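
For context on the rendering step the Garment Animation NeRF entry above builds on, the sketch below shows standard radiance-field compositing along a single ray. This is the textbook NeRF quadrature, not the paper's implementation, and the toy density bump standing in for a garment surface is our assumption.

```python
import numpy as np

def composite(densities, colors, deltas):
    """Standard NeRF alpha compositing along one ray: densities (S,),
    colors (S, 3), deltas (S,) sample spacings -> RGB and total alpha."""
    alpha = 1.0 - np.exp(-densities * deltas)                      # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    weights = trans * alpha
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights.sum()

# Toy ray: 64 samples with a density bump mid-ray standing in for cloth.
S = 64
t = np.linspace(0.0, 1.0, S)
densities = 50.0 * np.exp(-((t - 0.5) ** 2) / 0.001)
colors = np.tile(np.array([0.8, 0.2, 0.2]), (S, 1))
rgb, a = composite(densities, colors, np.full(S, 1.0 / S))
print("rgb:", rgb, "alpha:", a)
```

In the paper's setting, the densities and colors would come from a network conditioned on the motion-derived dynamic features and the reference-image features, rather than from an analytic bump.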

Item: ADAPT: AI-Driven Artefact Purging Technique for IMU Based Motion Capture (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Schreiner, Paul; Netterstrøm, Rasmus; Yin, Hang; Darkner, Sune; Erleben, Kenny; Skouras, Melina; Wang, He
While IMU-based motion capture offers a cost-effective alternative to premium camera-based systems, it often falls short in matching the latter's realism. Common distortions, such as self-penetrating body parts, foot skating, and floating, limit the usability of these systems, particularly for high-end users. To address this, we employed reinforcement learning to train an AI agent that mimics erroneous sample motion. Because our agent operates within a simulated environment, it inherently avoids generating these distortions, as it must adhere to the laws of physics. Impressively, the agent manages to mimic the sample motions while preserving their distinctive characteristics. We assessed our method's efficacy across various types of input data, showcasing an ideal blend of artefact-laden IMU-based data with high-grade optical motion capture data. Furthermore, we compared the configuration of observation and action spaces with other implementations, pinpointing the most suitable configuration for our purposes. All our models underwent rigorous evaluation using a spectrum of quantitative metrics complemented by a qualitative review. These evaluations were performed using a benchmark dataset of IMU-based motion data from actors not included in the training data.

Item: Strongly Coupled Simulation of Magnetic Rigid Bodies (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Westhofen, Lukas; Fernández-Fernández, José Antonio; Jeske, Stefan Rhys; Bender, Jan; Skouras, Melina; Wang, He
We present a strongly coupled method for the robust simulation of linear magnetic rigid bodies. Our approach describes the magnetic effects as part of an incremental potential function. This potential is inserted into the reformulation of the equations of motion for rigid bodies as an optimization problem. For handling collision and friction, we lean on the Incremental Potential Contact (IPC) method. Furthermore, we provide a novel, hybrid explicit/implicit time integration scheme for the magnetic potential based on a distance criterion. This reduces the fill-in of the energy Hessian in cases where the change in magnetic potential energy is small, leading to a simulation speedup without compromising the stability of the system. The resulting system yields a strongly coupled method for the robust simulation of magnetic effects. We showcase the robustness in theory by analyzing the behavior of the magnetic attraction against the contact resolution. Furthermore, we display stability in practice by simulating exceedingly strong and arbitrarily shaped magnets. The results are free of artifacts like bouncing for time step sizes larger than with the equivalent weakly coupled approach. Finally, we showcase the utility of our method in different scenarios with complex joints and numerous magnets.

Item: Diffusion-based Human Motion Style Transfer with Semantic Guidance (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Hu, Lei; Zhang, Zihao; Ye, Yongjing; Xu, Yiwen; Xia, Shihong; Skouras, Melina; Wang, He
3D human motion style transfer is a fundamental problem in computer graphics and animation processing. Existing AdaIN-based methods necessitate datasets with balanced style distribution and content/style labels to train the clustered latent space. However, we may encounter a single unseen style example in practical scenarios, but not in sufficient quantity to constitute a style cluster for AdaIN-based methods. Therefore, in this paper, we propose a novel two-stage framework for few-shot style transfer learning based on the diffusion model. Specifically, in the first stage, we pre-train a diffusion-based text-to-motion model as a generative prior so that it can cope with various content motion inputs. In the second stage, based on the single style example, we fine-tune the pre-trained diffusion model in a few-shot manner to make it capable of style transfer. The key idea is to regard the reverse process of diffusion as a motion-style translation process, since motion styles can be viewed as special motion variations. During the fine-tuning for style transfer, a simple yet effective semantic-guided style transfer loss, coordinated with a style example reconstruction loss, is introduced to supervise the style transfer in CLIP semantic space. The qualitative and quantitative evaluations demonstrate that our method achieves state-of-the-art performance and has practical applications. The source code is available at https://github.com/hlcdyy/diffusion-based-motion-style-transfer.
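
A rough sketch of the two-term fine-tuning objective described in the style-transfer entry above: a reconstruction loss on the single style example plus a semantic style loss in an embedding space. The modules here (a Linear stand-in for the diffusion denoiser and a Linear stand-in for a CLIP-like motion encoder) are hypothetical placeholders for illustration only, not the authors' networks.

```python
import torch
import torch.nn.functional as F

D = 64                                    # motion feature size (assumed)
denoiser = torch.nn.Linear(D, D)          # stand-in for the diffusion denoiser
clip_embed = torch.nn.Linear(D, 32)       # stand-in for a CLIP-like motion encoder

style_example = torch.randn(1, D)         # the single style exemplar
content = torch.randn(8, D)               # content motions from the generative prior
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4)

for step in range(100):
    stylized = denoiser(content)          # reverse process read as style translation
    e_x = clip_embed(stylized)
    e_s = clip_embed(style_example).expand_as(e_x)
    # Semantic-guided style loss: match the exemplar's direction in the
    # semantic embedding space.
    sem = 1.0 - F.cosine_similarity(e_x, e_s, dim=-1).mean()
    # Style-example reconstruction loss: the exemplar must survive the pass.
    rec = F.mse_loss(denoiser(style_example), style_example)
    (sem + rec).backward()
    opt.step()
    opt.zero_grad()
```

The point of the pairing is that the semantic term pulls arbitrary content toward the exemplar's style while the reconstruction term anchors the exemplar itself, preventing the few-shot fine-tune from collapsing.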

Item: Reconstruction of Implicit Surfaces from Fluid Particles Using Convolutional Neural Networks (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Zhao, Chen; Shinar, Tamar; Schroeder, Craig; Skouras, Melina; Wang, He
In this paper, we present a novel network-based approach for reconstructing signed distance functions from fluid particles. The method uses a weighting kernel to transfer particles to a regular grid, which forms the input to a convolutional neural network. We propose a regression-based regularization to reduce surface noise without penalizing high-curvature features. The reconstruction exhibits improved spatial surface smoothness and temporal coherence compared with existing state-of-the-art surface reconstruction methods. The method is insensitive to particle sampling density and robustly handles thin features, isolated particles, and sharp edges.

Item: Eurographics/ACM SIGGRAPH Symposium on Computer Animation 2024 - CGF 43-8: Frontmatter (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Skouras, Melina; Wang, He

Item: Unerosion: Simulating Terrain Evolution Back in Time (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Yang, Zhanyu; Cordonnier, Guillaume; Cani, Marie-Paule; Perrenoud, Christian; Benes, Bedrich; Skouras, Melina; Wang, He
While the past of a terrain cannot be known precisely, because an effect can result from many different causes, exploring these possible pasts opens the way to numerous applications ranging from movies and games to paleogeography. We introduce unerosion, an attempt to recover plausible past topographies from an input terrain represented as a height field. Our solution relies on novel algorithms for the backward simulation of different processes: fluvial erosion, sedimentation, and thermal erosion. This is achieved by re-formulating the equations of erosion and sedimentation so that they can be simulated back in time. These algorithms can be combined to account for a succession of climate changes backward in time, while the possible ambiguities provide editing options to the user. Results show that our solution can approximately reverse different types of erosion while enabling users to explore a variety of alternative pasts. Using a chronology of climatic periods to inform us about the main erosion phenomena, we also went back in time using real measured terrain data. We checked the consistency with geological findings, namely the height of river beds hundreds of thousands of years ago.
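
The reversal idea in the Unerosion entry above can be illustrated on the simplest of the three processes, thermal erosion. In the naive 1D sketch below (our own simplification, which ignores the paper's handling of ambiguities), the backward step simply applies the forward downslope flux with a negative sign.

```python
import numpy as np

def thermal_step(h, dx=1.0, talus=0.6, k=0.5, forward=True):
    """One explicit thermal-erosion step on a 1D height profile. Material
    moves downslope wherever the slope magnitude exceeds the talus slope;
    running the same flux with a negative sign steps the profile back in
    time (a naive reading of the paper's time reversal)."""
    slope = (h[1:] - h[:-1]) / dx
    excess = np.sign(slope) * np.maximum(np.abs(slope) - talus, 0.0)
    flux = k * excess              # positive flux: cell i+1 is higher than cell i
    sgn = 1.0 if forward else -1.0
    out = h.copy()
    out[:-1] += sgn * flux * dx * 0.5   # lower cell receives material...
    out[1:] -= sgn * flux * dx * 0.5    # ...taken from the higher cell
    return out

h = np.array([0.0, 0.2, 1.5, 1.6, 0.3, 0.1])
past = thermal_step(thermal_step(h, forward=False), forward=False)  # two steps back
print(past)
```

Run backward, the step sharpens slopes that forward erosion would have smoothed; the paper's contribution is keeping this inverse problem stable and letting the user steer its inherent ambiguities, which this toy does not attempt.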

Item: Robust and Artefact-Free Deformable Contact with Smooth Surface Representations (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Du, Yinwei; Li, Yue; Coros, Stelian; Thomaszewski, Bernhard; Skouras, Melina; Wang, He
Modeling contact between deformable solids is a fundamental problem in computer animation, mechanical design, and robotics. Existing methods based on C0 discretizations (piecewise linear or polynomial surfaces) suffer from discontinuities and irregularities in tangential contact forces, which can significantly affect simulation outcomes and even prevent convergence. In this work, we show that these limitations can be overcome with a smooth surface representation based on Implicit Moving Least Squares (IMLS). In particular, we propose a self-collision detection scheme tailored to IMLS surfaces that enables robust and efficient handling of challenging self contacts. Through a series of test cases, we show that our approach offers advantages over existing methods in terms of accuracy and robustness for both forward and inverse problems.

Item: SketchAnim: Real-time Sketch Animation Transfer from Videos (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Rai, Gaurav; Gupta, Shreyas; Sharma, Ojaswa; Skouras, Melina; Wang, He
Animating hand-drawn sketches is an appealing art form, one that offers the animator expressive freedom but demands significant expertise. In this work, we introduce a novel sketch animation framework designed to address inherent challenges such as motion extraction, motion transfer, and occlusion. The framework takes an exemplar video input featuring a moving object and utilizes a robust motion transfer technique to animate the input sketch. We show comparative evaluations that demonstrate the superior performance of our method over existing sketch animation techniques. Notably, our approach exhibits a higher level of user accessibility than conventional sketch-based animation systems, positioning it as a promising contributor to the field of sketch animation. https://graphics-research-group.github.io/SketchAnim/

Item: LLAniMAtion: LLAMA Driven Gesture Animation (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Windle, Jonathan; Matthews, Iain; Taylor, Sarah; Skouras, Melina; Wang, He
Co-speech gesturing is an important modality in conversation, providing context and social cues. In character animation, appropriate and synchronised gestures add realism and can make interactive agents more engaging. Historically, methods for automatically generating gestures were predominantly audio-driven, exploiting the prosodic and speech-related content that is encoded in the audio signal. In this paper we instead experiment with using Large Language Model (LLM) features for gesture generation, extracted from text using LLAMA2. We compare against audio features, and explore combining the two modalities in both objective tests and a user study. Surprisingly, our results show that LLAMA2 features on their own perform significantly better than audio features, and that including both modalities yields no significant difference compared to using LLAMA2 features in isolation. We demonstrate that the LLAMA2-based model can generate both beat and semantic gestures without any audio input, suggesting LLMs can provide rich encodings that are well suited for gesture generation.
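
As a concrete reading of the feature-extraction step in the LLAniMAtion entry above, the sketch below pulls per-token hidden states from a causal language model with the Hugging Face transformers API. We substitute GPT-2 for the gated LLAMA2 checkpoint; the model choice and the use of the last hidden layer are our assumptions, not the paper's configuration.

```python
# Hedged sketch: per-token LLM features as conditioning for gesture synthesis.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")   # stand-in for LLAMA2 (gated model)
lm = AutoModel.from_pretrained("gpt2")

with torch.no_grad():
    ids = tok("okay so let's move over there", return_tensors="pt")
    out = lm(**ids, output_hidden_states=True)
    feats = out.hidden_states[-1]             # (1, T, 768) per-token features

print(feats.shape)  # a gesture decoder would consume these instead of audio
```

The paper's finding is that features like these, on their own, drive both beat and semantic gestures; audio is not required at inference.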

Item: Generating Flight Summaries Conforming to Cinematographic Principles (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Lino, Christophe; Cani, Marie-Paule; Skouras, Melina; Wang, He
We propose an automatic method for generating flight summaries of prescribed duration, given any planned 3D trajectory of a flying object. The challenge is to select relevant time ellipses while keeping, and adequately framing, the most interesting parts of the trajectory, and enforcing cinematographic rules between the selected shots. Our solution optimizes the visual quality of the output video in terms of both camera view and film editing choices, thanks to a new optimization technique designed to jointly optimize the selection of the interesting parts of a flight and the camera animation parameters over time. To the best of our knowledge, this solution is the first to address camera control, film editing, and trajectory summarizing at once. Ablation studies demonstrate the visual quality of the flight summaries we generate compared to alternative methods.

Item: VMP: Versatile Motion Priors for Robustly Tracking Motion on Physical Characters (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Serifi, Agon; Grandia, Ruben; Knoop, Espen; Gross, Markus; Bächer, Moritz; Skouras, Melina; Wang, He
Recent progress in physics-based character control has made it possible to learn policies from unstructured motion data. However, it remains challenging to train a single control policy that works with diverse and unseen motions, and can be deployed to real-world physical robots. In this paper, we propose a two-stage technique that enables the control of a character with a full-body kinematic motion reference, with a focus on imitation accuracy. In the first stage, we extract a latent space encoding by training a variational autoencoder, taking short windows of motion from unstructured data as input. We then use the embedding from the time-varying latent code to train a conditional policy in a second stage, providing a mapping from kinematic input to dynamics-aware output. By keeping the two stages separate, we benefit from self-supervised methods to get better latent codes and explicit imitation rewards to avoid mode collapse. We demonstrate the efficiency and robustness of our method in simulation, with unseen user-specified motions, and on a bipedal robot, where we bring dynamic motions to the real world.
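
To ground the first stage of VMP as described above, here is a minimal motion-window VAE in PyTorch. The window length, pose dimensionality, flat MLP encoder, and KL weight are illustrative assumptions, not the authors' architecture; only the overall scheme (VAE over short motion windows, latent code consumed by a downstream policy) follows the entry.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

W, J, Z = 10, 24 * 6, 32          # window length, per-frame pose dims, latent size

class MotionVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(W * J, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * Z))
        self.dec = nn.Sequential(nn.Linear(Z, 256), nn.ReLU(),
                                 nn.Linear(256, W * J))

    def forward(self, window):               # window: (B, W*J) flattened frames
        mu, logvar = self.enc(window).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        return self.dec(z), mu, logvar

vae = MotionVAE()
batch = torch.randn(16, W * J)               # stand-in for real motion windows
recon, mu, logvar = vae(batch)
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
loss = F.mse_loss(recon, batch) + 1e-3 * kl
print(float(loss))                           # stage two would condition on z
```

Keeping this stage self-supervised, as the entry notes, lets the latent space be trained on unstructured data before the second-stage policy is optimized with explicit imitation rewards.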