Browsing by Author "Casas, Dan"
Now showing 1 - 9 of 9
Item CEIG 2019: Frontmatter (Eurographics Association, 2019) Casas, Dan; Jarabo, Adrián; Casas, Dan and Jarabo, Adrián

Item Fully Convolutional Graph Neural Networks for Parametric Virtual Try-On (The Eurographics Association and John Wiley & Sons Ltd., 2020) Vidaurre, Raquel; Santesteban, Igor; Garces, Elena; Casas, Dan; Bender, Jan and Popa, Tiberiu
We present a learning-based approach for virtual try-on applications based on a fully convolutional graph neural network. In contrast to existing data-driven models, which are trained for a specific garment or mesh topology, our fully convolutional model can cope with a large family of garments, represented as parametric predefined 2D panels with arbitrary mesh topology, including long dresses, shirts, and tight tops. Under the hood, our novel geometric deep learning approach learns to drape 3D garments by decoupling the three different sources of deformation that condition the fit of clothing: garment type, target body shape, and material. Specifically, we first learn a regressor that predicts the 3D drape of the input parametric garment when worn by a mean body shape. Then, after a mesh topology optimization step where we generate a sufficient level of detail for the input garment type, we further deform the mesh to reproduce deformations caused by the target body shape. Finally, we predict fine-scale details such as wrinkles that depend mostly on the garment material. We qualitatively and quantitatively demonstrate that our fully convolutional approach outperforms existing methods in terms of generalization capabilities and memory requirements, and therefore it opens the door to more general learning-based models for virtual try-on applications.

Item HandFlow: Quantifying View-Dependent 3D Ambiguity in Two-Hand Reconstruction with Normalizing Flow (The Eurographics Association, 2022) Wang, Jiayi; Luvizon, Diogo; Mueller, Franziska; Bernard, Florian; Kortylewski, Adam; Casas, Dan; Theobalt, Christian; Bender, Jan; Botsch, Mario; Keim, Daniel A.
Reconstructing two-hand interactions from a single image is a challenging problem due to ambiguities that stem from projective geometry and heavy occlusions. Existing methods are designed to estimate only a single pose, despite the fact that there exist other valid reconstructions that fit the image evidence equally well. In this paper we propose to address this issue by explicitly modeling the distribution of plausible reconstructions in a conditional normalizing flow framework. This allows us to directly supervise the posterior distribution through a novel determinant magnitude regularization, which is key to obtaining varied 3D hand pose samples that project well into the input image. We also demonstrate that metrics commonly used to assess reconstruction quality are insufficient to evaluate pose predictions under such severe ambiguity. To address this, we release the first dataset with multiple plausible annotations per image, called MultiHands. The additional annotations enable us to evaluate the estimated distribution using the maximum mean discrepancy metric. Through this, we demonstrate the quality of our probabilistic reconstruction and show that explicit ambiguity modeling is better suited to this challenging problem.

Item How Will It Drape Like? Capturing Fabric Mechanics from Depth Images (The Eurographics Association and John Wiley & Sons Ltd., 2023) Rodriguez-Pardo, Carlos; Prieto-Martín, Melania; Casas, Dan; Garces, Elena; Myszkowski, Karol; Niessner, Matthias
We propose a method to estimate the mechanical parameters of fabrics using a casual capture setup with a depth camera. Our approach enables the creation of mechanically-correct digital representations of real-world textile materials, which is a fundamental step for many interactive design and engineering applications. As opposed to existing capture methods, which typically require expensive setups, video sequences, or manual intervention, our solution can capture at scale, is agnostic to the optical appearance of the textile, and facilitates fabric arrangement by non-expert operators. To this end, we propose a sim-to-real strategy to train a learning-based framework that takes as input one or multiple images and outputs a full set of mechanical parameters. Thanks to carefully designed data augmentation and transfer learning protocols, our solution generalizes to real images despite being trained only on synthetic data, hence successfully closing the sim-to-real loop. A key contribution of our work is to demonstrate that evaluating regression accuracy by similarity in parameter space leads to distances that do not match human perception. To overcome this, we propose a novel metric for fabric drape similarity that operates in the image domain instead of the parameter space, allowing us to evaluate our estimation within the context of a similarity ranking. We show that our metric correlates with human judgments of drape similarity, and that our model predictions produce perceptually accurate results compared to the ground-truth parameters.

Item Learning-Based Animation of Clothing for Virtual Try-On (The Eurographics Association and John Wiley & Sons Ltd., 2019) Santesteban, Igor; Otaduy, Miguel A.; Casas, Dan; Alliez, Pierre and Pellacini, Fabio
This paper presents a learning-based clothing animation method for highly efficient virtual try-on simulation. Given a garment, we preprocess a rich database of physically-based dressed character simulations, for multiple body shapes and animations. Then, using this database, we train a learning-based model of cloth drape and wrinkles, as a function of body shape and dynamics. We propose a model that separates global garment fit, due to body shape, from local garment wrinkles, due to both pose dynamics and body shape. We use a recurrent neural network to regress garment wrinkles, and we achieve highly plausible nonlinear effects, in contrast to the blending artifacts suffered by previous methods. At runtime, dynamic virtual try-on animations are produced in just a few milliseconds for garments with thousands of triangles. We show qualitative and quantitative analysis of results.

Item Modeling and Estimation of Nonlinear Skin Mechanics for Animated Avatars (The Eurographics Association and John Wiley & Sons Ltd., 2020) Romero, Cristian; Otaduy, Miguel A.; Casas, Dan; Pérez, Jesús; Panozzo, Daniele and Assarsson, Ulf
Data-driven models of human avatars have shown very accurate representations of static poses with soft-tissue deformations. However, they are not yet capable of precisely representing very nonlinear deformations and highly dynamic effects. Nonlinear skin mechanics are essential for a realistic depiction of animated avatars interacting with the environment, but controlling physics-only solutions often results in a very complex parameterization task. In this work, we propose a hybrid model in which the soft-tissue deformation of animated avatars is built as a combination of a data-driven statistical model, which kinematically drives the animation, and an FEM mechanical simulation. Our key contribution is the definition of deformation mechanics in a reference pose space by inverse skinning of the statistical model. This way, we retain as much as possible of the accurate static data-driven deformation and use a custom anisotropic nonlinear material to accurately represent skin dynamics. Model parameters, including the heterogeneous distribution of skin thickness and material properties, are automatically optimized from 4D captures of humans showing soft-tissue deformations.

Item PERGAMO: Personalized 3D Garments from Monocular Video (The Eurographics Association and John Wiley & Sons Ltd., 2022) Casado-Elvira, Andrés; Comino Trinidad, Marc; Casas, Dan; Dominik L. Michels; Soeren Pirk
Clothing plays a fundamental role in digital humans. Current approaches to animate 3D garments are mostly based on realistic physics simulation; however, they typically suffer from two main issues: high computational run-time cost, which hinders their deployment; and the simulation-to-real gap, which impedes the synthesis of specific real-world cloth samples. To circumvent both issues we propose PERGAMO, a data-driven approach to learn a deformable model for 3D garments from monocular images. To this end, we first introduce a novel method to reconstruct the 3D geometry of garments from a single image, and use it to build a dataset of clothing from monocular videos. We use these 3D reconstructions to train a regression model that accurately predicts how the garment deforms as a function of the underlying body pose. We show that our method is capable of producing garment animations that match real-world behavior, and that it generalizes to unseen body motions extracted from motion capture datasets.

Item SoftSMPL: Data-driven Modeling of Nonlinear Soft-tissue Dynamics for Parametric Humans (The Eurographics Association and John Wiley & Sons Ltd., 2020) Santesteban, Igor; Garces, Elena; Otaduy, Miguel A.; Casas, Dan; Panozzo, Daniele and Assarsson, Ulf
We present SoftSMPL, a learning-based method to model realistic soft-tissue dynamics as a function of body shape and motion. Datasets to learn such a task are scarce and expensive to generate, which makes training models prone to overfitting. At the core of our method are three key contributions that enable us to model highly realistic dynamics and achieve better generalization than state-of-the-art methods, while training on the same data. First, a novel motion descriptor that disentangles the standard pose representation by removing subject-specific features; second, a neural-network-based recurrent regressor that generalizes to unseen shapes and motions; and third, a highly efficient nonlinear deformation subspace capable of representing soft-tissue deformations of arbitrary shapes. We demonstrate qualitative and quantitative improvements over existing methods and, additionally, we show the robustness of our method on a variety of motion capture databases.

Item Teaching 3D Computer Animation to Non-programming Experts (The Eurographics Association, 2021) Casas, Dan; Sousa Santos, Beatriz and Domik, Gitta
This paper describes a Computer Animation course aimed at novice Computer Science and Engineering students with minimal programming skills. We observe that students enrolled in Computer Graphics (and related) undergraduate degrees usually face a Computer Animation subject early in their programs, sometimes even before they develop strong software development and programming skills. As a result, assignments where students should focus on Computer Animation concepts end up in frustration and massive effort just to get over-complicated development frameworks running. Instead, we propose a Computer Animation course based on small MATLAB tasks that covers a wide range of topics and is adapted to students with minimal programming skills. For each topic, we provide a brief theoretical summary and links to fundamental literature, as well as a set of hands-on tasks with the necessary source code to get started. A user study shows that students who took this course were able to better focus on the fundamental concepts of the subject, circumventing the need to learn advanced programming skills. Course material is available on a public GitHub repository, and solutions are provided upon request from course tutors.
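The teaching entry above is built around small, self-contained animation tasks. As an illustration of the kind of exercise such a course covers (the course itself uses MATLAB, and the function names below are hypothetical, not taken from its materials), a minimal Python sketch of keyframe interpolation driving the forward kinematics of a planar two-link arm might look like this:

```python
import math

def lerp_keyframes(keys, t):
    """Linearly interpolate a scalar channel given (time, value) keyframes.

    `keys` must be sorted by time; `t` is clamped to the keyframe range.
    """
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)  # normalized position within the segment
            return (1.0 - u) * v0 + u * v1

def two_link_fk(theta1, theta2, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm: end-effector position."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Animate the elbow joint from 0 to 90 degrees over one second,
# then evaluate the pose halfway through the animation.
keys = [(0.0, 0.0), (1.0, math.pi / 2)]
theta2 = lerp_keyframes(keys, 0.5)  # 45 degrees at t = 0.5
print(two_link_fk(0.0, theta2))
```

Exercises of this shape keep the focus on the animation concept (interpolation, joint hierarchies) rather than on framework setup, which is the pedagogical point the paper makes.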