39-Issue 8
Browsing 39-Issue 8 by Subject "Neural networks"
Item
Cloth and Skin Deformation with a Triangle Mesh Based Convolutional Neural Network (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Chentanez, Nuttapong; Macklin, Miles; Müller, Matthias; Jeschke, Stefan; Kim, Tae-Yong; Bender, Jan and Popa, Tiberiu
We introduce a triangle mesh based convolutional neural network. The proposed network structure can be used for problems where the input and/or output are defined on a manifold triangle mesh, with or without boundary. We demonstrate its applications in cloth upsampling, adding back details to Principal Component Analysis (PCA) compressed cloth, regressing clothing deformation from character poses, and regressing hand skin deformation from bones' joint angles. The training data in this work are generated from high-resolution extended position based dynamics (XPBD) physics simulations with small time steps and high iteration counts, as well as from an offline FEM simulator, but they can come from other sources. Depending on the mesh resolution and the network size, inference with our prototype implementation is between 4 and 134 times faster than a GPU-based simulator. Inference also only needs to be performed for meshes currently visible to the camera. (A hedged sketch of a generic mesh convolution follows the listing.)

Item
Intuitive Facial Animation Editing Based On A Generative RNN Framework (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Berson, Eloïse; Soladié, Catherine; Stoiber, Nicolas; Bender, Jan and Popa, Tiberiu
For the last decades, the problem of producing convincing facial animation has garnered great interest, which has only accelerated with the recent explosion of 3D content in both entertainment and professional activities. The use of motion capture and retargeting has arguably become the dominant solution to address this demand. Yet, despite their high level of quality and automation, performance-based animation pipelines still require manual cleaning and editing to refine raw results, which is a time- and skill-demanding process. In this paper, we look to leverage machine learning to make facial animation editing faster and more accessible to non-experts. Inspired by recent image inpainting methods, we design a generative recurrent neural network that generates realistic motion for designated segments of an existing facial animation, optionally following user-provided guiding constraints. Our system handles different supervised or unsupervised editing scenarios such as motion filling during occlusions, expression corrections, semantic content modifications, and noise filtering. We demonstrate the usability of our system on several animation editing use cases.

Item
Latent Space Subdivision: Stable and Controllable Time Predictions for Fluid Flow (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Wiewel, Steffen; Kim, Byungsoo; Azevedo, Vinicius; Solenthaler, Barbara; Thuerey, Nils; Bender, Jan and Popa, Tiberiu
We propose an end-to-end trained neural network architecture to robustly predict the complex dynamics of fluid flows with high temporal stability. We focus on single-phase smoke simulations in 2D and 3D based on the incompressible Navier-Stokes (NS) equations, which are relevant for a wide range of practical problems. To achieve stable predictions for long-term flow sequences with linear execution times, a convolutional neural network (CNN) is trained for spatial compression in combination with a temporal prediction network that consists of stacked Long Short-Term Memory (LSTM) layers.
Our core contribution is a novel latent space subdivision (LSS) that separates the respective input quantities into individual parts of the encoded latent space domain. As a result, the encoded quantities can be altered distinctly without interfering with the remaining latent space values, which maximizes external control. By selectively overwriting parts of the predicted latent space points, our proposed method is capable of robustly predicting long-term sequences of complex physics problems, such as the flow of fluids. In addition, we highlight the benefits of recurrent training on the latent space creation, which is performed by the spatial compression network. Furthermore, we thoroughly evaluate and discuss several different components of our method. (A hedged sketch of this encoder/predictor layout follows the listing.)

Item
A Pixel-Based Framework for Data-Driven Clothing (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Jin, Ning; Zhu, Yilin; Geng, Zhenglin; Fedkiw, Ron; Bender, Jan and Popa, Tiberiu
We propose a novel approach to learning cloth deformation as a function of body pose, recasting the graph-like triangle mesh data structure into image-based data in order to leverage popular and well-developed convolutional neural networks (CNNs) in a two-dimensional Euclidean domain. A three-dimensional animation of clothing then becomes equivalent to a sequence of two-dimensional RGB images driven/choreographed by time-dependent joint angles. To reduce the nonlinearity demands on the neural network, we utilize procedural skinning of the body surface to capture much of the rotation/deformation, so that the RGB images only contain textures of displacement offsets from skin to clothing. Notably, we illustrate that our approach does not require accurate unclothed body shapes or robust skinning techniques. Additionally, we discuss how standard image-based techniques such as image partitioning for higher resolution can readily be incorporated into our framework. (A hedged sketch of this image recasting idea follows the listing.)
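The following examples are editorial sketches, not code from the papers. First, for the triangle mesh based CNN item: a minimal, generic one-ring mesh convolution in PyTorch. The paper's actual operator and framework are not given here, so MeshConv and one_ring are hypothetical names, and the layer simply mixes each vertex's features with the mean of its one-ring neighbours, the basic ingredient a convolution defined on a manifold triangle mesh needs.

import torch
import torch.nn as nn

def one_ring(faces, n_verts):
    # Build one-ring adjacency lists from an (F, 3) integer face tensor.
    neigh = [set() for _ in range(n_verts)]
    for a, b, c in faces.tolist():
        neigh[a].update((b, c))
        neigh[b].update((a, c))
        neigh[c].update((a, b))
    return [sorted(s) for s in neigh]

class MeshConv(nn.Module):
    # x_i' = ReLU(W_self x_i + W_neigh * mean of x_j over the one-ring of i)
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.w_self = nn.Linear(in_ch, out_ch)
        self.w_neigh = nn.Linear(in_ch, out_ch)

    def forward(self, x, neigh):
        # x: (V, in_ch) per-vertex features; neigh: output of one_ring().
        agg = torch.stack([x[idx].mean(dim=0) for idx in neigh])
        return torch.relu(self.w_self(x) + self.w_neigh(agg))

A layer like this would be stacked, e.g. y = MeshConv(3, 16)(vertex_positions, one_ring(faces, vertex_positions.shape[0])), to regress per-vertex cloth or skin offsets.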
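For the Latent Space Subdivision item: a minimal PyTorch sketch, under assumed grid and layer sizes and with hypothetical class names, of the layout the abstract describes. A convolutional encoder compresses each smoke frame to a latent code, stacked LSTM layers predict the next latent point, and the latent vector is split so one quantity (e.g. velocity) can be overwritten for external control without disturbing the rest.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Compresses a (3, 64, 64) velocity/density frame to a latent code.
    def __init__(self, in_ch=3, latent=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),     # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent),
        )
    def forward(self, x):
        return self.net(x)

class LatentPredictor(nn.Module):
    # Stacked LSTM layers that advance the latent code in time.
    def __init__(self, latent=64, hidden=128, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(latent, hidden, num_layers=layers, batch_first=True)
        self.out = nn.Linear(hidden, latent)
    def forward(self, z_seq):
        # z_seq: (batch, time, latent) history of encoded frames.
        h, _ = self.lstm(z_seq)
        return self.out(h[:, -1])  # predicted next latent point

def predict_step(encoder, predictor, frames, control_vel, split=32):
    # Latent space subdivision (illustrative): the first `split` entries hold
    # one quantity (e.g. velocity), the rest another (e.g. density), so the
    # velocity part can be overwritten without touching the density part.
    z_seq = torch.stack([encoder(f) for f in frames], dim=1)
    z_next = predictor(z_seq)
    return torch.cat([control_vel, z_next[:, split:]], dim=1)  # decode next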
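For the Pixel-Based Framework item: a minimal PyTorch sketch of the recasting idea, assuming a precomputed per-vertex pixel assignment (pixel_uv) and a hypothetical decoder network. Skin-to-cloth displacement offsets are scattered into a three-channel image, and a small transposed-convolution network regresses that image from time-dependent joint angles.

import torch
import torch.nn as nn

def offsets_to_image(offsets, pixel_uv, size=256):
    # offsets: (V, 3) skin-to-cloth displacements per cloth vertex.
    # pixel_uv: (V, 2) integer pixel coordinates assigned to each vertex
    # (assumed precomputed; pixel collisions are ignored in this sketch).
    img = torch.zeros(3, size, size)
    img[:, pixel_uv[:, 1], pixel_uv[:, 0]] = offsets.t()
    return img

class PoseToOffsetImage(nn.Module):
    # Regresses a (3, 256, 256) offset image from a vector of joint angles.
    def __init__(self, n_angles):
        super().__init__()
        self.fc = nn.Linear(n_angles, 256 * 8 * 8)
        self.up = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),    # 32 -> 64
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),    # 64 -> 128
            nn.ConvTranspose2d(16, 3, 4, 2, 1),                # 128 -> 256
        )
    def forward(self, angles):
        x = self.fc(angles).view(-1, 256, 8, 8)
        return self.up(x)

In a real pipeline the target images would be produced by splatting simulated offsets into texture space with proper filtering; this sketch only shows why an ordinary 2D CNN suffices once the mesh data has been recast as images.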