Browsing by Author "Kim, Byungsoo"

Item: Deep Fluids: A Generative Network for Parameterized Fluid Simulations (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Kim, Byungsoo; Azevedo, Vinicius C.; Thuerey, Nils; Kim, Theodore; Gross, Markus; Solenthaler, Barbara; Alliez, Pierre and Pellacini, Fabio
This paper presents a novel generative model to synthesize fluid simulations from a set of reduced parameters. A convolutional neural network is trained on a collection of discrete, parameterizable fluid simulation velocity fields. Due to the capability of deep learning architectures to learn representative features of the data, our generative model is able to accurately approximate the training data set, while providing plausible interpolated in-betweens. The proposed generative model is optimized for fluids by a novel loss function that guarantees divergence-free velocity fields at all times. In addition, we demonstrate that we can handle complex parameterizations in reduced spaces, and advance simulations in time by integrating in the latent space with a second network. Our method models a wide variety of fluid behaviors, thus enabling applications such as fast construction of simulations, interpolation of fluids with different parameters, time re-sampling, latent space simulations, and compression of fluid simulation data. Reconstructed velocity fields are generated up to 700x faster than re-simulating the data with the underlying CPU solver, while achieving compression rates of up to 1300x.
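
The divergence-free guarantee mentioned in this abstract can be understood through a standard construction: if a network predicts a scalar stream function and the velocity is taken as its curl, the resulting 2D field is divergence-free by design. The sketch below is a minimal NumPy illustration of that idea only, not the paper's implementation; the grid size, spacing, and the random stand-in for a decoder output are assumptions.

```python
import numpy as np

def curl_2d(psi, dx=1.0):
    """Velocity (u, v) as the 2D curl of a scalar stream function psi.

    u = d(psi)/dy and v = -d(psi)/dx, so the analytic divergence is zero.
    """
    dpsi_dy, dpsi_dx = np.gradient(psi, dx)  # psi indexed as [y, x]
    return dpsi_dy, -dpsi_dx

def divergence_2d(u, v, dx=1.0):
    """Discrete divergence, used here only to sanity-check the construction."""
    return np.gradient(u, dx, axis=1) + np.gradient(v, dx, axis=0)

# Toy check: a random field stands in for a decoder's predicted stream
# function; the interior divergence of its curl vanishes up to floating point
# (boundary rows/columns use one-sided differences and are excluded).
rng = np.random.default_rng(0)
psi = rng.standard_normal((64, 64))
u, v = curl_2d(psi)
print(np.abs(divergence_2d(u, v)[1:-1, 1:-1]).max())
```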

Item: Deep Reconstruction of 3D Smoke Densities from Artist Sketches (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Kim, Byungsoo; Huang, Xingchang; Wuelfroth, Laura; Tang, Jingwei; Cordonnier, Guillaume; Gross, Markus; Solenthaler, Barbara; Chaine, Raphaëlle; Kim, Min H.
Creative processes of artists often start with hand-drawn sketches illustrating an object. Pre-visualizing these keyframes is especially challenging when applied to volumetric materials such as smoke. The authored 3D density volumes must capture realistic flow details and turbulent structures, which is highly non-trivial and remains a manual and time-consuming process. We therefore present a method to compute a 3D smoke density field directly from 2D artist sketches, bridging the gap between early-stage prototyping of smoke keyframes and pre-visualization. From the sketch inputs, we compute an initial volume estimate and optimize the density iteratively with an updater CNN. Our differentiable sketcher is embedded into the end-to-end training, which results in robust reconstructions. Our training data set and sketch augmentation strategy are designed such that they enable general applicability. We evaluate the method on synthetic inputs and sketches from artists depicting both realistic smoke volumes and highly non-physical smoke shapes. The high computational performance and robustness of our method at test time allow interactive authoring sessions of volumetric density fields for rapid prototyping of ideas by novice users.

Item: Frequency-Aware Reconstruction of Fluid Simulations with Generative Networks (The Eurographics Association, 2020)
Biland, Simon; Azevedo, Vinicius C.; Kim, Byungsoo; Solenthaler, Barbara; Wilkie, Alexander and Banterle, Francesco
Convolutional neural networks were recently employed to fully reconstruct fluid simulation data from a set of reduced parameters. However, since (de-)convolutions traditionally trained with supervised L1 loss functions do not discriminate between low and high frequencies in the data, the error is not minimized efficiently for higher bands. This directly correlates with the quality of the perceived results, since missing high-frequency details are easily noticeable. In this paper, we analyze the reconstruction quality of generative networks and present a frequency-aware loss function that is able to focus on specific bands of the dataset during training time. We show that our approach improves the reconstruction quality of fluid simulation data in mid-frequency bands, yielding perceptually better results while requiring comparable training time.
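
As a rough illustration of the band-focused idea described in this abstract, the following sketch adds an extra penalty on a chosen radial frequency band on top of a plain L1 term. The band edges, the weight, and the use of a global FFT are placeholder assumptions; the paper's actual loss formulation may differ.

```python
import numpy as np

def band_weighted_loss(pred, target, band=(0.2, 0.8), band_weight=2.0):
    """Plain L1 plus an extra penalty on a chosen radial frequency band.

    `band` is expressed as a fraction of the Nyquist frequency; the edges
    and weight here are placeholders, not values from the paper.
    """
    spatial = np.abs(pred - target).mean()

    # Radial frequency magnitude per FFT coefficient, normalized by Nyquist.
    fy = np.fft.fftfreq(pred.shape[0])[:, None]
    fx = np.fft.fftfreq(pred.shape[1])[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2) / 0.5

    mask = (radius >= band[0]) & (radius < band[1])
    spectral_err = np.abs(np.fft.fft2(pred) - np.fft.fft2(target))
    spectral = spectral_err[mask].mean() if mask.any() else 0.0

    return spatial + band_weight * spectral

# Example: penalize mid-band differences between two random 64x64 fields.
rng = np.random.default_rng(0)
a, b = rng.random((64, 64)), rng.random((64, 64))
print(band_weighted_loss(a, b))
```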

Item: Latent Space Subdivision: Stable and Controllable Time Predictions for Fluid Flow (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Wiewel, Steffen; Kim, Byungsoo; Azevedo, Vinicius; Solenthaler, Barbara; Thuerey, Nils; Bender, Jan and Popa, Tiberiu
We propose an end-to-end trained neural network architecture to robustly predict the complex dynamics of fluid flows with high temporal stability. We focus on single-phase smoke simulations in 2D and 3D based on the incompressible Navier-Stokes (NS) equations, which are relevant for a wide range of practical problems. To achieve stable predictions for long-term flow sequences with linear execution times, a convolutional neural network (CNN) is trained for spatial compression in combination with a temporal prediction network that consists of stacked Long Short-Term Memory (LSTM) layers. Our core contribution is a novel latent space subdivision (LSS) that separates the respective input quantities into individual parts of the encoded latent space domain. As a result, the encoded quantities can be altered individually without interfering with the remaining latent space values, which maximizes external control. By selectively overwriting parts of the predicted latent space points, our proposed method is capable of robustly predicting long-term sequences of complex physics problems, such as the flow of fluids. In addition, we highlight the benefits of recurrent training on the latent space creation, which is performed by the spatial compression network. Furthermore, we thoroughly evaluate and discuss several different components of our method.

Item: Neural Smoke Stylization with Color Transfer (The Eurographics Association, 2020)
Christen, Fabienne; Kim, Byungsoo; Azevedo, Vinicius C.; Solenthaler, Barbara; Wilkie, Alexander and Banterle, Francesco
Artistically controlling fluid simulations requires a large amount of manual work by an artist. The recently presented transport-based neural style transfer approach simplifies workflows as it transfers the style of arbitrary input images onto 3D smoke simulations. However, the method only modifies the shape of the fluid but omits color information. In this work, we therefore extend the previous approach to obtain a complete pipeline for transferring shape and color information onto 2D and 3D smoke simulations with neural networks. Our results demonstrate that our method successfully transfers colored style features consistently in space and time to smoke data for different input textures.

Item: Physics-Informed Neural Corrector for Deformation-based Fluid Control (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Tang, Jingwei; Kim, Byungsoo; Azevedo, Vinicius C.; Solenthaler, Barbara; Myszkowski, Karol; Niessner, Matthias
Controlling fluid simulations is notoriously difficult due to their high computational cost and the fact that user control inputs can cause unphysical motion. We present an interactive method for deformation-based fluid control. Our method aims at balancing the direct deformation of fluid fields with the preservation of physical characteristics. We train convolutional neural networks with physics-inspired loss functions together with a differentiable fluid simulator, and provide an efficient workflow for flow manipulations at test time. We demonstrate diverse test cases to analyze our carefully designed objectives and show that they lead to physically plausible and visually appealing modifications of edited fluid data.

Item: Robust Reference Frame Extraction from Unsteady 2D Vector Fields with Convolutional Neural Networks (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Kim, Byungsoo; Günther, Tobias; Gleicher, Michael and Viola, Ivan and Leitte, Heike
Robust feature extraction is an integral part of scientific visualization. In unsteady vector field analysis, researchers recently directed their attention towards the computation of near-steady reference frames for vortex extraction, which is a numerically challenging endeavor. In this paper, we utilize a convolutional neural network to combine two steps of the visualization pipeline in an end-to-end manner: filtering and feature extraction. We use neural networks to extract a steady reference frame for a given unsteady 2D vector field. By conditioning the neural network on noisy inputs and resampling artifacts, we obtain numerically more stable results than existing optimization-based approaches. Supervised deep learning typically requires a large amount of training data. Thus, our second contribution is the creation of a vector field benchmark data set, which is generally useful for any local deep learning-based feature extraction. Based on the Vatistas velocity profile, we formulate a parametric vector field mixture model that we parameterize based on numerically computed example vector fields in near-steady reference frames. Given the parametric model, we can efficiently synthesize thousands of vector fields that serve as input to our deep learning architecture. The proposed network is evaluated on an unseen numerical fluid flow simulation.
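
The benchmark construction described in the last abstract builds on the Vatistas velocity profile. The sketch below shows how synthetic 2D fields could be sampled from a mixture of such vortices, using the n = 1 member of the Vatistas family, v_theta(r) = Gamma r / (2 pi (r_c^2 + r^2)); all parameter ranges, the resolution, and the mixture size are illustrative assumptions rather than the benchmark's actual settings.

```python
import numpy as np

def vatistas_vortex(x, y, x0, y0, gamma, r_core, n=1.0):
    """(u, v) of a single Vatistas vortex centered at (x0, y0).

    Tangential speed: v_theta(r) = gamma * r / (2*pi*(r_core**(2n) + r**(2n))**(1/n)).
    """
    dx, dy = x - x0, y - y0
    r = np.sqrt(dx ** 2 + dy ** 2) + 1e-12  # avoid division by zero at the core
    v_theta = gamma * r / (2.0 * np.pi * (r_core ** (2 * n) + r ** (2 * n)) ** (1.0 / n))
    return -v_theta * dy / r, v_theta * dx / r  # unit tangent is (-dy, dx)/r

def sample_vortex_mixture(res=64, n_vortices=3, seed=0):
    """Synthesize one 2D field as a sum of randomly parameterized vortices."""
    rng = np.random.default_rng(seed)
    ys, xs = np.meshgrid(np.linspace(-1, 1, res), np.linspace(-1, 1, res), indexing="ij")
    u = np.zeros((res, res))
    v = np.zeros((res, res))
    for _ in range(n_vortices):
        x0, y0 = rng.uniform(-0.8, 0.8, size=2)   # vortex center
        gamma = rng.uniform(-1.0, 1.0)            # circulation strength and sign
        r_core = rng.uniform(0.05, 0.3)           # core radius
        du, dv = vatistas_vortex(xs, ys, x0, y0, gamma, r_core)
        u, v = u + du, v + dv
    return u, v

u, v = sample_vortex_mixture()
print(u.shape, v.shape)  # (64, 64) (64, 64)
```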