Browsing by Author "Otaduy, Miguel"
Now showing 1 - 2 of 2
Item
DYVERSO: A Versatile Multi‐Phase Position‐Based Fluids Solution for VFX (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Alduán, Iván; Tena, Angel; Otaduy, Miguel A.; Chen, Min and Zhang, Hao (Richard)
Many impressive fluid simulation methods have been presented in research papers, but they typically focus on demonstrating particular innovative features and do not comprehensively meet the production demands of actual VFX pipelines. VFX artists seek methods that are flexible, efficient, robust and scalable, and these goals often conflict with each other. In this paper, we present a multi‐phase particle‐based fluid simulation framework, based on the well‐known Position‐Based Fluids (PBF) method, designed to address VFX production demands. Our simulation framework handles multi‐phase interactions robustly thanks to a modified constraint formulation for density‐contrast PBF, and it also supports the interaction of fluids sampled at different resolutions. We take special care with data‐structure design and implementation details. Our framework highlights cache‐efficient, GPU‐friendly data structures, an improved spatial voxelization technique based on Z‐index sorting, tuned‐up simulation algorithms and two‐way‐coupled collision handling based on VDB fields. Altogether, our fluid simulation framework empowers artists with the efficiency, scalability and versatility needed for simulating very diverse scenes and effects.
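The Z‐index sorting mentioned in the abstract above is, in generic terms, the idea of ordering particles along a Morton (Z‐order) curve of their background grid cells so that spatially close particles become contiguous in memory. Below is a minimal, self‐contained sketch of that generic ordering step, assuming a uniform grid of at most 1024 cells per axis and non‐negative positions; it is not the paper's improved GPU voxelization scheme, and all names in it are ours.

```cpp
// Minimal sketch of Z-index (Morton) ordering for particles.
// Assumes non-negative positions and a grid of at most 1024 cells per axis.
#include <algorithm>
#include <cstdint>
#include <vector>

// Spread the lowest 10 bits of v so two zero bits separate consecutive bits.
static std::uint32_t expandBits(std::uint32_t v) {
    v = (v | (v << 16)) & 0xFF0000FFu;
    v = (v | (v <<  8)) & 0x0F00F00Fu;
    v = (v | (v <<  4)) & 0xC30C30C3u;
    v = (v | (v <<  2)) & 0x49249249u;
    return v;
}

// 30-bit Morton (Z-index) code of a grid cell; each coordinate must be < 1024.
static std::uint32_t mortonCode(std::uint32_t ix, std::uint32_t iy, std::uint32_t iz) {
    return (expandBits(ix) << 2) | (expandBits(iy) << 1) | expandBits(iz);
}

struct Particle { float x, y, z; };

// Sort particles along the Z-curve of their cells, which makes neighbourhood
// queries over a background grid more cache-friendly.
void sortByZIndex(std::vector<Particle>& particles, float cellSize) {
    auto key = [cellSize](const Particle& p) {
        return mortonCode(static_cast<std::uint32_t>(p.x / cellSize),
                          static_cast<std::uint32_t>(p.y / cellSize),
                          static_cast<std::uint32_t>(p.z / cellSize));
    };
    std::sort(particles.begin(), particles.end(),
              [&key](const Particle& a, const Particle& b) { return key(a) < key(b); });
}
```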
Item
Sparse GPU Voxelization of Yarn‐Level Cloth (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Lopez‐Moreno, Jorge; Miraut, David; Cirio, Gabriel; Otaduy, Miguel A.; Chen, Min and Zhang, Hao (Richard)
Most popular methods in cloth rendering rely on volumetric data in order to model complex optical phenomena such as sub‐surface scattering. These approaches are able to produce very realistic illumination results, but their volumetric representations are costly to compute and render, forfeiting any interactive feedback. In this paper, we introduce a method based on the Graphics Processing Unit (GPU) for voxelization and visualization, suitable for both interactive and offline rendering. Recent features of the OpenGL model, such as the ability to dynamically address arbitrary buffers and allocate bindless textures, are combined in our pipeline to interactively voxelize millions of polygons into a set of large three‐dimensional (3D) textures (>10 elements), generating a volume with sub‐voxel accuracy that is suitable even for high‐density woven cloth such as linen.
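As a rough illustration of the sparse voxelization idea described above, the following CPU‐side sketch point‐samples triangles and accumulates a per‐voxel density in a hash map, so memory is only spent on voxels the geometry actually touches. This is an assumption‐laden simplification, not the paper's GPU pipeline: bindless textures, tile allocation and sub‐voxel filtering are omitted, and all names are ours.

```cpp
// Minimal CPU sketch of sparse voxelization by triangle point sampling.
// Occupied voxels live in a hash map keyed by packed integer coordinates.
#include <array>
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Vec3 { float x, y, z; };

// Pack three 21-bit voxel coordinates into one 64-bit sparse-grid key.
static std::uint64_t voxelKey(int ix, int iy, int iz) {
    auto u = [](int i) {
        return static_cast<std::uint64_t>(static_cast<std::uint32_t>(i)) & 0x1FFFFFull;
    };
    return (u(ix) << 42) | (u(iy) << 21) | u(iz);
}

// Accumulate a raw per-voxel sample count by uniformly sampling each triangle.
std::unordered_map<std::uint64_t, float>
voxelizeTriangles(const std::vector<std::array<Vec3, 3>>& tris,
                  float voxelSize, int samplesPerEdge = 8) {
    std::unordered_map<std::uint64_t, float> grid;
    for (const auto& t : tris) {
        for (int i = 0; i <= samplesPerEdge; ++i) {
            for (int j = 0; j <= samplesPerEdge - i; ++j) {
                // Barycentric sample point on the triangle.
                float a = float(i) / float(samplesPerEdge);
                float b = float(j) / float(samplesPerEdge);
                float c = 1.0f - a - b;
                Vec3 p{ a * t[0].x + b * t[1].x + c * t[2].x,
                        a * t[0].y + b * t[1].y + c * t[2].y,
                        a * t[0].z + b * t[1].z + c * t[2].z };
                int ix = static_cast<int>(std::floor(p.x / voxelSize));
                int iy = static_cast<int>(std::floor(p.y / voxelSize));
                int iz = static_cast<int>(std::floor(p.z / voxelSize));
                grid[voxelKey(ix, iy, iz)] += 1.0f;  // crude density accumulation
            }
        }
    }
    return grid;
}
```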