EGPGV21: Eurographics Symposium on Parallel Graphics and Visualization
Browsing EGPGV21: Eurographics Symposium on Parallel Graphics and Visualization by Subject "Scientific visualization"
Item: Evaluation of PyTorch as a Data-Parallel Programming API for GPU Volume Rendering (The Eurographics Association, 2021)
Marshak, Nathan X.; Grosset, A. V. Pascal; Knoll, Aaron; Ahrens, James; Johnson, Chris R.; Larsen, Matthew and Sadlo, Filip
Data-parallel programming (DPP) has attracted considerable interest from the visualization community, fostering major software initiatives such as VTK-m. However, there has been relatively little recent investigation of data-parallel APIs in higher-level languages such as Python, which could help developers sidestep the need for low-level application programming in C++ and CUDA. Moreover, machine learning frameworks exposing data-parallel primitives, such as PyTorch and TensorFlow, have exploded in popularity, making them attractive platforms for parallel visualization and data analysis. In this work, we benchmark data-parallel primitives in PyTorch, and investigate its application to GPU volume rendering using two distinct DPP formulations: a parallel scan and reduce over the entire volume, and repeated application of data-parallel operators to an array of rays. We find that most relevant DPP primitives exhibit performance similar to a native CUDA library. However, our volume rendering implementation reveals that PyTorch is limited in expressiveness when compared to other DPP APIs. Furthermore, while render times are sufficient for an early "proof of concept", memory usage acutely limits scalability.
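
To make the ray-array formulation described in the abstract above concrete, here is a minimal, hypothetical PyTorch sketch (not the paper's implementation) of front-to-back compositing over a batch of rays using data-parallel tensor operators; the synthetic samples, opacities, step count, and helper names are placeholder assumptions.

```python
# Sketch: data-parallel front-to-back compositing over an array of rays with
# PyTorch tensor operators. The inputs below are synthetic stand-ins, not the
# paper's data or code.
import torch

def composite_rays(samples, opacities):
    """samples: (R, S) emitted intensity per sample; opacities: (R, S) alpha.
    Returns (R,) composited intensity per ray."""
    # Transmittance after each sample = cumulative product of (1 - alpha) (a scan).
    transparency = torch.cumprod(1.0 - opacities, dim=1)
    # Exclusive form: transmittance *before* each sample (shift right, prepend 1).
    trans_before = torch.cat(
        [torch.ones_like(transparency[:, :1]), transparency[:, :-1]], dim=1)
    # Weighted sum along each ray (a reduce).
    return (samples * opacities * trans_before).sum(dim=1)

# Example: 4096 rays x 256 samples of synthetic data, on the GPU if available.
device = "cuda" if torch.cuda.is_available() else "cpu"
samples = torch.rand(4096, 256, device=device)
opacities = 0.05 * torch.rand(4096, 256, device=device)  # toy transfer function
image = composite_rays(samples, opacities)
print(image.shape)  # torch.Size([4096])
```

The cumulative product plays the role of the scan and the final weighted sum the reduce, mirroring the scan-and-reduce formulation mentioned in the abstract.
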
Item: Faster RTX-Accelerated Empty Space Skipping using Triangulated Active Region Boundary Geometry (The Eurographics Association, 2021)
Wald, Ingo; Zellmann, Stefan; Morrical, Nate; Larsen, Matthew and Sadlo, Filip
We describe a technique for GPU and RTX accelerated space skipping of structured volumes that improves on prior work by replacing clustered proxy boxes with a GPU-extracted triangle mesh that bounds the active regions. Unlike prior methods, our technique avoids costly clustering operations, significantly reduces data structure construction cost, and incurs less overhead when traversing active regions.

Item: HyLiPoD: Parallel Particle Advection Via a Hybrid of Lifeline Scheduling and Parallelization-Over-Data (The Eurographics Association, 2021)
Binyahib, Roba; Pugmire, David; Childs, Hank; Larsen, Matthew and Sadlo, Filip
Performance characteristics of parallel particle advection algorithms can vary greatly based on workload. With this short paper, we build a new algorithm based on results from a previous bake-off study which evaluated the performance of four algorithms on a variety of workloads. Our algorithm, called HyLiPoD, is a "meta-algorithm," i.e., it considers the desired workload to choose from existing algorithms to maximize performance. To demonstrate HyLiPoD's benefit, we analyze results from 162 tests including concurrencies of up to 8192 cores, meshes as large as 34 billion cells, and particle counts as large as 300 million. Our findings demonstrate that HyLiPoD's adaptive approach allows it to match the best performance of existing algorithms across diverse workloads.

Item: Machine Learning-Based Autotuning for Parallel Particle Advection (The Eurographics Association, 2021)
Schwartz, Samuel D.; Childs, Hank; Pugmire, David; Larsen, Matthew and Sadlo, Filip
Data-parallel particle advection algorithms contain multiple controls that affect their execution characteristics and performance, in particular how often to communicate and how much work to perform between communications. Unfortunately, the optimal settings for these controls vary based on workload, and, further, it is not easy to devise straightforward heuristics that automate calculation of these settings. To solve this problem, we investigate a machine learning-based autotuning approach for optimizing data-parallel particle advection. During a pre-processing step, we train multiple machine learning techniques using a corpus of performance data that includes results across a variety of workloads and control settings. The best performing of these techniques is then used to form an oracle, i.e., a module that can determine good algorithm control settings for a given workload immediately before execution begins. To evaluate this approach, we assessed the ability of seven machine learning models to capture particle advection performance behavior and then ran experiments for 108 particle advection workloads on 64 GPUs of a supercomputer. Our findings show that our machine learning-based oracle achieves good speedups relative to the available gains.

Item: Scalable In Situ Computation of Lagrangian Representations via Local Flow Maps (The Eurographics Association, 2021)
Sane, Sudhanshu; Yenpure, Abhishek; Bujack, Roxana; Larsen, Matthew; Moreland, Kenneth; Garth, Christoph; Johnson, Chris R.; Childs, Hank; Larsen, Matthew and Sadlo, Filip
In situ computation of Lagrangian flow maps to enable post hoc time-varying vector field analysis has recently become an active area of research. However, the current literature is largely limited to theoretical settings and lacks a solution to address scalability of the technique in distributed memory. To improve scalability, we propose and evaluate the benefits and limitations of a simple, yet novel, performance optimization. Our proposed optimization is a communication-free model resulting in local Lagrangian flow maps, requiring no message passing or synchronization between processes, intrinsically improving scalability, and thereby reducing overall execution time and alleviating the encumbrance placed on simulation codes from communication overheads. To evaluate our approach, we computed Lagrangian flow maps for four time-varying simulation vector fields and investigated how execution time and reconstruction accuracy are impacted by the number of GPUs per compute node, the total number of compute nodes, particles per rank, and storage intervals. Our study consisted of experiments computing Lagrangian flow maps with up to 67M particle trajectories over 500 cycles and used as many as 2048 GPUs across 512 compute nodes. In all, our study contributes an evaluation of a communication-free model as well as a scalability study of computing distributed Lagrangian flow maps at scale using in situ infrastructure on a modern supercomputer.
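
As a rough illustration of the communication-free local flow map idea described above (a sketch under assumed inputs, not the authors' in situ implementation), the snippet below advects seed particles with RK4 through a hypothetical, locally held velocity field and records only start and end positions for one storage interval, with no message passing between ranks.

```python
# Sketch: a local (communication-free) Lagrangian flow map for one storage
# interval. The analytic velocity field is a placeholder for the velocity data
# a rank would hold locally; a real implementation would terminate particles
# that leave the local block rather than hand them to another rank.
import numpy as np

def velocity(p, t):
    # Placeholder time-varying 2D field; a simulation would sample its local block.
    x, y = p[:, 0], p[:, 1]
    return np.stack([-y * (1.0 + 0.1 * t), x], axis=1)

def local_flow_map(seeds, t0, t1, dt):
    """RK4-advect seeds from t0 to t1 without inter-rank communication.
    Returns (start_positions, end_positions) defining the local flow map."""
    p = seeds.copy()
    t = t0
    while t < t1:
        k1 = velocity(p, t)
        k2 = velocity(p + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = velocity(p + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = velocity(p + dt * k3, t + dt)
        p = p + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return seeds, p

# Example: one storage interval of 100 steps for a small grid of seed particles.
xs, ys = np.meshgrid(np.linspace(-1, 1, 8), np.linspace(-1, 1, 8))
seeds = np.stack([xs.ravel(), ys.ravel()], axis=1)
starts, ends = local_flow_map(seeds, t0=0.0, t1=1.0, dt=0.01)
print(ends.shape)  # (64, 2)
```
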