Lessons Learned from Large Data Visualization Software Development for the K computer
dc.contributor.author | Nonaka, Jorji | en_US |
dc.contributor.author | Sakamoto, Naohisa | en_US |
dc.contributor.editor | Gillmann, Christina and Krone, Michael and Reina, Guido and Wischgoll, Thomas | en_US |
dc.date.accessioned | 2020-05-24T13:35:11Z | |
dc.date.available | 2020-05-24T13:35:11Z | |
dc.date.issued | 2020 | |
dc.description.abstract | High Performance Computing (HPC) has always had a close relationship with visualization, as we can recall from the landmark report on ''Visualization in Scientific Computing'', which is credited with coining the term Scientific Visualization (SciVis). The K computer, a Japanese flagship HPC system, appeared in 2011 as the most powerful supercomputer on the Top500 list, and, like other HPC systems in that ranking, it was designed to enable ''Grand Challenge'' scientific computing of unprecedented scale and size. RIKEN Center for Computational Science (RIKEN R-CCS) operated the K computer and provided its computational resources to the HPC community for almost 8 years, until it was decommissioned in 2019. Considering that most of the scientific computing results were publicly presented in the form of visual images and movies, we can infer that SciVis was widely applied to assist the domain scientists with their end-to-end scientific computing workflows. In addition to the traditional visualization applications, various large data visualization software development efforts were undertaken to tackle the increased size and amount of the simulation outputs. RIKEN R-CCS participated in some of these development and deployment efforts, dealing with several environmental and human factors. Although we have no precise statistics regarding visualization software usage, in this paper we present some findings and lessons learned from large data visualization software development in the K computer environment. | en_US |
dc.description.sectionheaders | Closing | |
dc.description.seriesinformation | VisGap - The Gap between Visualization Research and Visualization Software | |
dc.identifier.doi | 10.2312/visgap.20201113 | |
dc.identifier.isbn | 978-3-03868-125-0 | |
dc.identifier.pages | 77-81 | |
dc.identifier.uri | https://doi.org/10.2312/visgap.20201113 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/visgap20201113 | |
dc.publisher | The Eurographics Association | en_US |
dc.rights | Attribution 4.0 International License | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.subject | Human centered computing | |
dc.subject | Visualization systems and tools | |
dc.subject | Applied computing | |
dc.subject | Physical sciences and engineering | |
dc.subject | Computing methodologies | |
dc.subject | Parallel computing methodologies | |
dc.title | Lessons Learned from Large Data Visualization Software Development for the K computer | en_US |