HPC I/O Throughput Bottleneck Analysis with Explainable Local Models
Session: AI for IT
Event Type: Paper
File Systems and I/O
Machine Learning, Deep Learning and Artificial Intelligence
Performance/Productivity Measurement and Evaluation
Resource Management and Scheduling
Time: Tuesday, 17 November 2020, 4pm - 4:30pm EDT
Location: Track 3
Description: With the growing complexity of high-performance computing (HPC) systems, achieving high performance can be difficult because of I/O bottlenecks. We analyze multiple years' worth of Darshan logs from the Argonne Leadership Computing Facility's Theta supercomputer in order to understand the causes of poor I/O throughput. We present Gauge: a data-driven diagnostic tool for exploring the latent space of supercomputing job features, understanding the behaviors of clusters of jobs, and interpreting I/O bottlenecks. By finding groups of jobs that at first sight are highly heterogeneous but share certain behaviors, and analyzing these groups instead of individual jobs, we reduce the workload of domain experts and automate I/O performance analysis. We conduct a case study in which a system owner using Gauge was able to identify several clusters that do not conform to conventional I/O behaviors, as well as find several potential improvements, at both the application and the system level.
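As an illustration of the kind of workflow the abstract describes, the minimal sketch below clusters per-job I/O features and then fits a shallow decision tree per cluster as a local, interpretable model of throughput. The feature names, the synthetic data, and the choice of clustering and surrogate algorithms are assumptions for illustration only, not the actual Gauge implementation.

```python
# Hypothetical sketch: group heterogeneous jobs by their I/O features, then
# explain each group with a small, human-readable local model. None of the
# column names or algorithm choices below are taken from the Gauge paper.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)

# Stand-in for Darshan-derived per-job features (synthetic, illustrative columns).
jobs = pd.DataFrame({
    "nprocs":             rng.integers(1, 4096, 2000),
    "total_bytes":        rng.lognormal(20, 2, 2000),
    "file_count":         rng.integers(1, 10000, 2000),
    "pct_small_accesses": rng.uniform(0, 1, 2000),
    "throughput_mbps":    rng.lognormal(5, 1.5, 2000),   # target to explain
})

features = ["nprocs", "total_bytes", "file_count", "pct_small_accesses"]

# Heavy-tailed I/O counters are easier to cluster after a log transform.
X = StandardScaler().fit_transform(np.log1p(jobs[features]))

# Group jobs that behave similarly instead of inspecting them one by one.
jobs["cluster"] = AgglomerativeClustering(n_clusters=8).fit_predict(X)

# For each cluster, fit a shallow tree as a local, interpretable model of
# which features are associated with high or low throughput in that group.
for cid, group in jobs.groupby("cluster"):
    tree = DecisionTreeRegressor(max_depth=3).fit(
        group[features], np.log1p(group["throughput_mbps"]))
    print(f"--- cluster {cid} ({len(group)} jobs) ---")
    print(export_text(tree, feature_names=features))
```

In practice, the per-job feature table would come from parsed Darshan logs rather than synthetic data, and an analyst would inspect each cluster's rules to decide whether the bottleneck looks like an application-level or a system-level issue.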