SC20 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

Keeping It Real: Why HPC Data Services Don't Achieve I/O Microbenchmark Performance


Workshop: Fifth International Parallel Data Systems Workshop

Authors: Philip Carns and Kevin Harms (Argonne National Laboratory), Bradley W. Settlemyer and Brian Atkinson (Los Alamos National Laboratory), and Robert B. Ross (Argonne National Laboratory)


Abstract: HPC storage software developers rely on benchmarks as reference points for performance evaluation. Low-level synthetic microbenchmarks are particularly valuable for isolating performance bottlenecks in complex systems and identifying optimization opportunities.
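To ground the discussion, the following is a minimal sketch of the kind of low-level synthetic microbenchmark the abstract refers to: it times a loop of fixed-size sequential POSIX writes and reports aggregate bandwidth. This illustration is ours, not code from the paper; the file name, transfer size, and transfer count are arbitrary assumptions, and production benchmarks such as IOR or fio expose many more parameters.

/* Minimal synthetic write microbenchmark (illustrative sketch):
 * times NUM_XFERS sequential fixed-size POSIX writes to one file
 * and reports bandwidth. Parameters below are arbitrary choices. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define XFER_SIZE (1 << 20)   /* 1 MiB per write (assumed) */
#define NUM_XFERS 1024        /* 1 GiB total (assumed) */

int main(void)
{
    char *buf = malloc(XFER_SIZE);
    if (!buf) return 1;
    memset(buf, 0xAB, XFER_SIZE);   /* avoid writing all zeros */

    int fd = open("bench.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < NUM_XFERS; i++) {
        if (write(fd, buf, XFER_SIZE) != XFER_SIZE) {
            perror("write");
            return 1;
        }
    }
    fsync(fd);   /* include flush cost in the measured interval */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    close(fd);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double mib = (double)NUM_XFERS * XFER_SIZE / (1 << 20);
    printf("%.1f MiB in %.3f s = %.1f MiB/s\n", mib, sec, mib / sec);
    free(buf);
    return 0;
}

Note that a loop like this exercises exactly one access pattern and one consistency level; a production data service issuing smaller, concurrent, or synchronized operations can fall well short of the figure such a sketch reports, which is precisely the kind of disconnect the paper examines.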

The use of low-level microbenchmarks also entails risk, however, especially if the benchmark's behavior does not reflect the nuances of production data services or applications. In those cases, microbenchmark measurements can lead to unrealistic expectations or misdiagnosis of performance problems. Neither benchmark creators nor software developers are necessarily at fault in this scenario; the underlying problem is more often a subtle disconnect between the objective of the benchmark and the objective of the developer.

In this paper, we investigate examples of discrepancies between microbenchmark behavior and software developer expectations. Our goal is to draw attention to these pitfalls and initiate a discussion within the community about how to improve the state of the practice in performance engineering for HPC data services.

