SC20 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

Challenges of GPU-Aware Communication in MPI


Workshop: ExaMPI: Workshop on Exascale MPI

Authors: Nathan Hanford, Ramesh Pankajakshan, Edgar A. Leon, and Ian Karlin (Lawrence Livermore National Laboratory)


Abstract: GPUs are increasingly popular in HPC systems and applications. However, the communication bottleneck between GPUs distributed across nodes within a cluster has limited the achievable scalability of GPU-centric applications. Advances in inter-node GPU communication, such as NVIDIA's GPUDirect, have made great strides in addressing this issue, and the added software development complexity has been mitigated by simplified GPU programming paradigms such as unified or managed memory. To understand the performance of these new features, new benchmarks have been developed. Unfortunately, these benchmark efforts do not include correctness checking or certain messaging patterns used in applications. In this paper we highlight important gaps in communication benchmarks and motivate a methodology to help application developers understand the performance tradeoffs of different data movement options. Furthermore, we share systems tuning and deployment experiences across different GPU-aware MPI implementations. In particular, we demonstrate that correctness testing is needed alongside performance testing through modifications to an existing benchmark. In addition, we present a case study in which existing benchmarks fail to characterize how data is moved within SW4, a seismic wave application, and create a benchmark to model this behavior. Finally, we motivate the need for an application-inspired benchmark methodology to assess system performance and guide application programmers on how to use the system more efficiently.
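
For readers unfamiliar with GPU-aware communication, the sketch below is a minimal illustration (not the paper's benchmark) of the pattern the abstract describes: a CUDA device buffer passed directly to MPI point-to-point calls, followed by a simple correctness check on the receiving rank. It assumes a CUDA-aware MPI build that accepts device pointers; buffer size and data pattern are arbitrary choices for illustration.

/* Illustrative sketch only: GPU-aware MPI send/recv with a correctness
 * check. Assumes a CUDA-aware MPI library so device pointers can be
 * passed directly to MPI calls (e.g., via GPUDirect). */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;  /* arbitrary message size: 1 Mi doubles */
    double *dev_buf;
    double *host_buf = (double *)malloc(n * sizeof(double));
    cudaMalloc((void **)&dev_buf, n * sizeof(double));

    if (rank == 0) {
        /* Fill the device buffer with a known pattern, then send it
         * directly from device memory. */
        for (int i = 0; i < n; i++) host_buf[i] = (double)i;
        cudaMemcpy(dev_buf, host_buf, n * sizeof(double), cudaMemcpyHostToDevice);
        MPI_Send(dev_buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Receive directly into device memory, then copy back to the
         * host and verify the contents (the correctness check). */
        MPI_Recv(dev_buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        cudaMemcpy(host_buf, dev_buf, n * sizeof(double), cudaMemcpyDeviceToHost);
        int errors = 0;
        for (int i = 0; i < n; i++)
            if (host_buf[i] != (double)i) errors++;
        printf("rank 1: %d mismatches\n", errors);
    }

    cudaFree(dev_buf);
    free(host_buf);
    MPI_Finalize();
    return 0;
}

A benchmark that only times the MPI_Send/MPI_Recv pair would report bandwidth but could silently pass corrupted data; adding the verification loop, as the paper argues, catches such failures.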




