SC20 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

Optimizing Deep Learning Recommender Systems Training on CPU Cluster Architectures

Authors: Dhiraj Kalamkar, Evangelos Georganas, Sudarshan Srinivasan, Jianping Chen, Mikhail Shiryaev, and Alexander Heinecke (Intel Corporation)

Abstract: During the last two years, the goal of many researchers has been to squeeze the last bit of performance out of HPC systems for AI tasks. ResNet50 is no longer a representative workload in 2020. Thus, we focus on Recommender Systems, specifically Facebook's DLRM benchmark, which accounts for most of the AI cycles in cloud computing centers. By enabling it to run on the latest CPU hardware and software tailored for HPC, we achieve up to two orders of magnitude improvement in performance on a single socket compared to the reference CPU implementation, and high scaling efficiency up to 64 sockets, while fitting ultra-large datasets. This paper discusses and analyzes novel optimization and parallelization techniques for the various operators in DLRM. Several optimizations (e.g., tensor-contraction accelerated MLPs, framework MPI progression, BFLOAT16 training with up to 1.8x speed-up) are general and transferable to many other deep learning topologies.
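The BFLOAT16 training mentioned in the abstract relies on a 16-bit format that keeps float32's 8-bit exponent but truncates the mantissa to 7 bits. The sketch below, which is not the paper's implementation but a minimal illustration of the format, emulates the conversion by masking off the low 16 bits of a float32 bit pattern (hardware typically uses round-to-nearest rather than truncation):

```python
import struct

def to_bfloat16(x: float) -> float:
    """Emulate bfloat16 by keeping only the top 16 bits of the
    float32 representation: 1 sign bit, 8 exponent bits, 7 mantissa
    bits. This is the truncation variant of the conversion."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

# Powers of two survive exactly; other values lose low mantissa bits,
# giving a relative error of at most about 2**-8 (~0.4%).
print(to_bfloat16(1.0))      # -> 1.0
print(to_bfloat16(3.14159))  # close to pi, low bits dropped
```

Because the exponent range matches float32, values rarely overflow or underflow when switching from float32, which is why bfloat16 training usually needs no loss scaling, unlike IEEE float16.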