Optimizing Deep Learning Recommender Systems Training on CPU Cluster Architectures
Session: Distributed Deep Learning
Event Type: Paper
Topics: Accelerators, FPGA, and GPUs; Machine Learning, Deep Learning and Artificial Intelligence; Scalable Computing
Time: Wednesday, 18 November 2020, 10:00am - 10:30am EST
Location: Track 3
Description: Over the last two years, many researchers have aimed to squeeze the last bit of performance out of HPC systems for AI tasks. ResNet50 is no longer a representative workload in 2020. We therefore focus on recommender systems, specifically Facebook's DLRM benchmark, as these workloads account for most of the AI cycles in cloud computing centers. By enabling DLRM to run on the latest CPU hardware and software tailored for HPC, we achieve up to two orders of magnitude performance improvement on a single socket compared with the reference CPU implementation, and high scaling efficiency up to 64 sockets, while accommodating ultra-large datasets. This paper discusses and analyzes novel optimization and parallelization techniques for the various operators in DLRM. Several of these optimizations (e.g., tensor-contraction accelerated MLPs, framework MPI progression, and BFLOAT16 training with up to 1.8x speed-up) are general and transferable to many other deep learning topologies.
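To make the first of these ideas concrete: a fully connected (MLP) layer over a mini-batch is a single tensor contraction, which is why a fast contraction/GEMM backend can accelerate it. The sketch below is only an illustration of that general idea, not the paper's kernel; the array names and sizes are made up.

```python
# Illustrative sketch (not the paper's implementation): an MLP layer
# over a mini-batch expressed as one tensor contraction via einsum.
import numpy as np

B, I, O = 2048, 512, 256                      # batch, input dim, output dim (arbitrary)
x = np.random.rand(B, I).astype(np.float32)   # input activations
W = np.random.rand(O, I).astype(np.float32)   # layer weights
b = np.random.rand(O).astype(np.float32)      # bias

# y[n, o] = sum_i x[n, i] * W[o, i] -- a single contraction over i,
# i.e. a GEMM that a tuned tensor-contraction library can accelerate.
y = np.einsum("ni,oi->no", x, W) + b
y = np.maximum(y, 0.0)                        # ReLU activation
```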
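"MPI progression" refers to keeping communication advancing while compute proceeds. As a minimal sketch of that communication/computation overlap idea (again, not the paper's mechanism, and the buffer names are hypothetical), a non-blocking allreduce of gradients can be started, useful work done, and the result awaited afterwards:

```python
# Illustrative sketch: overlapping a gradient allreduce with compute
# using non-blocking MPI collectives (mpi4py). Not the paper's code.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD

grad = np.random.rand(1 << 20)           # pretend local gradient buffer
reduced = np.empty_like(grad)

# Start the reduction without blocking ...
req = comm.Iallreduce(grad, reduced, op=MPI.SUM)

# ... and do useful work while the collective progresses in the
# background (placeholder compute standing in for the next layer).
_ = np.tanh(grad)

req.Wait()                               # complete the collective
reduced /= comm.Get_size()               # average gradients across ranks
```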
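Finally, the BFLOAT16 training mentioned above follows the common mixed-precision pattern: run the compute-heavy forward pass in bfloat16 while keeping master weights and the optimizer step in FP32. A minimal PyTorch sketch of that pattern on CPU (model shape and data are placeholders, not the paper's setup):

```python
# Illustrative sketch: BFLOAT16 mixed-precision training on CPU with
# PyTorch autocast. Parameters stay in FP32; the forward pass runs in
# bfloat16. Not the paper's implementation.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()

x = torch.randn(128, 512)                   # dummy dense features
y = torch.randint(0, 2, (128, 1)).float()   # dummy click labels

for step in range(10):
    optimizer.zero_grad()
    # Forward pass in bfloat16; weights remain FP32 master copies.
    with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        logits = model(x)
    loss = loss_fn(logits.float(), y)       # loss/backward in FP32
    loss.backward()
    optimizer.step()
```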