SC20 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

Convolutional Neural Network Training with Distributed K-FAC


Authors: J. Gregory Pauloski (University of Texas); Zhao Zhang, Lei Huang, and Weijia Xu (Texas Advanced Computing Center (TACC)); and Ian T. Foster (University of Chicago, Argonne National Laboratory (ANL))

Abstract: Training neural networks with many processors can reduce time-to-solution; it is challenging, however, to maintain convergence and efficiency at large scales. The Kronecker-factored Approximate Curvature (K-FAC) method was recently proposed as an approximation to the Fisher information matrix that can be used in natural gradient optimizers. We investigate here a scalable K-FAC design and its applicability to convolutional neural network (CNN) training at scale. We study optimization techniques such as layer-wise distribution strategies, inverse-free second-order gradient evaluation, and dynamic K-FAC update decoupling to reduce training time while preserving convergence. We use residual neural networks (ResNet) applied to the CIFAR-10 and ImageNet-1k datasets to evaluate the correctness and scalability of our K-FAC gradient preconditioner. With ResNet-50 on the ImageNet-1k dataset, our distributed K-FAC implementation converges to the 75.9% MLPerf baseline in 18–25% less time than the classic stochastic gradient descent (SGD) optimizer across scales on a GPU cluster.
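For readers unfamiliar with K-FAC preconditioning, the sketch below illustrates the core idea for a single fully-connected layer: the layer's Fisher block is approximated as a Kronecker product of an input-activation covariance factor A and an output-gradient covariance factor G, so the preconditioned gradient can be computed as G^{-1} (grad W) A^{-1}. This is a minimal, generic illustration; the function name, shapes, and damping value are assumptions for exposition and do not reflect the paper's distributed implementation or its inverse-free evaluation strategy.

    import numpy as np

    def kfac_precondition(grad_W, a, g, damping=1e-3):
        """Illustrative K-FAC preconditioning for one fully-connected layer.

        grad_W: (out, in)   gradient of the loss w.r.t. the layer's weights
        a:      (batch, in) layer inputs (activations)
        g:      (batch, out) gradients w.r.t. the layer's pre-activation outputs
        """
        batch = a.shape[0]
        A = a.T @ a / batch   # input-activation covariance factor
        G = g.T @ g / batch   # output-gradient covariance factor
        # Tikhonov damping keeps both factors well conditioned and invertible.
        A_inv = np.linalg.inv(A + damping * np.eye(A.shape[0]))
        G_inv = np.linalg.inv(G + damping * np.eye(G.shape[0]))
        # Applying (A kron G)^{-1} to vec(grad_W) is equivalent to
        # G^{-1} grad_W A^{-1} on the matrix-shaped gradient.
        return G_inv @ grad_W @ A_inv

In a distributed setting of the kind the abstract describes, each layer's factor computations and inversions can be assigned to different workers (a layer-wise distribution strategy), with the preconditioned gradients then broadcast back; the snippet above shows only the single-layer math, not that distribution logic.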



