Authors: Cong Guo (Shanghai Jiao Tong University); Bo Yang Hsueh (Nvidia Corporation); Jingwen Leng (Shanghai Jiao Tong University, Shanghai Qi Zhi Institute); Yuxian Qiu and Yue Guan (Shanghai Jiao Tong University); Zehuan Wang, Xiaoying Jia, and Xipeng Li (Nvidia Corporation); Minyi Guo (Shanghai Jiao Tong University, Shanghai Qi Zhi Institute); and Yuhao Zhu (University of Rochester)
Abstract: Network pruning can reduce the high computational cost of deep neural network (DNN) models. To maintain accuracy, however, sparse models often carry randomly distributed weights, leading to irregular computations. Consequently, sparse models cannot achieve meaningful speedup on commodity hardware (e.g., GPUs) built for dense matrix computations. As a result, prior work typically modifies existing architectures or designs entirely new sparsity-optimized ones to exploit sparsity. We propose an algorithm-software co-designed pruning method that achieves latency speedups on existing dense architectures.
Our work builds upon the insight that matrix multiplication is generally implemented by breaking a large matrix into multiple smaller tiles for parallel execution. We propose a tiling-friendly "tile-wise" sparsity pattern, which maintains a regular pattern at the tile level for efficient execution but allows for irregular, arbitrary pruning at the global scale to maintain high accuracy. We implement and evaluate the sparsity pattern on GPU tensor cores, achieving a 1.95x speedup over the dense model.
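To make the idea concrete, the following is a minimal NumPy sketch of one plausible form of tile-wise pruning. The tile width, the magnitude-based column selection, and the fixed per-tile keep ratio are illustrative assumptions for this sketch, not the paper's exact algorithm; the point is only that each tile keeps a regular (dense) sub-block of columns while different tiles may keep different columns, so the global pattern remains irregular.

    import numpy as np

    def tile_wise_prune(W, tile_cols=16, keep_ratio=0.5):
        """Illustrative sketch: prune whole columns inside each
        column tile of W. Surviving columns form a regular dense
        sub-block per tile, but the kept columns differ from tile
        to tile, so the global sparsity pattern stays irregular.
        (tile_cols and keep_ratio are assumed parameters, not the
        paper's settings.)"""
        W = np.asarray(W, dtype=np.float32)
        mask = np.zeros_like(W)
        n_cols = W.shape[1]
        for start in range(0, n_cols, tile_cols):
            tile = W[:, start:start + tile_cols]
            # Rank this tile's columns by L2 magnitude and keep the top-k.
            scores = np.linalg.norm(tile, axis=0)
            k = max(1, int(round(keep_ratio * tile.shape[1])))
            keep = np.argsort(scores)[-k:]
            mask[:, start + keep] = 1.0
        return W * mask, mask

    # Example: prune a random 64x64 weight matrix, keeping half
    # of the columns in every 16-column tile.
    W = np.random.randn(64, 64).astype(np.float32)
    W_sparse, mask = tile_wise_prune(W, tile_cols=16, keep_ratio=0.5)
    print(mask.sum() / mask.size)  # ~0.5 density

Because each tile's surviving columns can be compacted into a smaller dense block, each tile still maps onto the dense tile-level matrix-multiplication primitives that GPUs already execute efficiently, which is what allows speedup without sparsity-specific hardware.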