SC20 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

Term Quantization: Furthering Quantization at Run Time


Authors: HT Kung (Harvard University), Bradley McDanel (Franklin and Marshall College), and Sai Qian Zhang (Harvard University)

Abstract: We present a novel technique, called Term Quantization (TQ), for furthering quantization at run time to improve the computational efficiency of deep neural networks (DNNs) already quantized with conventional quantization methods. TQ operates on the power-of-two terms in the binary expressions of values. When computing a dot product, TQ dynamically selects a fixed number of the largest terms from the values of the two vectors. By exploiting the weight and data distributions typically present in DNNs, TQ has a minimal impact on DNN model performance (e.g., accuracy or perplexity). We use TQ to facilitate tightly synchronized processor arrays, such as systolic arrays, for efficient parallel processing. We evaluate TQ on an MLP for MNIST, multiple CNNs for ImageNet, and an LSTM for Wikitext-2, and demonstrate significant reductions in inference computation cost (between 3x and 10x) compared to conventional uniform quantization at the same level of model performance.
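
For illustration, below is a minimal sketch in Python of the term-selection idea described in the abstract, assuming integer (already uniformly quantized) operands and a single group-level term budget per vector. The function names (to_terms, term_quantize_group, approx_dot) and the exact grouping and selection policy are illustrative assumptions, not taken from the paper.

def to_terms(x):
    """Return the exponents of the power-of-two terms (set bits) of |x|."""
    exps, v, e = [], abs(int(x)), 0
    while v:
        if v & 1:
            exps.append(e)
        v >>= 1
        e += 1
    return exps  # e.g., 13 = 2^3 + 2^2 + 2^0 -> [0, 2, 3]

def term_quantize_group(values, budget):
    """Keep only the `budget` largest power-of-two terms across a group of values."""
    terms = []  # (exponent, index, sign) for every term in the group
    for i, v in enumerate(values):
        sign = -1 if v < 0 else 1
        terms.extend((e, i, sign) for e in to_terms(v))
    terms.sort(key=lambda t: t[0], reverse=True)  # largest exponents first
    approx = [0] * len(values)
    for e, i, sign in terms[:budget]:
        approx[i] += sign * (1 << e)  # reconstruct each value from its kept terms
    return approx

def approx_dot(w, a, budget):
    """Dot product with both operand groups term-quantized to `budget` terms each."""
    wq = term_quantize_group(w, budget)
    aq = term_quantize_group(a, budget)
    return sum(x * y for x, y in zip(wq, aq))

# Example: small integer weights/activations, keeping 8 terms per group
# (two terms per value on average).
w = [13, -7, 2, 21]
a = [5, 9, -14, 3]
print("exact dot:", sum(x * y for x, y in zip(w, a)))
print("TQ dot:", approx_dot(w, a, budget=8))

In the paper's setting, a tightly synchronized processor array benefits because every group contributes the same fixed number of terms, so all lanes stay in lockstep; the sketch above only shows the arithmetic side of that idea.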



