Workshop: IA^3 2020: 10th Workshop on Irregular Applications: Architectures and Algorithms
Author: Mary Hall (University of Utah)
Abstract: Scalable computations over sparse data — that is, data in which only a tiny subset of elements is populated — are widespread in scientific computing, data analytics, and machine learning. Sparse data are typically represented by sparse matrices and graphs, which reduce storage and computation requirements (e.g., by storing only nonzero elements) through the use of index arrays such as B in the indirect access A[B[i]]. This indirection impacts performance on modern architectures: whether the code is optimized by a human programmer, a compiler, or hardware, any optimization that must understand the memory access pattern of A requires the values of B, which are not known until runtime. Generating high-performance code for sparse computations is made more challenging by modern architectures, where data movement cost is dominant. The intense interest in machine learning systems has spurred significant interest in compiler technology for sparse tensors. In this talk, we will approach this challenge from a compiler perspective and describe active areas of research and future requirements.
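As a concrete illustration of the indirection the abstract describes, the following is a minimal sketch of sparse matrix–vector multiply in compressed sparse row (CSR) format. The names `rowptr`, `colind`, and `vals` are illustrative, not from the talk; `colind` plays the role of B in the access A[B[i]], and its values are unavailable to an optimizer until runtime.

```python
# Minimal CSR sketch (illustrative names, not from the talk).
# The 3x3 matrix
#   [[10,  0,  0],
#    [ 0, 20, 30],
#    [ 0,  0, 40]]
# stores only its four nonzeros.
rowptr = [0, 1, 3, 4]            # start of each row within vals/colind
colind = [0, 1, 2, 2]            # column index of each nonzero (the "B" array)
vals   = [10.0, 20.0, 30.0, 40.0]

def spmv(rowptr, colind, vals, x):
    """Compute y = A @ x; x[colind[j]] is the indirect access A[B[i]]."""
    y = [0.0] * (len(rowptr) - 1)
    for row in range(len(y)):
        for j in range(rowptr[row], rowptr[row + 1]):
            # Indirect load: the index into x is itself loaded from memory,
            # so the access pattern is unknown until runtime.
            y[row] += vals[j] * x[colind[j]]
    return y
```

Because `x[colind[j]]` depends on runtime data, static techniques such as dependence analysis or prefetch planning cannot resolve the access pattern at compile time, which is the core difficulty the talk addresses.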