We address the problem of optimizing sparse tensor algebra in a compiler and show how to define standard loop transformations—split, collapse, and reorder—on sparse iteration spaces. The key idea is to track the transformation functions that map the original iteration space to derived iteration spaces. These functions are needed by the code generator to emit code that maps coordinates between iteration spaces at runtime, since the coordinates in the sparse data structures remain in the original iteration space. The result is a new iteration order that still iterates over only the subset of nonzero coordinates defined by the data structures. We further demonstrate that derived iteration spaces can tile both the universe of coordinates and the subset of nonzero coordinates: the former is analogous to tiling dense iteration spaces, while the latter tiles sparse iteration spaces into statically load-balanced blocks of nonzeros. Tiling the space of nonzeros lets the generated code efficiently exploit heterogeneous compute resources such as threads, vector units, and GPUs.
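The coordinate-mapping idea can be illustrated with a small sketch (this is an illustrative example, not TACO's implementation; all function and variable names here are hypothetical). It computes a sparse matrix-vector product over a CSR matrix while splitting the position space of nonzeros, rather than the row dimension, into fixed-size blocks, so each block holds roughly the same number of nonzeros regardless of how they are distributed across rows. Because the derived loops run over block and position indices, the original row coordinate must be recovered at runtime from the CSR row-pointer array, which is the role the abstract assigns to the transformation functions tracked by the code generator:

```python
import bisect

def spmv_pos_split(pos, crd, vals, x, num_rows, block_size):
    """y = A @ x for a CSR matrix (pos, crd, vals), with the space of
    nonzeros tiled into statically load-balanced blocks of block_size."""
    y = [0.0] * num_rows
    nnz = len(vals)
    num_blocks = (nnz + block_size - 1) // block_size
    for b in range(num_blocks):
        # Each block covers about block_size nonzeros, independent of the
        # row structure, so blocks could be distributed evenly across
        # threads or GPU thread blocks.
        for p in range(b * block_size, min((b + 1) * block_size, nnz)):
            # Map the derived position coordinate p back to the original
            # row coordinate i using the CSR row-pointer array: this is
            # the runtime coordinate mapping between iteration spaces.
            i = bisect.bisect_right(pos, p) - 1
            y[i] += vals[p] * x[crd[p]]
    return y
```

In contrast, splitting the row loop (the universe of coordinates) would assign each block a fixed range of rows, which can be badly imbalanced when nonzeros cluster in a few rows; splitting the position space avoids that imbalance at the cost of the runtime coordinate recovery shown above.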

We implement these concepts and an associated scheduling API in the open-source TACO system. The scheduling API can be used directly by performance engineers, or it can be the target of an automatic scheduling system. We outline one heuristic autoscheduling system, but many other systems are possible and enabled by our API. Using the scheduling API, we show how to optimize sparse and mixed sparse-dense tensor algebra expressions on both CPUs and GPUs. Our results show that the sparse transformations are sufficient to generate code with performance competitive with hand-optimized implementations from the literature, while generalizing to all of tensor algebra.

@article{senanayake2020sparse,
  title={A Sparse Iteration Space Transformation Framework for Sparse Tensor Algebra},
  author={Ryan Senanayake and Changwan Hong and Ziheng Wang and Amalee Wilson and Stephen Chou and Shoaib Kamil and Saman Amarasinghe and Fredrik Kjolstad},
  journal={Proceedings of the ACM on Programming Languages},
  volume={4},
  number={OOPSLA},
  year={2020}
}