Abstract

GPU architectures have continued to grow in complexity, with recent incarnations introducing increasingly powerful fixed-function units for matrix multiplication and data movement to accompany highly parallel general-purpose cores. To fully leverage these machines, software must use sophisticated schedules that maximally utilize all hardware resources. Since realizing such schedules is complex, both programmers and compilers routinely employ program transformations, such as software pipelining (SWP) and warp specialization (WS), to do so in practice. However, determining how best to use SWP and WS in combination is a challenging problem that is currently handled through a mix of brittle compilation heuristics and fallible human intuition, with little insight into the space of solutions. To remedy this situation, we introduce a novel formulation of SWP and WS as a joint optimization problem that can be solved holistically by off-the-shelf constraint solvers. We reify our approach in Twill, the first system that automatically derives optimal SWP and WS schedules for a large class of iterative programs. Twill is heuristic-free, easily extensible to new GPU architectures, and guaranteed to produce optimal schedules. We show that Twill can rediscover, and thereby prove optimal, the SWP and WS schedules manually developed by experts for Flash Attention on both the NVIDIA Hopper and Blackwell GPU architectures.

Article

pdf

BibTeX

@inproceedings{rupanshusoi2026,
  title={Optimal Software Pipelining and Warp Specialization for Tensor Core GPUs},
  author={Rupanshu Soi and Rohan Yadav and Fredrik Kjolstad and Alex Aiken and Maryam Dehnavi and Michael Garland and Michael Bauer},
  booktitle={Proceedings of the Symposium on Operating Systems Design and Implementation},
  year={2026},
  month={July}
}