Abstract
Optimizing deep learning algorithms currently requires slow, manual derivation, potentially
leaving much performance untapped. Methods like FlashAttention have achieved a 6×
performance improvement over native PyTorch by avoiding unnecessary data transfers, but
took three iterations over three years to develop. Automated compilation methods
have consistently lagged behind. This paper extends Neural Circuit Diagrams for deep
learning models to consider resource usage and the distribution of tasks across a GPU
hierarchy. We show how diagrams can use simple relabellings to derive high-level streaming
and tiling optimization strategies along with performance models. These high-level
performance models allow the effects of quantization and multi-level GPU hierarchies
to be readily considered. We develop a methodology for representing intermediate-level
pseudocode with diagrams, allowing hardware-aware algorithms to be derived step-by-step.
Finally, we show how our methodology can be used to better understand existing techniques
like FlashAttention. This work uses a theoretical framework to link assumptions about
GPU behaviour to claims about performance. We aim to lay the groundwork for a scientific
approach to GPU optimization in which experiments address clear hypotheses rather than
serve as post-hoc rationalizations.