klotz: laboratory for information and decision systems*


  1. Abstract
    Optimizing deep learning algorithms currently requires slow, manual derivation, potentially
    leaving much performance untapped. Methods like FlashAttention have achieved a 6×
    performance improvement over native PyTorch by avoiding unnecessary data transfers, but
    took three iterations over three years to develop. Automated compilation methods
    have consistently lagged behind. This paper extends Neural Circuit Diagrams for deep
    learning models to consider resource usage and the distribution of tasks across a GPU
    hierarchy. We show how diagrams can use simple relabellings to derive high-level streaming
    and tiling optimization strategies along with performance models. We show how this
    high-level performance model allows the effects of quantization and multi-level GPU
    hierarchies to be readily considered. We develop a methodology for representing
    intermediate-level pseudocode with diagrams, allowing hardware-aware algorithms to be
    derived step-by-step. Finally, we show how our methodology can be used to better understand
    existing techniques like FlashAttention. This work uses a theoretical framework to link
    assumptions about GPU behaviour to claims about performance. We aim to lay the groundwork
    for a scientific approach to GPU optimization where experiments address clear hypotheses
    rather than post-hoc rationalizations.
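    The streaming and tiling strategies the abstract refers to can be illustrated with a
    minimal sketch: instead of materializing the full attention score matrix, keys and values
    are processed in tiles while running softmax statistics (row maximum, normalizer, weighted
    sum) are maintained, which is the core idea behind FlashAttention-style algorithms. This is
    an illustrative NumPy sketch under stated assumptions, not the paper's diagrammatic
    derivation; the function names and tile size are assumptions for demonstration only.

```python
import numpy as np

def attention_naive(Q, K, V):
    # Standard attention: materializes the full (n, n) score matrix,
    # which is what streaming/tiling strategies aim to avoid.
    S = Q @ K.T / np.sqrt(Q.shape[-1])
    P = np.exp(S - S.max(axis=-1, keepdims=True))
    return (P / P.sum(axis=-1, keepdims=True)) @ V

def attention_tiled(Q, K, V, tile=4):
    # Streaming attention: visits K/V in tiles, keeping only running
    # per-row statistics. Tile size is illustrative, not tuned to any
    # particular level of the GPU memory hierarchy.
    n, d = Q.shape
    m = np.full(n, -np.inf)            # running row maximum of scores
    l = np.zeros(n)                    # running softmax normalizer
    acc = np.zeros((n, V.shape[-1]))   # running weighted sum of V rows
    for j in range(0, K.shape[0], tile):
        S = Q @ K[j:j + tile].T / np.sqrt(d)
        m_new = np.maximum(m, S.max(axis=-1))
        scale = np.exp(m - m_new)      # rescale old statistics to new max
        P = np.exp(S - m_new[:, None])
        l = l * scale + P.sum(axis=-1)
        acc = acc * scale[:, None] + P @ V[j:j + tile]
        m = m_new
    return acc / l[:, None]
```

    On real hardware, each tile would be sized to fit in fast on-chip memory, so the
    (n, n) score matrix never needs to travel through slower levels of the hierarchy.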


