klotz: attention*


  1. This article explores strategies for effectively curating and managing the context that powers AI agents, discussing the shift from prompt engineering to context engineering and techniques for optimizing context usage in LLMs.
  2. An in-depth look at the architecture of OpenAI's GPT-OSS models, detailing tokenization, embeddings, transformer blocks, Mixture of Experts, attention mechanisms (GQA and RoPE), and quantization techniques.
  3. This blog post explains the causes of nondeterminism in LLM inference, arguing that it is not simply due to floating-point non-associativity and concurrency, but rather a lack of batch invariance in kernels. It details how to achieve batch invariance in RMSNorm, matrix multiplication, and attention, and presents experimental results demonstrating deterministic completions and the benefits for on-policy RL. (A toy demonstration of the underlying floating-point non-associativity appears after this list.)
  4. This paper investigates how large language models (LLMs) solve mental math problems. It proposes that meaningful computation occurs late in the network (in terms of layer depth) and primarily at the last token, receiving information from other tokens in specific middle layers. The authors introduce techniques (CAMA and ABP) to identify an 'All-for-One' subgraph responsible for this behavior, demonstrating its sufficiency and necessity for high performance across various models and input styles.
  5. A detailed comparison of the architectures of recent large language models (LLMs) including DeepSeek-V3, OLMo 2, Gemma 3, Mistral Small 3.1, Llama 4, Qwen3, SmolLM3, and Kimi K2, focusing on key design choices and their impact on performance and efficiency.
  6. This article demonstrates how to use the attention mechanism in a time series classification framework, specifically for classifying normal sine waves versus 'modified' (flattened) sine waves. It details the data generation, model implementation (using a bidirectional LSTM with attention), and results, achieving high accuracy.
  7. >"I cover briefly what SVD is, and why it is central to making attention work really well."
    2025-05-19 by klotz
  8. This article details the release of Gemma 3, the latest iteration of Google’s open-weights language model. Key improvements include **vision-language capabilities** (using a tailored SigLIP encoder), **increased context length** (up to 128k tokens for larger models), and **architectural changes for improved memory efficiency** (5-to-1 interleaved attention and removal of softcapping). Gemma 3 demonstrates superior performance compared to Gemma 2 across benchmarks and offers models optimized for various use cases, including on-device applications with the 1B model.
    2025-05-01 by klotz
  9. This article provides a beginner-friendly explanation of attention mechanisms and transformer models, covering sequence-to-sequence modeling, the limitations of RNNs, the concept of attention, and how transformers address these limitations with self-attention and parallelization.
  10. The attention mechanism in Large Language Models (LLMs) helps derive the meaning of a word from its context. This involves encoding words as multi-dimensional vectors, calculating query and key vectors, and using attention weights to adjust the embedding based on contextual relevance.
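
The last entry describes the query/key/attention-weight mechanics in prose; below is a minimal NumPy sketch of single-head scaled dot-product attention over toy embeddings (the names, shapes, and random projection matrices are illustrative assumptions, not any particular model's implementation):

```python
# Single-head scaled dot-product attention over toy "word" embeddings.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                      # 4 words, 8-dimensional embeddings
X = rng.normal(size=(seq_len, d_model))      # input word embeddings

# Learned projections (random stand-ins here) map each embedding to query, key, and value vectors.
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = X @ W_q, X @ W_k, X @ W_v

# Attention weights: how strongly each word attends to every word in its context.
weights = softmax(Q @ K.T / np.sqrt(d_model))

# Each output is a context-weighted mixture of value vectors, i.e. the embedding
# adjusted for contextual relevance.
contextualized = weights @ V
print(weights.round(2))                      # each row sums to 1
print(contextualized.shape)                  # (4, 8)
```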
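
As referenced in entry 3, floating-point addition is not associative, which is the starting point for that post's discussion of nondeterminism. The snippet below is a toy reproduction of that single fact only; it does not implement the post's batch-invariant kernels:

```python
# Reordering a floating-point reduction can change its result.
import random

random.seed(0)
values = [random.uniform(-1.0, 1.0) * 10.0 ** random.randint(-8, 8) for _ in range(10_000)]

forward = sum(values)
backward = sum(reversed(values))
print(forward == backward)      # typically False for values of mixed magnitude
print(abs(forward - backward))  # small but nonzero discrepancy
```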
