The paper titled "Attention Is All You Need" introduces the Transformer, a novel architecture for sequence transduction models that relies entirely on self-attention mechanisms, dispensing with recurrence and convolutions altogether.
The paper emphasizes the efficiency and scalability of the Transformer, highlighting its potential for various sequence transduction tasks, and provides a foundation for subsequent advancements in natural language processing and beyond.
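For reference, the central operation the paper builds on is scaled dot-product attention, in which query, key, and value matrices Q, K, and V are compared and combined:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$$

where d_k is the dimension of the key vectors; the scaling factor keeps the dot products in a range where the softmax produces useful gradients.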
This article is part of a series titled ‘LLMs from Scratch’, a complete guide to understanding and building Large Language Models (LLMs). In this article, we discuss the self-attention mechanism and how transformers use it to create rich, context-aware embeddings.
The self-attention mechanism is used to add context to learned embeddings, which are vectors representing each word in the input sequence. The process involves the following steps, illustrated in the code sketch that follows them:
Learned Embeddings: These are the initial vector representations of words, learned during the training phase. The weight matrix holding these learned embeddings is stored in the first layer of the Transformer architecture, the embedding layer.
Positional Encoding: This step adds positional information to the learned embeddings. Because transformers process all the words in a sequence in parallel, they have no inherent sense of word order; the positional encoding restores this information so the model can take it into account.
Self-Attention: The core of the self-attention mechanism is to update the learned embeddings with context from the surrounding words in the input sequence. The mechanism determines how strongly each word should attend to every other word, and this contextual information is used to produce the final contextualized embeddings.
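To make these steps concrete, here is a minimal PyTorch sketch of the pipeline just described: an embedding lookup, sinusoidal positional encoding, and a single scaled dot-product self-attention step. The vocabulary size, model dimension, and token IDs are placeholder values chosen for illustration, and a single attention head is shown rather than the multi-head attention of the full Transformer.

```python
import math
import torch
import torch.nn.functional as F

# Assumed toy dimensions, for illustration only.
vocab_size, d_model, seq_len = 100, 16, 4

# 1. Learned embeddings: a trainable weight matrix mapping token IDs to vectors.
embedding = torch.nn.Embedding(vocab_size, d_model)
token_ids = torch.tensor([[5, 23, 7, 42]])   # hypothetical token IDs, shape (1, seq_len)
x = embedding(token_ids)                      # (1, seq_len, d_model)

# 2. Positional encoding: add sinusoidal position information so the model
#    can recover word order despite processing all positions in parallel.
position = torch.arange(seq_len).unsqueeze(1)
div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
pe = torch.zeros(seq_len, d_model)
pe[:, 0::2] = torch.sin(position * div_term)
pe[:, 1::2] = torch.cos(position * div_term)
x = x + pe                                    # broadcast over the batch dimension

# 3. Self-attention: project to queries, keys, and values, score every pair of
#    positions, and mix the value vectors to produce contextualized embeddings.
W_q = torch.nn.Linear(d_model, d_model, bias=False)
W_k = torch.nn.Linear(d_model, d_model, bias=False)
W_v = torch.nn.Linear(d_model, d_model, bias=False)

Q, K, V = W_q(x), W_k(x), W_v(x)
scores = Q @ K.transpose(-2, -1) / math.sqrt(d_model)  # (1, seq_len, seq_len)
weights = F.softmax(scores, dim=-1)   # how much each word attends to every other word
contextual = weights @ V              # (1, seq_len, d_model) context-aware embeddings

print(contextual.shape)  # torch.Size([1, 4, 16])
```

In the full Transformer this operation is repeated across several heads and followed by feed-forward layers, but the contextualization step itself is this weighted mixing of value vectors.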