This paper presents a method to accelerate the grokking phenomenon, where a model's generalization improves with more training iterations after an initial overfitting stage. The authors propose a simple algorithmic modification to existing optimizers that filters out the fast-varying components of the gradients and amplifies the slow-varying components, thereby accelerating the grokking effect.
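A minimal sketch of the core idea, assuming the filtering is done with an exponential moving average (EMA) of each parameter's gradient: the slow-varying component is tracked by the EMA and amplified before the optimizer step. The hyperparameter names `alpha` and `lamb` below are illustrative assumptions, not necessarily the paper's.

```python
import torch

def filter_gradients(model, ema_grads, alpha=0.98, lamb=2.0):
    """Low-pass filter the gradients: keep an EMA of each parameter's
    gradient (the slow-varying component) and add an amplified copy of it
    back onto the raw gradient before the optimizer step.
    `alpha` (EMA decay) and `lamb` (amplification) are illustrative."""
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        if name not in ema_grads:
            ema_grads[name] = torch.zeros_like(p.grad)
        # slow component: exponential moving average of past gradients
        ema_grads[name].mul_(alpha).add_(p.grad, alpha=1 - alpha)
        # amplify the slow component on top of the current gradient
        p.grad.add_(ema_grads[name], alpha=lamb)

# usage inside a standard training loop (any torch optimizer):
# ema_grads = {}
# loss.backward()
# filter_gradients(model, ema_grads)
# optimizer.step(); optimizer.zero_grad()
```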
Training PRO extension for oobabooga WebUI - recent dev version. Key features and changes from the main Training in WebUI include:
- Chunking: the precise raw text slicer (PRTS) uses sentence splitting and makes sure chunks are clean on both ends
- Overlapping chunking: adds extra overlapping blocks based on logical rules
- Custom scheduler: FP_low_epoch_annealing keeps the LR constant for the first epoch and uses cosine annealing for the rest (see the sketch after this list)
- Target selector: normal LoRA targets are q and v; it should be used with (q k v o) or (q k v)
- DEMENTOR LEARNING (experimental): an experimental chunking method for training long-form text in a low number of epochs
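To illustrate the custom scheduler described above (constant LR for the first epoch, cosine annealing afterward), here is a minimal sketch built on PyTorch's `LambdaLR`. The function name and its arguments are illustrative assumptions, not the extension's actual code.

```python
import math
from torch.optim.lr_scheduler import LambdaLR

def fp_low_epoch_annealing(optimizer, steps_per_epoch, total_steps):
    """Hold the learning rate constant for the first epoch, then apply
    cosine annealing down to zero over the remaining steps. Mirrors the
    behavior described for FP_low_epoch_annealing; names and signature
    are illustrative, not the extension's actual API."""
    def lr_lambda(step):
        if step < steps_per_epoch:
            return 1.0  # first epoch: constant LR
        # cosine decay over the remaining steps
        progress = (step - steps_per_epoch) / max(1, total_steps - steps_per_epoch)
        return 0.5 * (1.0 + math.cos(math.pi * min(1.0, progress)))
    return LambdaLR(optimizer, lr_lambda)
```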
This article discusses the process of training a large language model (LLM) using reinforcement learning from human feedback (RLHF) and a new alternative method called Direct Preference Optimization (DPO). The article explains how these methods help align the LLM with human expectations and make it more efficient.
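As a concrete illustration of the DPO objective mentioned here, the sketch below computes the standard DPO loss from per-sequence log-probabilities under the trained policy and a frozen reference model; the argument names and the default `beta` value are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct Preference Optimization loss.

    Each argument is a tensor of summed log-probabilities of the chosen or
    rejected responses under the policy being trained or the frozen
    reference model. `beta` controls how far the policy may drift from
    the reference.
    """
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # maximize the margin between chosen and rejected responses
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```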
Delving into transformer networks
This repository is a curated collection of links to various courses and resources about Artificial Intelligence (AI).