klotz: training*

  1. This paper presents a method to accelerate grokking, the phenomenon where a model's generalization improves long after an initial overfitting stage. The authors propose a simple algorithmic modification to existing optimizers that filters out the fast-varying components of the gradients and amplifies the slow-varying components, thereby bringing the grokking transition forward.
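     The filtering idea fits in a few lines. Below is a minimal PyTorch sketch that uses an exponential moving average as the low-pass filter; the function name, hyperparameter defaults, and the EMA choice are illustrative assumptions, not the paper's exact code.

        import torch

        def lowpass_amplify_grads(params, ema, alpha=0.98, lamb=2.0):
            # Keep an EMA of each parameter's gradient (the slow
            # component) and add an amplified copy of it back onto the
            # raw gradient before the optimizer step. alpha sets the
            # filter cutoff, lamb the amplification; both defaults are
            # illustrative.
            for p in params:
                if p.grad is None:
                    continue
                k = id(p)
                if k not in ema:
                    ema[k] = torch.zeros_like(p.grad)
                ema[k].mul_(alpha).add_(p.grad, alpha=1.0 - alpha)
                p.grad.add_(ema[k], alpha=lamb)

        # Usage in a standard loop, between backward() and step():
        #   ema = {}
        #   loss.backward()
        #   lowpass_amplify_grads(model.parameters(), ema)
        #   optimizer.step()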
  2. Training PRO extension for the oobabooga WebUI (recent dev version). Key features and changes from the stock Training tab in the WebUI include:
    - Chunking: a precise raw text slicer (PRTS) that uses sentence splitting and makes sure chunks are clean on all ends
    - Overlapping chunking: builds additional overlap blocks between chunks based on logical rules
    - Custom scheduler: FP_low_epoch_annealing keeps the LR constant for the first epoch and uses cosine annealing for the rest (see the sketch after this list)
    - Target selector: a normal LoRA targets q and v; it should be used with (q k v o) or (q k v)
    - DEMENTOR LEARNING (experimental): a chunking scheme for training on long-form text in a low number of epochs
    2024-06-29 by klotz
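     The custom scheduler's described behavior (constant LR through epoch one, cosine annealing afterwards) can be reproduced with a standard PyTorch LambdaLR. A minimal sketch of that behavior, not the extension's actual code; the function name and the anneal-to-zero floor are assumptions.

        import math
        from torch.optim.lr_scheduler import LambdaLR

        def constant_then_cosine(optimizer, steps_per_epoch, total_steps):
            # Multiplier on the base LR: hold 1.0 through the first
            # epoch's steps, then follow a cosine curve down to 0 over
            # the remaining steps.
            def lr_lambda(step):
                if step < steps_per_epoch:
                    return 1.0
                progress = (step - steps_per_epoch) / max(1, total_steps - steps_per_epoch)
                return 0.5 * (1.0 + math.cos(math.pi * min(progress, 1.0)))
            return LambdaLR(optimizer, lr_lambda)

        # Call scheduler.step() once per optimizer step.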
  3. This article walks through training a large language model (LLM) with reinforcement learning from human feedback (RLHF) and a newer alternative, Direct Preference Optimization (DPO). It explains how these methods align the LLM with human expectations and how DPO achieves this more efficiently.
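     For reference, the DPO objective is compact: it scores how much more the trained policy prefers the chosen response over the rejected one, relative to a frozen reference model. A minimal sketch assuming summed per-response log-probabilities are already computed; beta = 0.1 is an illustrative default.

        import torch.nn.functional as F

        def dpo_loss(policy_chosen_logps, policy_rejected_logps,
                     ref_chosen_logps, ref_rejected_logps, beta=0.1):
            # Log-ratio of chosen vs. rejected under the policy, minus
            # the same ratio under the reference; beta controls how far
            # the policy may drift from the reference model.
            policy_logratio = policy_chosen_logps - policy_rejected_logps
            ref_logratio = ref_chosen_logps - ref_rejected_logps
            return -F.logsigmoid(beta * (policy_logratio - ref_logratio)).mean()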
  4. 2024-05-06 by klotz
  5. 2024-05-04 by klotz
  6. 2023-11-26 by klotz
  7. Delving into transformer networks
  8. This repository is a curated collection of links to various courses and resources about Artificial Intelligence (AI).
