Tags: llm* + lora*

9 bookmark(s)

  1. A look at this year’s crop of LoRA alternatives, including SVF, SVFT, MiLoRA, PiSSA, and LoRA-XS, all based on SVD (Singular Value Decomposition). The article compares these techniques to the original LoRA method for fine-tuning Large Language Models; a minimal code sketch contrasting LoRA with SVF follows the table.

    Method  | Description | Key Feature(s) | Reference
    ------- | ----------- | -------------- | ---------
    LoRA    | Freezes the model and trains a small pair of low-rank “adapter” matrices. | Saves memory and compute cycles by reducing the number of trainable parameters. | arxiv.org/abs/2106.09685
    SVF     | Applies SVD to the model’s weight matrices and fine-tunes the singular values directly. | More economical in parameters than LoRA; makes tuned models composable. | arxiv.org/abs/2501.06252v2
    SVFT    | Extends SVF with trainable weights beyond the diagonal, evaluating several sparsity patterns. | More trainable values than the diagonal alone, for better fine-tuning quality. | arxiv.org/abs/2405.19597
    PiSSA   | Tunes only the large (principal) singular values. | Designed to approximate full fine-tuning by adapting the principal singular components. | arxiv.org/abs/2404.02948
    MiLoRA  | Tunes only the small (minor) singular values. | Retains the base model’s knowledge while adapting to new tasks. | arxiv.org/abs/2406.09044
    LoRA-XS | Similar to PiSSA but with a slightly different mechanism. | Shows good results with significantly fewer parameters than LoRA. | arxiv.org/abs/2405.17604
    DoRA    | Splits weights into magnitudes and directions, then tunes both. | — | arxiv.org/abs/2402.09353
    AdaLoRA | Complex mechanism for finding the best tuning rank for a given budget of trainable weights. | — | arxiv.org/abs/2303.10512
    2025-03-14 by klotz
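    A minimal sketch, assuming plain PyTorch and a single square weight matrix, of the two core ideas above: LoRA trains a low-rank additive update, while SVF decomposes the frozen weight with SVD and trains only its singular values. Shapes, ranks, and names here are illustrative, not taken from the papers' code.

    ```python
    import torch

    d_out, d_in, r = 64, 64, 8
    W = torch.randn(d_out, d_in)   # frozen pretrained weight

    # LoRA: W' = W + B @ A, training r*(d_in + d_out) parameters.
    # B starts at zero so the update is initially a no-op.
    A = torch.randn(r, d_in, requires_grad=True)
    B = torch.zeros(d_out, r, requires_grad=True)
    W_lora = W + B @ A

    # SVF: W = U @ diag(S) @ Vh; train only S, i.e. min(d_in, d_out)
    # parameters -- far fewer than LoRA's adapter matrices.
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    S_tuned = S.clone().detach().requires_grad_(True)
    W_svf = U @ torch.diag(S_tuned) @ Vh
    ```

    PiSSA and MiLoRA use the same decomposition but restrict training to the largest or smallest singular components, respectively.
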
  2. Sergey Pletenev et al. explore the integration of new knowledge into Large Language Models (LLMs) using Low-Rank Adaptation (LoRA). The study fine-tunes the Llama-3.1-8B-instruct model with varying amounts of new information while aiming to retain previously learned knowledge. The researchers found that mixing known and new facts in the training data yields the best results, but also noted drawbacks: performance declines on external benchmarks, and when the data is skewed the model becomes biased towards overrepresented answers. The tuned model can also become overconfident, or conversely hesitant to answer at all. These findings emphasize the need for careful consideration of training-data composition and tuning parameters to balance the incorporation of new knowledge with maintaining overall model capabilities.
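
    A hedged illustration of the paper's central recipe, mixing facts the base model already answers correctly with genuinely new facts; the helper, ratio, and example records below are hypothetical, not taken from the study.

    ```python
    import random

    # Hypothetical records: "known" facts the base model already gets right,
    # "new" facts it has never seen. Real data would come from probing the model.
    known = [{"q": "What is the capital of France?", "a": "Paris"}]
    new = [{"q": "What is the codename of product Z?", "a": "Falcon"}]

    def build_mix(known, new, known_ratio=0.5, size=1000):
        """Compose a fine-tuning set with a fixed share of known facts."""
        n_known = int(size * known_ratio)
        mix = random.choices(known, k=n_known) + random.choices(new, k=size - n_known)
        random.shuffle(mix)
        return mix
    ```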

  3. This tutorial shows how to fine-tune the Mistral 7B large language model using QLoRA with the Axolotl library, focusing on efficient training under limited GPU resources. It covers environment setup, dataset creation, configuration of QLoRA hyperparameters, the fine-tuning run, and testing the fine-tuned model.
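
    The tutorial drives this through Axolotl's YAML config; as a rough sketch of the same knobs expressed directly in Hugging Face transformers + peft (hyperparameter values here are assumptions, not the tutorial's):

    ```python
    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model

    # Load the base model in 4-bit (the "Q" in QLoRA).
    bnb = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    model = AutoModelForCausalLM.from_pretrained(
        "mistralai/Mistral-7B-v0.1", quantization_config=bnb, device_map="auto"
    )

    # Attach LoRA adapters to the attention projections.
    lora = LoraConfig(
        r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    )
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()
    ```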

  4. The article explores techniques to improve Large Language Model (LLM) accuracy, focusing on Lamini Memory Tuning. It discusses fine-tuning methods like Low-Rank Adaptation (LoRA), the advantages and disadvantages of fine-tuning, and practical steps using Lamini to achieve higher precision in SQL query generation. The author demonstrates a step-by-step approach to creating a high-quality dataset, fine-tuning, and evaluating model accuracy (a generic sketch of the evaluation step follows below).

    2025-01-12 by klotz
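    Lamini's own API is not reproduced here; as a generic sketch of the evaluation step, one common way to score generated SQL is "execution accuracy": run the gold and predicted queries against a reference database and compare result sets (the function and database path are hypothetical).

    ```python
    import sqlite3

    def execution_match(db_path: str, gold_sql: str, pred_sql: str) -> bool:
        """True if the predicted query returns the same rows as the gold query."""
        con = sqlite3.connect(db_path)
        try:
            gold = con.execute(gold_sql).fetchall()
            try:
                pred = con.execute(pred_sql).fetchall()
            except sqlite3.Error:          # predicted SQL fails to run at all
                return False
            # Compare as sets to ignore row order; rows are tuples of primitives.
            return set(gold) == set(pred)
        finally:
            con.close()
    ```
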
  5. This article provides a comprehensive guide to fine-tuning the Llama 3.1 language model using Unsloth for parameter-efficient training. It covers concepts like supervised fine-tuning, LoRA, and QLoRA, plus practical steps for training on a high-quality dataset.
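
    A hedged sketch of the Unsloth setup such a guide describes; the model id and hyperparameters are assumptions, and Unsloth's argument names can vary between versions.

    ```python
    from unsloth import FastLanguageModel

    # Load Llama 3.1 as a 4-bit base (QLoRA-style) and attach LoRA adapters.
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/Meta-Llama-3.1-8B",  # assumed model id
        max_seq_length=2048,
        load_in_4bit=True,
    )
    model = FastLanguageModel.get_peft_model(
        model,
        r=16,
        lora_alpha=16,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                        "gate_proj", "up_proj", "down_proj"],
    )
    ```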

  6. A lightweight codebase that enables memory-efficient, performant fine-tuning of Mistral's models. It is based on LoRA, a training paradigm in which most weights are frozen and only 1-2% additional weights, in the form of low-rank matrix perturbations, are trained.
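
    The "1-2% additional weights" figure is easy to sanity-check for a single d x d weight matrix and LoRA rank r (the values below are illustrative):

    ```python
    d, r = 4096, 32
    frozen = d * d              # parameters in the frozen weight matrix
    trainable = r * (d + d)     # LoRA adapters: A is r x d, B is d x r
    print(trainable / frozen)   # 0.015625 -> ~1.6% extra weights
    ```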

  7. "The paper introduces a technique called LoReFT (Low-rank Linear Subspace ReFT). Similar to LoRA (Low Rank Adaptation), it uses low-rank approximations to intervene on hidden representations. It shows that linear subspaces contain rich semantics that can be manipulated to steer model behaviors."

  8. This paper proposes MoRA, a method for parameter-efficient fine-tuning of large language models (LLMs) that employs a square matrix to achieve high-rank updates while keeping the number of trainable parameters the same. The authors argue that low-rank updating, as implemented in LoRA, may limit the ability of LLMs to effectively learn and memorize new knowledge. MoRA outperforms LoRA on memory-intensive tasks and achieves comparable performance on others.
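
    A hedged sketch of MoRA's core move: one trainable square matrix plus parameter-free compress/decompress maps. The reshape-based grouping below is one illustrative choice of operator, not necessarily the paper's; note the parameter match, since with d = 4096 a 256 x 256 square matrix has 65,536 weights, exactly as many as rank-8 LoRA on the same layer.

    ```python
    import torch

    d, r = 4096, 256                           # assumes d is a multiple of r
    M = torch.zeros(r, r, requires_grad=True)  # the square, high-rank update

    def mora_update(x: torch.Tensor) -> torch.Tensor:
        """Apply the MoRA-style update to an input of shape (d,)."""
        groups = x.view(d // r, r)   # compress: fold d dims into r-wide groups
        out = groups @ M             # update of rank up to r=256, vs. LoRA's ~8
        return out.view(d)           # decompress back to d dims
    ```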

  9. This article announces a comprehensive course on fine-tuning large language models (LLMs) offered on the freeCodeCamp.org YouTube channel. The course, developed by Krish Naik, covers topics such as QLoRA, LoRA, quantization with Llama 2, Gradient, and the Google Gemma model, among others. The course aims to help learners deepen their understanding of machine learning and artificial intelligence.

    • 14 free colab notebooks providing hands-on experience in fine-tuning large language models (LLMs).
    • The notebooks cover topics from efficient training methodologies like LoRA and Hugging Face to specialized models such as Llama, Guanaco, and Falcon.
    • They also include advanced techniques like PEFT Finetune, Bloom-560m-tagger, and Meta_OPT-6-1b_Model.


