Tags: lora*

  1. MeshCom is an open-source project for exchanging text messages, positions, and data using low-power, low-cost LoRa radio modules. It aims to create off-grid mesh-network communication systems.
  2. This article provides a comprehensive guide to fine-tuning the Llama 3.1 language model with Unsloth for efficient, parameter-efficient training. It covers supervised fine-tuning, LoRA, QLoRA, and the practical steps for training on a high-quality dataset.
  3. A lightweight codebase that enables memory-efficient, performant finetuning of Mistral's models. It is based on LoRA, a training paradigm in which most weights are frozen and only 1-2% additional weights, in the form of low-rank matrix perturbations, are trained (a minimal LoRA sketch appears after this list).
  4. Explore the potential of the open-source Meshtastic protocol and mesh radio technology for long-range IoT applications. Learn how to build and test your own projects with the RAKwireless Meshtastic development board and HelTXT handheld communicators.
  5. "The paper introduces a technique called LoReFT (Low-rank Linear Subspace ReFT). Similar to LoRA (Low Rank Adaptation), it uses low-rank approximations to intervene on hidden representations. It shows that linear subspaces contain rich semantics that can be manipulated to steer model behaviors."
  6. This paper proposes MoRA, a method for parameter-efficient fine-tuning of large language models (LLMs). MoRA employs a square matrix to achieve high-rank updating while keeping the number of trainable parameters the same as LoRA's. The paper argues that low-rank updating, as implemented in LoRA, may limit the ability of LLMs to effectively learn and memorize new knowledge. MoRA outperforms LoRA on memory-intensive tasks and achieves comparable performance on the others (see the parameter-count comparison after this list).
  7. This article announces a comprehensive course on fine-tuning large language models (LLMs) on the freeCodeCamp.org YouTube channel. The course, developed by Krish Naik, covers QLoRA, LoRA, quantization with Llama 2, gradients, and the Google Gemma model, among other topics. It aims to help learners deepen their understanding of machine learning and artificial intelligence.
  8. - 14 free Colab notebooks providing hands-on experience in fine-tuning large language models (LLMs).
    - The notebooks range from efficient training methodologies like LoRA with Hugging Face tooling to specialized models such as Llama, Guanaco, and Falcon.
    - They also include advanced notebooks such as PEFT Finetune, Bloom-560m-tagger, and Meta_OPT-6-1b_Model.
  9. An efficient method for fine-tuning LLMs using LoRA and QLoRA, making it possible to train them even on consumer hardware (see the QLoRA configuration sketch after this list).
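
To make the LoRA paradigm described in item 3 concrete, here is a minimal, self-contained sketch (an illustration, not the mistral-finetune codebase itself): the pretrained weight is frozen, and only a low-rank perturbation B·A is trained, which at rank 8 on a 4096-wide layer comes to well under 1% of that layer's parameters.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank perturbation.

    Forward pass: y = base(x) + (x @ A^T @ B^T) * (alpha / r), where
    A (r x in) and B (out x r) are the only trainable parameters.
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at step 0
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(4096, 4096), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable share: {trainable / total:.2%}")  # ~0.39% for this layer
```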
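
Item 5's LoReFT operates on hidden representations rather than weights. Per the ReFT paper, the intervention has the form phi(h) = h + R^T(Wh + b - Rh), where R is a low-rank projection with orthonormal rows. The sketch below is an illustrative reimplementation under that reading, not the authors' released code.

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import orthogonal

class LoReFTIntervention(nn.Module):
    """Illustrative LoReFT-style edit of a hidden state h of width d.

    phi(h) = h + R^T (W h + b - R h): the representation is changed only
    within the r-dimensional subspace spanned by the rows of R.
    """
    def __init__(self, d: int, r: int):
        super().__init__()
        # R with (semi-)orthonormal rows, kept orthogonal during training.
        self.R = orthogonal(nn.Linear(d, r, bias=False))
        self.W = nn.Linear(d, r)  # learned projection W h + b

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + (self.W(h) - self.R(h)) @ self.R.weight

h = torch.randn(2, 16, 768)  # (batch, seq, hidden)
print(LoReFTIntervention(d=768, r=4)(h).shape)  # torch.Size([2, 16, 768])
```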
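
Item 6's key claim is about rank under a matched parameter budget. Assuming the paper's constraint that MoRA's square matrix uses the same trainable-parameter count as LoRA's pair of low-rank factors, the arithmetic works out as follows (the non-trainable compress/decompress operators that route activations through the square matrix are omitted here):

```python
import math

d = k = 4096      # weight matrix shape d x k, e.g. a 4096 x 4096 projection
r = 8             # LoRA rank

lora_params = r * (d + k)        # A is r x k, B is d x r -> 65,536 here
r_hat = math.isqrt(lora_params)  # side length of MoRA's square matrix
mora_params = r_hat ** 2

print(lora_params, r_hat, mora_params)  # 65536 256 65536
# Same budget, very different rank ceiling: LoRA's update B @ A has
# rank <= 8, while a 256 x 256 square matrix can reach rank 256.
```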
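
Items 2, 7, and 9 all describe variants of the same consumer-hardware recipe: quantize the frozen base model to 4 bits and train LoRA adapters on top (QLoRA). Below is a minimal sketch using the Hugging Face transformers, peft, and bitsandbytes stack; the model name and target_modules are placeholders that vary by architecture.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "mistralai/Mistral-7B-v0.1"  # placeholder; use any causal LM

# QLoRA: keep the base model frozen in 4-bit NF4, train LoRA adapters on top.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # module names differ per architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% trainable
```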
