Tags: github* + fine tuning* + llm* + lora*


  1. A lightweight codebase that enables memory-efficient, performant finetuning of Mistral's models. It is based on LoRA, a training paradigm in which most weights are frozen and only 1-2% additional weights, in the form of low-rank matrix perturbations, are trained.
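The LoRA idea described above can be sketched in a few lines. This is a hypothetical minimal illustration (not the bookmarked codebase's actual API), assuming a single linear layer: the pretrained weight `W` stays frozen, and only a low-rank pair `A`, `B` is trained, with the effective weight `W + (alpha / r) * B @ A`.

```python
import numpy as np

# Minimal LoRA sketch (illustrative only, not the mistral-finetune API):
# the frozen weight W is augmented with a trainable low-rank update B @ A.
rng = np.random.default_rng(0)

d_in, d_out, r = 64, 64, 4      # rank r is much smaller than d_in / d_out
alpha = 8.0                     # LoRA scaling factor (hypothetical value)

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-init

def lora_forward(x):
    # Base output plus the scaled low-rank perturbation.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.standard_normal((2, d_in))
y = lora_forward(x)

# With B zero-initialized, the perturbation vanishes and the output
# matches the frozen model exactly at the start of training.
assert np.allclose(y, x @ W.T)

# Only A and B are trainable: a small fraction of W's parameter count.
frac = (A.size + B.size) / W.size
print(f"trainable fraction: {frac:.1%}")
```

At these toy dimensions the trainable fraction is 12.5%; at real model sizes, where `d_in` and `d_out` are in the thousands while `r` stays small, it drops to the 1-2% range the description mentions.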


SemanticScuttle - klotz.me: tagged with "github+fine tuning+llm+lora"
