Tags: llama 2*


  1. This article explores how to boost the performance of small language models using supervision from larger ones through knowledge distillation. It provides a step-by-step guide to distilling knowledge from a teacher model (Llama 2 70B) into a student model (TinyLlama) using unlabeled in-domain data and targeted prompting.
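The core of the distillation setup the first bookmark describes can be sketched as a loss that pulls the student's output distribution toward the teacher's temperature-softened soft targets. A minimal sketch in NumPy; the temperature value and function names are illustrative assumptions, not from the article:

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T produces softer distributions.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence KL(teacher || student) over softened distributions,
    # scaled by T^2 as is conventional to keep gradient magnitudes comparable.
    p = softmax(teacher_logits, T)  # teacher soft targets
    q = softmax(student_logits, T)  # student predictions
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)
```

The loss is zero when the student exactly reproduces the teacher's logits and positive otherwise, which is what gradient descent on the student minimizes.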
  2. Tune a base Llama 2 LLM to output SQL code, using Parameter-Efficient Fine-Tuning (PEFT) techniques to optimize the process.
    2023-12-17 by klotz
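Parameter-Efficient Fine-Tuning, as in the second bookmark, trains only a small set of added parameters while the pretrained weights stay frozen. A minimal sketch of the LoRA variant of PEFT in NumPy, assuming a single linear layer; the dimensions, rank, and scaling constant are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 512, 512, 8              # hypothetical layer dims and LoRA rank
alpha = 16.0                       # LoRA scaling constant

W = rng.standard_normal((d, k))    # frozen pretrained weight (not trained)
A = rng.standard_normal((r, k)) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))               # zero-initialized so the update starts at 0

def lora_forward(x):
    # y = x @ (W + (alpha/r) * B @ A)^T, computed without materializing
    # the dense sum, so only A and B need gradients during fine-tuning.
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

full_params = W.size               # parameters a full fine-tune would update
lora_params = A.size + B.size      # parameters LoRA actually trains
```

With B initialized to zero, the adapted layer starts out identical to the frozen base layer, and the trainable parameter count is a small fraction of the full weight matrix.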


