This tutorial demonstrates how to fine-tune the Llama-2 7B Chat model for Python code generation using QLoRA, gradient checkpointing, and SFTTrainer with the Alpaca-14k dataset.
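For reference, the recipe named in this bookmark (4-bit QLoRA quantization, gradient checkpointing, and trl's SFTTrainer) typically looks like the minimal sketch below. The dataset path, model checkpoint, and hyperparameters here are assumptions, not values taken from the tutorial itself, and the argument layout follows older trl releases (newer versions move several of these options into SFTConfig).

```python
# Minimal QLoRA + SFTTrainer sketch (assumed names/values, not the tutorial's exact setup).
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from peft import LoraConfig
from trl import SFTTrainer

model_name = "NousResearch/Llama-2-7b-chat-hf"   # assumed Llama-2 7B Chat mirror
dataset_name = "path/to/alpaca-14k"              # placeholder for the Alpaca-14k dataset

# 4-bit NF4 quantization so the 7B model fits on a single consumer GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)
model.config.use_cache = False          # required when gradient checkpointing is on
model.gradient_checkpointing_enable()   # trade compute for activation memory

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# LoRA adapters on top of the frozen 4-bit base model.
peft_config = LoraConfig(
    r=64, lora_alpha=16, lora_dropout=0.1, bias="none", task_type="CAUSAL_LM"
)

training_args = TrainingArguments(
    output_dir="llama2-7b-python-qlora",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=2e-4,
    num_train_epochs=1,
    fp16=True,
    logging_steps=10,
)

dataset = load_dataset(dataset_name, split="train")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",   # assumes prompts are pre-formatted into a "text" column
    max_seq_length=512,
    tokenizer=tokenizer,
    args=training_args,
)
trainer.train()
trainer.model.save_pretrained("llama2-7b-python-qlora-adapter")
```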
A comparison of frameworks, models, and costs for deploying Llama models locally and privately.
"This is one of the best 13B models I've tested. (for programming, math, logic, etc) speechless-llama2-hermes-orca-platypus-wizardlm-13b"