This model was built by applying a new Smaug recipe, designed to improve performance on real-world multi-turn conversations, to meta-llama/Meta-Llama-3-70B-Instruct.
The model substantially outperforms Llama-3-70B-Instruct and is on par with GPT-4-Turbo on MT-Bench (see below).