This model was built by applying a new Smaug recipe for improving performance on real-world multi-turn conversations to meta-llama/Meta-Llama-3-70B-Instruct.
The model substantially outperforms Llama-3-70B-Instruct and is on par with GPT-4-Turbo on MT-Bench (see below).
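A minimal usage sketch with Hugging Face transformers, assuming a chat-tuned Llama-3 checkpoint. The repo id below is the base model named above, not necessarily where the Smaug fine-tune is published, so substitute the fine-tuned checkpoint's id if you have it:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model from above; swap in the Smaug fine-tune's repo id as needed.
model_id = "meta-llama/Meta-Llama-3-70B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Multi-turn chat input, formatted with the model's own chat template.
messages = [{"role": "user", "content": "Give me a two-sentence summary of MT-Bench."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```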
chat - chat directly; the character card is your prompt
instruct - chat between "you" and "assistant" using the model's prompt format
chat-instruct - chat with you and a character card as the prompt, but with the instruct template applied, i.e. "you are an AI playing x character, respond as the character would" converted to Alpaca, Wizard, or whatever format the model uses
There is no single best mode, but for factual information you probably want to stick to instruct mode. chat-instruct doesn't necessarily play the characters better or make them write longer; it's hit or miss, and one mode may work better than the other for a particular model and prompt.
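As a rough illustration of what chat-instruct does, here is a minimal sketch (not text-generation-webui's actual code) of how a character card might be wrapped in an Alpaca-style instruct template; the template wording and function name are assumptions:

```python
# Alpaca-style instruct template; the character card is folded into the instruction.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def build_chat_instruct_prompt(character_card: str, user_message: str) -> str:
    # The card tells the model to stay in character; the user's message is the task.
    instruction = (
        "You are an AI playing the following character. "
        "Respond as the character would.\n"
        f"{character_card}\n\nUser: {user_message}"
    )
    return ALPACA_TEMPLATE.format(instruction=instruction)

card = "Name: Ada\nPersona: A dry-witted Victorian mathematician."
print(build_chat_instruct_prompt(card, "What do you think of modern computers?"))
```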
Models referred to as "GPT-3.5"
The GPT-3.5 series is a set of models trained on a blend of text and code from before Q4 2021. The following models are in the GPT-3.5 series:
code-davinci-002 is a base model, so good for pure code-completion tasks
text-davinci-002 is an InstructGPT model based on code-davinci-002
text-davinci-003 is an improvement on text-davinci-002
gpt-3.5-turbo-0301 is an improvement on text-davinci-003, optimized for chat
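Because gpt-3.5-turbo-0301 is chat-optimized, it is called through the chat completions endpoint rather than plain text completion. A minimal sketch using the legacy OpenAI Python client (pre-1.0 interface; the API key and prompt are placeholders):

```python
import openai

openai.api_key = "sk-..."  # placeholder; use your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0301",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the GPT-3.5 model family in one sentence."},
    ],
)

# Chat models return a message object per choice rather than raw text.
print(response["choices"][0]["message"]["content"])
```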