We examine the novel concepts that Mistral AI added to traditional Transformer architectures. We compare inference time between Mistral 7B and Llama 2 7B, and compare memory usage, inference time, and response quality between Mixtral 8x7B and Llama 2 70B, using RAG systems and a public Amazon dataset of customer reviews.
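The inference-time comparison mentioned above could be run with a small timing harness like the following sketch. The helper name `time_inference` and all parameters are illustrative assumptions, not from the source; `generate` stands in for any text-generation callable (e.g. a loaded Mistral 7B or Llama 2 7B pipeline).

```python
import time
from statistics import mean

def time_inference(generate, prompts, warmup=1, runs=3):
    """Return mean per-prompt latency (seconds) of a generation callable.

    `generate` is a hypothetical stand-in for a model's text-generation
    call; swap in the actual model invocation when benchmarking.
    """
    # Warm-up calls exclude one-off setup cost (model load, CUDA init, caches)
    for _ in range(warmup):
        generate(prompts[0])
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        for p in prompts:
            generate(p)
        # Average over prompts so runs of different batch sizes are comparable
        latencies.append((time.perf_counter() - start) / len(prompts))
    return mean(latencies)
```

Running the same harness against both models with identical prompts keeps the comparison fair, since warm-up and averaging smooth out first-call overhead and timer jitter.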