klotz: mistral*


  1. 2024-02-22 by klotz
  2. Not Mixtral MoE but Merge-kit MoE

    The EveryoneLLM series of models is a new Mixtral-type model created using experts that were fine-tuned by the community, for the community. This is the first model released in the series, and it is a coding-specific model. EveryoneLLM, which will be a more generalized model, will be released in the near future, after more work is done to refine the process of merging Mistral models into larger Mixtral models with greater success.

    The goal of the EveryoneLLM series of models is to be a replacement for, or an alternative to, Mixtral-8x7b that is more suitable for general and specific use, as well as easier to fine-tune. Since Mistralai is being secretive about the "secret sauce" that makes Mixtral-Instruct such an effective fine-tune of the Mixtral base model, I've decided it's time for the community to compete directly with Mistralai on our own.
  3. Not Mixtral MoE but Merge-kit MoE

    - What makes a perfect MoE: The secret formula
    - Why is a proper merge considered a base model, and how do we distinguish one from a FrankenMoE?
    - Why the community working together to improve as a whole is the only way we will get Mixtral right
  4. Novel concepts that Mistral AI added to traditional Transformer architectures, with a comparison of inference time between Mistral 7B and Llama 2 7B, and a comparison of memory, inference time, and response quality between Mixtral 8x7B and Llama 2 70B. Also covers RAG systems and a public Amazon dataset of customer reviews.
    2024-01-23 by klotz
  5. Get models like Phi-2, Mistral, and LLaVA running locally on a Raspberry Pi with Ollama
    2024-01-14 by klotz
  6. Mixtral 8x7B:
    Use the llm-llama-cpp plugin.
    Download a GGUF file for Mixtral 8x7B Instruct v0.1.
    Run the model using llm -m gguf with the downloaded file (a command sketch follows this list).
    2024-10-29 by klotz
  7. Deploy and run LLMs (large language models), including LLaMA, LLaMA2, Phi-2, Mixtral-MoE, and mamba-gpt, on the Raspberry Pi 5 8GB.
    2024-01-10 by klotz
  8. Boost the performance of your supervised fine-tuned models
    2024-01-02 by klotz
  9. 2023-12-21 by klotz
  10. 2023-10-09 by klotz
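
A minimal command sketch for item 6, using Simon Willison's llm CLI with its llm-llama-cpp plugin. The quantization file name, the TheBloke Hugging Face URL, and the prompt are assumptions for illustration; verify them against the plugin's documentation before running.

    # Install the llama.cpp plugin for the llm CLI (assumes llm itself is installed)
    llm install llm-llama-cpp
    llm install llama-cpp-python

    # Download a GGUF quantization of Mixtral 8x7B Instruct v0.1 (assumed repo and file name)
    curl -LO 'https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF/resolve/main/mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf'

    # Run the model through the generic gguf model alias, pointing the path option at the downloaded file
    llm -m gguf -o path mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf '[INST] Summarize mixture-of-experts in two sentences. [/INST]'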
