klotz: quantization*


  1. This article explores the concept of quantization in large language models (LLMs) and its benefits, including reducing memory usage and improving inference speed. It also discusses various quantization methods and their effects on model quality. (An illustrative 4-bit loading sketch follows this entry.)
    2024-07-14 by klotz
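    A minimal sketch of the kind of memory saving the article describes, assuming the Hugging Face transformers + bitsandbytes 4-bit integration; the model id and settings here are placeholders, not details taken from the article:

      # Load a causal LM with 4-bit weights to cut weight memory roughly 4x vs. fp16.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

      model_id = "mistralai/Mistral-7B-v0.1"  # placeholder checkpoint

      bnb_config = BitsAndBytesConfig(
          load_in_4bit=True,                      # store weights as 4-bit NF4 blocks
          bnb_4bit_quant_type="nf4",
          bnb_4bit_compute_dtype=torch.bfloat16,  # matmuls still run in bf16
      )

      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(
          model_id,
          quantization_config=bnb_config,
          device_map="auto",
      )

      inputs = tokenizer("Quantization reduces memory by", return_tensors="pt").to(model.device)
      print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))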
  2. An explanation of the quantization type names used in the llama.cpp implementation, as well as the different quantization schemes available. (A rough size-estimation sketch follows this entry.)
    2024-06-23 by klotz
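    To make the naming concrete, a rough back-of-the-envelope sketch: each quant name implies a bits-per-weight figure, so file size scales with parameter count. Q4_0 and Q8_0 follow from their 32-weight block layout (4 or 8 bits per weight plus a 16-bit scale per block); the K-quant figure below is approximate and only illustrative:

      # Weights-only GGUF size estimate; real files add metadata and keep some tensors in higher precision.
      BITS_PER_WEIGHT = {
          "F16":    16.0,
          "Q8_0":   8.5,   # 8 bits + fp16 scale per 32-weight block
          "Q4_0":   4.5,   # 4 bits + fp16 scale per 32-weight block
          "Q4_K_M": 4.8,   # approximate; K-quants mix block types
      }

      def approx_size_gb(n_params: float, quant: str) -> float:
          return n_params * BITS_PER_WEIGHT[quant] / 8 / 1e9

      for quant in ("F16", "Q8_0", "Q4_K_M"):
          print(f"70B at {quant}: ~{approx_size_gb(70e9, quant):.0f} GB")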
  3. This document contains quantized LLM inference performance results for 70B+ models.
    2024-06-23 by klotz
  4. This article announces a comprehensive course on fine-tuning large language models (LLMs) offered on the freeCodeCamp.org YouTube channel. The course, developed by Krish Naik, covers topics such as QLoRA, LoRA, quantization with Llama 2, gradient, and the Google Gemma model, among others. The course aims to help learners deepen their understanding of machine learning and artificial intelligence.
  5. HQQ is a fast and accurate model quantizer that skips the need for calibration data. It's simple to implement (just a few lines of code for the optimizer) and can quantize the Llama2-70B model in only 4 minutes. (An illustrative baseline sketch follows this entry.)
    2024-02-24 by klotz
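    For context on what a weight-only quantizer computes, here is a naive round-to-nearest sketch with per-group scales and zero points. This is only the calibration-free baseline that HQQ improves on (HQQ additionally optimizes the zero points with a half-quadratic solver); it is not HQQ's actual code:

      import torch

      def quantize_rtn(w: torch.Tensor, nbits: int = 4, group_size: int = 64):
          """Round-to-nearest quantization with one scale/zero point per group of weights."""
          qmax = 2 ** nbits - 1
          wg = w.reshape(-1, group_size)
          wmin = wg.min(dim=1, keepdim=True).values
          wmax = wg.max(dim=1, keepdim=True).values
          scale = (wmax - wmin).clamp(min=1e-8) / qmax
          zero = -wmin / scale
          q = torch.clamp(torch.round(wg / scale + zero), 0, qmax)
          return q, scale, zero

      def dequantize(q, scale, zero, shape):
          return ((q - zero) * scale).reshape(shape)

      w = torch.randn(4096, 4096)
      q, s, z = quantize_rtn(w)
      err = (dequantize(q, s, z, w.shape) - w).abs().mean().item()
      print(f"mean abs reconstruction error: {err:.4f}")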
  6. Not Mixtral MoE but Merge-kit MoE

    The EveryoneLLM series is a new Mixtral-type family of models created using experts that were fine-tuned by the community, for the community. This is the first model released in the series, and it is a coding-specific model. A more generalized EveryoneLLM model will be released in the near future, after more work is done to refine the process of merging Mistral models into larger Mixtral models with greater success.

    The goal of the EveryoneLLM series of models is to be a replacement for, or an alternative to, Mixtral-8x7b that is more suitable for general and specific use, as well as easier to fine-tune. Since Mistralai is being secretive about the "secret sauce" that makes Mixtral-Instruct such an effective fine-tune of the Mixtral base model, I've decided it's time for the community to directly compete with Mistralai on our own.
  7. Not Mixtral MoE but Merge-kit MoE

    - What makes a perfect MoE: The secret formula
    - Why is a proper merge considered a base model, and how do we distinguish it from a FrankenMoE?
    - Why the community working together to improve as a whole is the only way we will get Mixtral right
  8. Improving the memory and computational efficiency of Large Language Models (LLMs) for handling long input sequences, including retrieval-augmented question answering, summarization, and chat tasks. It covers various techniques, such as lower-precision computing, the Flash Attention algorithm, positional embedding methods, and key-value caching strategies. These methods help reduce memory consumption and increase inference speed while maintaining high accuracy in LLM applications. Furthermore, it highlights advanced approaches such as Multi-Query Attention (MQA) and Grouped-Query Attention (GQA), which further improve computational and memory efficiency without compromising performance. (A small grouped-query-attention sketch follows this entry.)
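    A small illustration of the grouped-query-attention idea mentioned above, assuming PyTorch 2.x: each key/value head is shared by a group of query heads, so the key-value cache shrinks by the ratio of query heads to KV heads. This is a simplified sketch, not any particular model's implementation:

      import torch
      import torch.nn.functional as F

      def grouped_query_attention(q, k, v):
          """q: (batch, n_q_heads, seq, head_dim); k, v: (batch, n_kv_heads, seq, head_dim)."""
          n_q_heads, n_kv_heads = q.shape[1], k.shape[1]
          assert n_q_heads % n_kv_heads == 0
          group = n_q_heads // n_kv_heads
          # Expand each KV head to cover its group of query heads.
          k = k.repeat_interleave(group, dim=1)
          v = v.repeat_interleave(group, dim=1)
          return F.scaled_dot_product_attention(q, k, v)

      # 32 query heads sharing 8 KV heads -> KV cache is 4x smaller than full multi-head attention.
      q = torch.randn(1, 32, 128, 64)
      k = torch.randn(1, 8, 128, 64)
      v = torch.randn(1, 8, 128, 64)
      out = grouped_query_attention(q, k, v)  # shape (1, 32, 128, 64)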
  9. 2024-01-29 by klotz
  10. 2024-01-28 by klotz
