klotz: huggingface* + llm* + ollama*


  1. A comparison of frameworks, models, and costs for deploying Llama models locally and privately.

    - Four tools were analyzed: HuggingFace, vLLM, Ollama, and llama.cpp.
    - HuggingFace offers a wide range of models but struggles with quantized ones.
    - vLLM is experimental and lacks full support for quantized models.
    - Ollama is user-friendly but has some customization limitations.
    - llama.cpp is preferred for its performance and customization options.
    - The analysis focused on llama.cpp and Ollama, comparing speed and power consumption across different quantizations.
    2024-11-03 by klotz
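    The llama.cpp workflow the comparison favors can be sketched as a short shell session; the model filename and the Q4_K_M quantization level below are illustrative assumptions, not details from the article:

    ```shell
    # Sketch: run a quantized GGUF model with llama.cpp's CLI (llama-cli).
    # Model path and quantization level (Q4_K_M) are assumed for illustration.
    ./llama-cli \
      -m models/llama-3-8b-instruct.Q4_K_M.gguf \
      -p "Explain quantization in one sentence." \
      -n 128
    ```

    Smaller quantizations (e.g. Q4 vs. Q8) trade some output quality for lower memory use and power draw, which is the axis the article's speed/power comparison explores.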
  2. Ollama now supports HuggingFace GGUF models, making it easier to run AI models locally without an internet connection. The GGUF format allows AI models to run on modest consumer hardware.
    2024-10-24 by klotz
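    A minimal sketch of the feature this bookmark describes, using Ollama's `hf.co/{username}/{repository}` syntax for GGUF repositories on the Hugging Face Hub; the repository name below is an illustrative example, not one named in the bookmark:

    ```shell
    # Pull and run a GGUF model directly from the Hugging Face Hub with Ollama.
    # The repository (bartowski/Llama-3.2-1B-Instruct-GGUF) is an assumed example.
    ollama run hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF

    # An optional quantization tag selects a specific GGUF file from the repo:
    ollama run hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF:Q4_K_M
    ```

    After the initial download, the model is cached locally, so subsequent runs need no internet connection.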



About - Propulsed by SemanticScuttle