Tags: llm* + llama.cpp* + gguf*

5 bookmark(s), sorted by date (descending)

  1. A detailed guide to running the new gpt-oss models locally with the best performance using `llama.cpp`. It covers a wide range of hardware configurations, explains the relevant CLI arguments, and includes benchmarks for Apple Silicon devices.
  2. How to run Gemma 3 effectively with Unsloth's GGUFs on llama.cpp, Ollama, and Open WebUI, and how to fine-tune it with Unsloth. The page covers running Gemma 3 on various platforms, including phones, addresses potential float16 precision issues, and provides optimal configuration settings.
  3. A step-by-step guide on building llamafiles from Llama 3.2 GGUFs, including scripting and Dockerization.
  4. Create a custom base image for a Cloud Workstation environment using a Dockerfile; uses quantized models from
  5. A deep dive into model quantization with GGUF and llama.cpp and model evaluation with LlamaIndex
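The quantization deep dive in entry 5 revolves around the GGUF container format. As a minimal sketch of what a GGUF file looks like on disk: it opens with a fixed 24-byte preamble (the ASCII magic `GGUF`, a little-endian uint32 format version, and two uint64 counts for tensors and metadata key-value pairs). The snippet below writes a stand-in header and parses it back; the filename `demo.gguf` is illustrative, not a real model file.

```python
import struct

def read_gguf_header(path):
    """Read the fixed GGUF preamble: magic, version, tensor count, metadata KV count."""
    with open(path, "rb") as f:
        magic, version, n_tensors, n_kv = struct.unpack("<4sIQQ", f.read(24))
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {"version": version, "tensors": n_tensors, "metadata_kv": n_kv}

# Build a minimal stand-in header to exercise the reader (not a real model file):
# magic "GGUF", version 3, zero tensors, zero metadata entries.
with open("demo.gguf", "wb") as f:
    f.write(struct.pack("<4sIQQ", b"GGUF", 3, 0, 0))

print(read_gguf_header("demo.gguf"))
# → {'version': 3, 'tensors': 0, 'metadata_kv': 0}
```

A real model file would follow this preamble with the metadata key-value section (architecture, tokenizer, quantization type) and the tensor descriptors, which is where tools like `llama.cpp` read the quantization scheme from.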

SemanticScuttle - klotz.me: tagged with "llm+llama.cpp+gguf"