A detailed guide for running the new gpt-oss models locally with the best performance using `llama.cpp`. The guide covers a wide range of hardware configurations and provides CLI argument explanations and benchmarks for Apple Silicon devices.
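For orientation, here is a minimal sketch of loading a gpt-oss GGUF through the llama-cpp-python bindings; the model path and quantization are placeholders, and the guide itself focuses on the llama.cpp CLI, its arguments, and Apple Silicon benchmarks:

```python
# Minimal sketch using the llama-cpp-python bindings (pip install llama-cpp-python).
# The model path below is a placeholder; substitute your downloaded gpt-oss GGUF.
from llama_cpp import Llama

llm = Llama(
    model_path="gpt-oss-20b-Q4_K_M.gguf",  # hypothetical local GGUF file
    n_ctx=8192,        # context window; raise it if you have the memory
    n_gpu_layers=-1,   # offload all layers to the GPU / Apple Silicon Metal backend
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF file is."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```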
How to run Gemma 3 effectively with our GGUFs on llama.cpp, Ollama, and Open WebUI, and how to fine-tune it with Unsloth! The page covers running Gemma 3 on various platforms, including phones, addresses potential issues with float16 precision, and provides optimal configuration settings.
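As a rough illustration of the workflow such a guide describes, the sketch below downloads a Gemma 3 GGUF from the Hugging Face Hub and runs it with llama-cpp-python; the repository and file names are assumptions, and the sampling values are placeholders to be replaced with the settings the guide recommends:

```python
# Sketch: fetch a Gemma 3 GGUF and run a chat turn locally.
# Repo and file names are hypothetical examples; check the guide for the exact GGUF names.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="unsloth/gemma-3-4b-it-GGUF",   # assumed repo id
    filename="gemma-3-4b-it-Q4_K_M.gguf",   # assumed quant file
)

llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me one fun fact about llamas."}],
    temperature=1.0, top_p=0.95,  # placeholder sampling values; use the guide's recommended settings
)
print(resp["choices"][0]["message"]["content"])
```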
A step-by-step guide on building llamafiles from Llama 3.2 GGUFs, including scripting and Dockerization.
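A rough Python sketch of that build flow, assuming the llamafile launcher binary and its bundled zipalign tool are already on PATH; all file names are placeholders, and the guide covers the exact scripting and Docker steps:

```python
# Sketch of packaging a GGUF into a llamafile, following the general process from
# the llamafile project: copy the launcher binary, write an .args file with the
# default CLI arguments, then embed the weights with llamafile's zipalign tool.
# All paths and names are placeholders; adjust to your setup.
import shutil
import subprocess
from pathlib import Path

GGUF = "Llama-3.2-3B-Instruct-Q4_K_M.gguf"   # hypothetical GGUF file
OUT = "llama-3.2.llamafile"

# 1. Start from the prebuilt llamafile launcher (assumed to be on PATH).
shutil.copy(shutil.which("llamafile"), OUT)

# 2. Default CLI arguments baked into the llamafile, one per line.
Path(".args").write_text(f"-m\n{GGUF}\n")

# 3. Embed the weights and args into the executable.
subprocess.run(["zipalign", "-j0", OUT, GGUF, ".args"], check=True)
```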
Create a custom base image for a Cloud Workstation environment using a Dockerfile. Uses quantized models from
A deep dive into model quantization with GGUF and llama.cpp, and model evaluation with LlamaIndex.
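As a hedged sketch of the two halves of that workflow: quantizing an F16 GGUF with llama.cpp's llama-quantize tool (the binary name and location depend on how llama.cpp was built) and then loading the result into LlamaIndex through its llama.cpp integration; every file name here is a placeholder:

```python
# Sketch: quantize a GGUF with llama.cpp, then query it through LlamaIndex.
# Assumes llama-quantize is on PATH; all file names are placeholders.
import subprocess
from llama_index.llms.llama_cpp import LlamaCPP  # pip install llama-index-llms-llama-cpp

# 1. Quantize an F16 GGUF down to Q4_K_M.
subprocess.run(
    ["llama-quantize", "model-F16.gguf", "model-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)

# 2. Load the quantized model in LlamaIndex and run a completion.
llm = LlamaCPP(
    model_path="model-Q4_K_M.gguf",
    context_window=4096,
    max_new_tokens=256,
    model_kwargs={"n_gpu_layers": -1},  # offload all layers if a GPU backend is available
)
print(llm.complete("Explain what Q4_K_M quantization trades off.").text)
```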