Ollama now supports Hugging Face GGUF models, making it easier for users to run AI models locally without an internet connection. The GGUF format allows AI models to run on modest consumer hardware.
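As a rough illustration of that workflow (a minimal sketch, not taken from the bookmarked article): this assumes the `ollama` Python client is installed and a local Ollama server is running, and the Hugging Face repository name and quantization tag below are placeholder examples.

```python
import ollama

# Ollama can pull GGUF models straight from the Hugging Face Hub using the
# "hf.co/<user>/<repo>" naming scheme; an optional ":<quant>" suffix picks a
# specific quantization (repo and tag here are illustrative placeholders).
model = "hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF:Q4_K_M"

ollama.pull(model)  # downloads the GGUF file once; later runs work offline

response = ollama.chat(
    model=model,
    messages=[{"role": "user", "content": "Summarize what the GGUF format is."}],
)
print(response["message"]["content"])
```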
Quantized models from
A deep dive into model quantization with GGUF and llama.cpp and model evaluation with LlamaIndex