klotz: gguf + llama.cpp


  1. A user is seeking advice on deploying a new server with 4x H100 GPUs (320 GB VRAM total) for on-premise AI workloads. They are considering a Kubernetes-based deployment with RKE2, the NVIDIA GPU Operator, and tools like vLLM, llama.cpp, and LiteLLM, and are also exploring GPU pass-through with a hypervisor. The post details their current infrastructure and asks for potential gotchas and best practices. (A hedged client-side sketch of this serving stack follows this list.)
  2. A step-by-step guide to building llamafiles from Llama 3.2 GGUFs, including scripting and Dockerization. (See the build sketch after this list.)
  3. How to create a custom base image for a Cloud Workstation environment using a Dockerfile; uses quantized models from …
  4. A deep dive into model quantization with GGUF and llama.cpp, and model evaluation with LlamaIndex. (See the quantize-and-evaluate sketch after this list.)
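
As a rough illustration of the serving stack in item 1, here is a minimal client-side sketch using LiteLLM's Python API to call a vLLM server through its OpenAI-compatible endpoint. The in-cluster URL, model name, and dummy API key are assumptions for illustration, not details from the bookmarked post.

```python
# Minimal sketch, assuming a vLLM server is already running in the
# cluster and exposing its OpenAI-compatible API at the URL below.
from litellm import completion

response = completion(
    # The "openai/" prefix tells LiteLLM to speak the OpenAI wire
    # protocol to a custom api_base; the model name is whatever the
    # vLLM server was launched with (assumed here).
    model="openai/meta-llama/Llama-3.1-8B-Instruct",
    api_base="http://vllm.ai-workloads.svc:8000/v1",  # hypothetical in-cluster URL
    api_key="not-needed",  # vLLM does not require a key unless configured to
    messages=[{"role": "user", "content": "Hello from the H100 box"}],
)
print(response.choices[0].message.content)
```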
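For item 2, a minimal sketch of the llamafile packaging flow, scripted in Python. It mirrors the steps documented in Mozilla's llamafile README (write a `.args` file, copy the generic `llamafile` launcher, embed the GGUF with the project's `zipalign` tool); all file names are placeholders, and both tools are assumed to be on PATH.

```python
# Sketch: package a GGUF as a self-contained llamafile executable.
import shutil
import subprocess
from pathlib import Path

GGUF = "Llama-3.2-1B-Instruct-Q4_K_M.gguf"  # placeholder model file
OUT = "llama-3.2-1b.llamafile"              # placeholder output name

# Default arguments baked into the executable (one per line, per the
# upstream .args convention).
Path(".args").write_text(f"-m\n{GGUF}\n--host\n0.0.0.0\n")

# Start from a copy of the generic llamafile launcher binary.
shutil.copy(shutil.which("llamafile"), OUT)

# Embed the model weights and the .args defaults into the copy.
subprocess.run(["zipalign", "-j0", OUT, GGUF, ".args"], check=True)

Path(OUT).chmod(0o755)
print(f"Built {OUT}")
```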
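For item 4, a sketch of both halves of that pipeline: quantizing a model to GGUF with llama.cpp's command-line tools, then loading the result through LlamaIndex's llama.cpp integration for evaluation. Paths, the Q4_K_M preset, and the prompt are illustrative assumptions.

```python
# Sketch: quantize with llama.cpp, then evaluate via LlamaIndex.
# Assumes a built llama.cpp checkout and the llama-index-llms-llama-cpp
# package; every path below is a placeholder.
import subprocess

from llama_index.llms.llama_cpp import LlamaCPP

# 1) Convert a Hugging Face checkpoint to a full-precision GGUF.
subprocess.run(
    ["python", "llama.cpp/convert_hf_to_gguf.py", "models/my-model",
     "--outfile", "my-model-f16.gguf"],
    check=True,
)

# 2) Quantize it; Q4_K_M is a common quality/size trade-off.
subprocess.run(
    ["llama.cpp/build/bin/llama-quantize",
     "my-model-f16.gguf", "my-model-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)

# 3) Load the quantized model and run an evaluation prompt.
llm = LlamaCPP(model_path="my-model-Q4_K_M.gguf", temperature=0.0)
print(llm.complete("Briefly explain GGUF quantization.").text)
```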
