Tags: unsloth* + llama.cpp*


  1. This document details how to run Google's Gemma 4 models locally, covering the E2B, E4B, 26B-A4B, and 31B variants. Gemma 4 is a family of open models supporting over 140 languages and up to 256K context, available in both dense and MoE configurations; the E2B and E4B models also accept image and audio input. The guide outlines hardware requirements, recommended settings, and best practices for prompting and multimodal use, including guidance on context length and thinking mode, plus fine-tuning with Unsloth Studio.
  2. This guide explains tool calling with local LLMs, with example mathematical, story-generation, Python-code, and terminal functions, using llama.cpp, llama-server, and OpenAI-compatible endpoints.
  3. How to run Gemma 3 effectively with our GGUFs on llama.cpp, Ollama, and Open WebUI, and how to fine-tune it with Unsloth. This page covers running Gemma 3 on various platforms, including phones, addresses potential float16 precision issues, and provides optimal configuration settings.
  4. Learn how to run and fine-tune Mistral Devstral 1.1, including Small-2507 and 2505. This guide covers official recommended settings, tutorials for running Devstral in Ollama and llama.cpp, experimental vision support, and fine-tuning with Unsloth.
  5. This document details how to run and fine-tune Gemma 3 models (1B, 4B, 12B, and 27B) using Unsloth, covering setup with Ollama and llama.cpp, and addressing potential float16 precision issues. It also highlights Unsloth's unique ability to run Gemma 3 in float16 on machines like Colab notebooks with Tesla T4 GPUs.
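The tool-calling flow summarized in item 2 can be sketched end to end without a running server: the client advertises JSON-schema tool definitions, the model replies with a tool call naming a function and its JSON-encoded arguments, and the client dispatches to a local Python function and returns the result as a `role: "tool"` message. A minimal sketch, assuming a hypothetical `add` tool; the message shapes follow the OpenAI chat-completions tool format, which llama-server's `/v1/chat/completions` endpoint also accepts.

```python
import json

# Tool schema advertised to the model (OpenAI chat-completions format,
# also accepted by llama-server's /v1/chat/completions endpoint).
TOOLS = [{
    "type": "function",
    "function": {
        "name": "add",  # hypothetical example tool
        "description": "Add two numbers.",
        "parameters": {
            "type": "object",
            "properties": {
                "a": {"type": "number"},
                "b": {"type": "number"},
            },
            "required": ["a", "b"],
        },
    },
}]

# Local implementations, keyed by the names declared in TOOLS.
IMPLEMENTATIONS = {"add": lambda a, b: a + b}

def dispatch(tool_call: dict) -> dict:
    """Run the function named in a model tool_call and build the
    role='tool' message that is sent back to the model."""
    fn = tool_call["function"]
    args = json.loads(fn["arguments"])  # arguments arrive as a JSON string
    result = IMPLEMENTATIONS[fn["name"]](**args)
    return {
        "role": "tool",
        "tool_call_id": tool_call["id"],
        "content": json.dumps(result),
    }

# Simulated model response, shaped like a tool-capable model's reply.
example_call = {
    "id": "call_0",
    "type": "function",
    "function": {"name": "add", "arguments": '{"a": 2, "b": 3}'},
}
print(dispatch(example_call)["content"])  # prints 5
```

In a real loop, the `tool` message produced by `dispatch` is appended to the conversation and sent back so the model can compose its final answer.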


SemanticScuttle - klotz.me: tagged with "unsloth+llama.cpp"
