Tags: ollama* + llm*


  1. Docker is making it easier for developers to run and test AI Large Language Models (LLMs) on their PCs with the launch of Docker Model Runner, a new beta feature in Docker Desktop 4.40 for Apple silicon-powered Macs. It also integrates the Model Context Protocol (MCP) for streamlined connections between AI agents and data sources.
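
    Docker Model Runner exposes the models it serves through an OpenAI-compatible API, so existing client code can talk to it. Below is a minimal sketch; the host-side TCP port (12434) and the model tag ("ai/smollm2") are assumptions here, so check `docker model list` and your Docker Desktop settings for the actual values.

    ```python
    # Minimal sketch: calling a model served by Docker Model Runner through its
    # OpenAI-compatible endpoint. Port 12434 and the "ai/smollm2" tag are
    # assumptions -- verify against your own Docker Desktop setup.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:12434/engines/v1",  # assumed host-side TCP endpoint
        api_key="not-needed",  # the local runner does not require a real key
    )

    response = client.chat.completions.create(
        model="ai/smollm2",  # hypothetical tag pulled via `docker model pull`
        messages=[{"role": "user", "content": "Summarize what Docker Model Runner does."}],
    )
    print(response.choices[0].message.content)
    ```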

  2. A Reddit thread discussing preferred local Large Language Model (LLM) setups for tasks like summarizing text, coding, and general use. Users share their model choices (Gemma, Qwen, Phi, etc.) and frameworks (llama.cpp, Ollama, EXUI) along with potential issues and configurations.

    | Model | Use Cases | Size (Parameters) | Approx. VRAM (Q4 Quantization) | Approx. RAM (Q4) | Notes/Requirements |
    |---|---|---|---|---|---|
    | Gemma 3 (Google) | Summarization, conversational tasks, image recognition, translation, simple writing | 3B, 4B, 7B, 8B, 12B, 27B+ | 2-4GB (3B), 4-6GB (7B), 8-12GB (12B) | 4-8GB (3B), 8-12GB (7B), 16-24GB (12B) | Excellent performance for its size. Recent versions have had memory-leak issues (see the Reddit post; use Ollama 0.6.6 or later, though even that may not be a complete fix). QAT versions are highly recommended. |
    | Qwen 2.5 (Alibaba) | Summarization, coding, reasoning, decision-making, technical material processing | 3.5B, 7B, 72B | 2-3GB (3.5B), 4-6GB (7B), 26-30GB (72B) | 4-6GB (3.5B), 8-12GB (7B), 50-60GB (72B) | Qwen models are known for strong performance. Coder versions are specifically tuned for code generation. |
    | Qwen3 (Alibaba, upcoming) | General purpose, likely similar to Qwen 2.5 with improvements | 70B | Estimated 25-30GB (Q4) | 50-60GB | Expected to be a strong competitor. |
    | Llama 3 (Meta) | General purpose, conversation, writing, coding, reasoning | 8B, 13B, 70B+ | 4-6GB (8B), 7-9GB (13B), 25-30GB (70B) | 8-12GB (8B), 14-18GB (13B), 50-60GB (70B) | Current state-of-the-art open-source model. Excellent balance of performance and size. |
    | YiXin (01.AI) | Reasoning, brainstorming | 72B | ~26-30GB (Q4) | ~50-60GB | A powerful model focused on reasoning and understanding. Similar VRAM requirements to Qwen 72B. |
    | Phi-4 (Microsoft) | General purpose, writing, coding | 14B | ~7-9GB (Q4) | 14-18GB | Smaller model, good for resource-constrained environments, but may not match larger models in complexity. |
    | Ling-Lite | RAG (Retrieval-Augmented Generation), fast processing, text extraction | Variable | Varies with size | Varies with size | MoE (Mixture of Experts) model known for speed. Good for RAG applications where quick responses matter. |
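
    As a sanity check on the Q4 columns above, a useful rule of thumb is roughly half a byte per parameter plus some overhead for the KV cache and activations. The sketch below uses an assumed 20% overhead factor purely for illustration; real usage depends on context length and runtime.

    ```python
    # Rough ballpark for Q4 memory use: ~0.5 bytes per weight plus overhead for
    # the KV cache and activations. The 20% overhead factor is an assumption.
    def estimate_q4_memory_gb(params_billions: float, overhead: float = 1.2) -> float:
        bytes_per_param = 0.5  # 4 bits per weight
        return params_billions * bytes_per_param * overhead

    for size_b in (3, 7, 12, 27):
        print(f"{size_b}B parameters: ~{estimate_q4_memory_gb(size_b):.1f} GB at Q4")
    ```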

    Key Considerations:

    • Quantization: The VRAM and RAM estimates above assume 4-bit quantization (Q4). More aggressive quantization (e.g., Q2) reduces memory usage further but can noticeably hurt quality; higher-precision formats (e.g., Q8, FP16) improve quality but require significantly more memory.
    • Frameworks: Popular frameworks for running these models locally include (a minimal Python call to a local Ollama server is sketched after this list):
      • llama.cpp: Highly optimized for CPU and GPU, especially on Apple Silicon.
      • Ollama: Simplified setup and management of LLMs. (Be aware of the Gemma 3 memory leak issue!)
      • Text Generation WebUI (oobabooga): Web-based interface with many features and customization options.
    • Hardware: A dedicated GPU with sufficient VRAM is highly recommended for decent performance. CPU-only inference is possible but can be slow. More RAM is generally better, even if the model fits in VRAM.
    • Context Length: The "40k" context mentioned in the Reddit post refers to the maximum number of tokens (words or sub-words) the model can process at once. Longer context lengths require more memory.
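
    Putting the framework notes above into practice, the simplest local setup looks like this: a Q4-quantized model served by Ollama, queried from Python for a summary. The model tag "gemma3:12b" is an assumption; use whatever `ollama list` reports on your machine.

    ```python
    # Minimal sketch: a Q4-quantized model served by Ollama, called from Python
    # for summarization. The model tag is an assumption.
    import ollama  # pip install ollama; requires a local Ollama server

    article = "Long article text to summarize..."

    reply = ollama.chat(
        model="gemma3:12b",  # assumed tag for a ~12B Q4 model
        messages=[{"role": "user",
                   "content": f"Summarize this in three bullet points:\n\n{article}"}],
    )
    print(reply["message"]["content"])
    ```
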
  3. This article details a method for converting PDFs to Markdown using a local LLM (Gemma 3 via Ollama), focusing on privacy and efficiency. It involves rendering PDF pages as images and then using the LLM for content extraction, even from scanned PDFs.

    2025-04-16 by klotz
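
    A minimal sketch of the approach the article describes: render each PDF page to an image, then ask a local vision-capable model to transcribe it as Markdown. PyMuPDF and the "gemma3" model tag are assumptions here; the article's exact tooling may differ.

    ```python
    # Render PDF pages to images, then have a local vision model (Gemma 3 via
    # Ollama) extract each page as Markdown. Library and model tag are assumptions.
    import fitz  # PyMuPDF: pip install pymupdf
    import ollama

    doc = fitz.open("report.pdf")
    markdown_pages = []

    for page in doc:
        png_bytes = page.get_pixmap(dpi=150).tobytes("png")  # render page as an image
        reply = ollama.chat(
            model="gemma3",  # assumed vision-capable tag; check `ollama list`
            messages=[{
                "role": "user",
                "content": "Extract this page's content as clean Markdown.",
                "images": [png_bytes],
            }],
        )
        markdown_pages.append(reply["message"]["content"])

    print("\n\n".join(markdown_pages))
    ```
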
  4. This document details how to run and fine-tune Gemma 3 models (1B, 4B, 12B, and 27B) using Unsloth, covering setup with Ollama and llama.cpp, and addressing potential float16 precision issues. It also highlights Unsloth's unique ability to run Gemma 3 in float16 on machines like Colab notebooks with Tesla T4 GPUs.
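
    For orientation, a minimal sketch of loading Gemma 3 with Unsloth for fine-tuning; the checkpoint name and arguments are assumptions based on Unsloth's usual FastLanguageModel.from_pretrained interface, and the linked document covers the exact setup and the float16 workarounds it mentions.

    ```python
    # Minimal Unsloth loading sketch. Checkpoint name and settings are assumptions;
    # see the linked guide for the exact, up-to-date invocation.
    from unsloth import FastLanguageModel

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/gemma-3-4b-it",  # hypothetical 4B instruct checkpoint
        max_seq_length=4096,
        load_in_4bit=True,   # 4-bit loading keeps VRAM needs modest
    )
    ```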

  5. This article guides readers through building an OCR application in Python using the Llama 3.2-Vision model served by Ollama. It covers setting up the environment, installing the necessary tools, and writing the OCR script.

    2024-11-21 by klotz
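
    A minimal sketch of the OCR flow described in item 5: send an image to Llama 3.2-Vision through Ollama and ask for the text. The model tag and prompt wording are assumptions, not the article's exact script.

    ```python
    # Minimal OCR sketch: a vision model served by Ollama transcribes an image.
    # Model tag and prompt are assumptions for illustration.
    import ollama

    result = ollama.chat(
        model="llama3.2-vision",
        messages=[{
            "role": "user",
            "content": "Transcribe all text visible in this image, preserving layout.",
            "images": ["receipt.png"],  # path to the image to OCR
        }],
    )
    print(result["message"]["content"])
    ```
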
  6. A tutorial to set up a local, open-source virtual assistant and a code assist feature similar to Copilot using Ollama, Llama3, Continue, and Open WebUI.

  7. A comparison of frameworks, models, and costs for deploying Llama models locally and privately.

    • Four tools were analyzed: HuggingFace, vLLM, Ollama, and llama.cpp.
    • HuggingFace has a wide range of models but struggles with quantized models.
    • vLLM is experimental and lacks full support for quantized models.
    • Ollama is user-friendly but has some customization limitations.
    • llama.cpp is preferred for its performance and customization options.
    • The analysis focused on llama.cpp and Ollama, comparing speed and power consumption across different quantizations (a llama-cpp-python loading sketch follows below).
    2024-11-03 by klotz
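
    As referenced in the last bullet above, here is a minimal loading sketch for the llama.cpp side of the comparison, via the llama-cpp-python bindings. The GGUF path and quantization level are hypothetical; the speed and power figures come from the article's own runs.

    ```python
    # Minimal llama.cpp sketch via llama-cpp-python. The GGUF path is hypothetical.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical local file
        n_gpu_layers=-1,   # offload every layer to the GPU if one is available
        n_ctx=8192,        # context window; larger values need more memory
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Give one sentence on GGUF quantization."}]
    )
    print(out["choices"][0]["message"]["content"])
    ```
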
  8. Ollama now supports HuggingFace GGUF models, making it easier to run AI models locally and offline once downloaded. The GGUF format makes it practical to run models on modest consumer hardware.

    2024-10-24 by klotz
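
    A minimal sketch of what item 8 enables: referencing a GGUF repository on Hugging Face directly with an hf.co/ prefix. The repository name below is just an example; any public GGUF repo can be substituted.

    ```python
    # Pull a GGUF model straight from Hugging Face through Ollama, then query it.
    # The repository name is an example, not a recommendation.
    import ollama

    model_ref = "hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF"  # example public GGUF repo
    ollama.pull(model_ref)                                     # downloads the GGUF locally

    reply = ollama.generate(model=model_ref, prompt="Say hello from a local GGUF model.")
    print(reply["response"])
    ```
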
  9. NuExtract is a 3.8B parameter information extraction model fine-tuned from phi-3, designed to extract structured data from text using a JSON template.
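
    A minimal sketch of template-driven extraction in the style NuExtract is built for: the model receives a JSON template plus the source text and fills in the fields. The checkpoint name and prompt layout here are assumptions for illustration; the model card defines the exact prompt format.

    ```python
    # Template-driven extraction sketch. Checkpoint name and prompt layout are
    # assumptions; consult the NuExtract model card for the exact format.
    from transformers import pipeline

    extractor = pipeline("text-generation", model="numind/NuExtract",
                         device_map="auto")  # device_map requires accelerate

    template = '{"name": "", "role": "", "company": ""}'
    text = "Jane Doe joined Acme Corp last month as head of research."

    prompt = f"### Template:\n{template}\n### Text:\n{text}\n"
    print(extractor(prompt, max_new_tokens=128)[0]["generated_text"])
    ```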

  10. A step-by-step guide to run Llama3 locally with Python. Discusses the benefits of running local LLMs, including data privacy, cost-effectiveness, customization, offline functionality, and unrestricted use.

    2024-07-12 by klotz
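
    A minimal sketch of the pattern item 10 walks through: calling a locally running Llama 3 from Python, here via a plain HTTP request to Ollama's default REST endpoint. The guide's own script may use a different client or server; the model tag is an assumption.

    ```python
    # Call a local Llama 3 through Ollama's REST API on its default port.
    # The model tag is an assumption; pull it first with `ollama pull llama3`.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": "Why run an LLM locally?", "stream": False},
        timeout=120,
    )
    print(resp.json()["response"])
    ```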
