Tags: coding* + llm*

  1. A Reddit thread discussing preferred local Large Language Model (LLM) setups for tasks like summarizing text, coding, and general use. Users share their model choices (Gemma, Qwen, Phi, etc.) and frameworks (llama.cpp, Ollama, EXUI) along with potential issues and configurations.

    | Model | Use Cases | Size (Parameters) | Approx. VRAM (Q4 Quantization) | Approx. RAM (Q4) | Notes/Requirements |
    |---|---|---|---|---|---|
    | Gemma 3 (Google) | Summarization, conversational tasks, image recognition, translation, simple writing | 3B, 4B, 7B, 8B, 12B, 27B+ | 2-4GB (3B), 4-6GB (7B), 8-12GB (12B) | 4-8GB (3B), 8-12GB (7B), 16-24GB (12B) | Excellent performance for its size. Recent versions have had memory leak issues (see the Reddit post; use Ollama 0.6.6 or later, though even that may not be fully fixed). QAT versions are highly recommended. |
    | Qwen 2.5 (Alibaba) | Summarization, coding, reasoning, decision-making, technical material processing | 3.5B, 7B, 72B | 2-3GB (3.5B), 4-6GB (7B), 26-30GB (72B) | 4-6GB (3.5B), 8-12GB (7B), 50-60GB (72B) | Known for strong performance; Coder variants are specifically tuned for code generation. |
    | Qwen3 (Alibaba, upcoming) | General purpose, likely similar to Qwen 2.5 with improvements | 70B | ~25-30GB (estimated) | 50-60GB | Expected to be a strong competitor. |
    | Llama 3 (Meta) | General purpose, conversation, writing, coding, reasoning | 8B, 13B, 70B+ | 4-6GB (8B), 7-9GB (13B), 25-30GB (70B) | 8-12GB (8B), 14-18GB (13B), 50-60GB (70B) | State-of-the-art open-source model with an excellent balance of performance and size. |
    | YiXin (01.AI) | Reasoning, brainstorming | 72B | ~26-30GB | ~50-60GB | A powerful model focused on reasoning and understanding; VRAM requirements similar to Qwen 72B. |
    | Phi-4 (Microsoft) | General purpose, writing, coding | 14B | ~7-9GB | 14-18GB | Smaller model, good for resource-constrained environments, but may not match larger models in complexity. |
    | Ling-Lite | RAG (Retrieval-Augmented Generation), fast processing, text extraction | Variable | Varies with size | Varies with size | MoE (Mixture of Experts) model known for speed; good for RAG applications where quick responses matter. |

    Key Considerations:

    • Quantization: The VRAM and RAM estimates above assume 4-bit quantization (Q4). More aggressive quantization (e.g., Q2) reduces memory usage further but may degrade quality; higher-precision formats (e.g., Q8, FP16) improve quality but require significantly more memory. A rough way to derive such estimates is sketched after this list.
    • Frameworks: Popular frameworks for running these models locally include:
      • llama.cpp: Highly optimized for CPU and GPU, especially on Apple Silicon.
      • Ollama: Simplified setup and management of LLMs (be aware of the Gemma 3 memory leak issue); a short usage sketch appears after this list.
      • Text Generation WebUI (oobabooga): Web-based interface with many features and customization options.
    • Hardware: A dedicated GPU with sufficient VRAM is highly recommended for decent performance. CPU-only inference is possible but can be slow. More RAM is generally better, even if the model fits in VRAM.
    • Context Length: The "40k" context mentioned in the Reddit post refers to the maximum number of tokens (words or sub-words) the model can process at once. Longer context lengths require more memory.
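
    To make figures like those in the table concrete, here is a minimal back-of-the-envelope sketch in plain Python. The 4.5 bits-per-weight figure (Q4 weights plus per-block scaling overhead), the layer and hidden-dimension values, and the FP16 KV cache are illustrative assumptions, not numbers from the Reddit thread; real models that use grouped-query attention need considerably less KV-cache memory than this naive formula suggests.

    ```python
    def estimate_memory_gb(
        n_params_billion: float,
        bits_per_weight: float = 4.5,  # assumed: Q4 weights plus per-block scale overhead
        context_tokens: int = 8_192,   # target context window
        n_layers: int = 32,            # illustrative; model-specific
        hidden_dim: int = 4_096,       # illustrative; model-specific
        kv_bytes: int = 2,             # FP16 key/value cache entries
    ) -> dict:
        """Back-of-the-envelope memory estimate for a quantized decoder-only LLM.

        Weights  : parameters * bits-per-weight / 8
        KV cache : 2 (keys and values) * layers * context * hidden_dim * bytes
        Activations and framework overhead are ignored, and grouped-query
        attention (used by most recent models) shrinks the KV term substantially.
        """
        weights_gb = n_params_billion * 1e9 * bits_per_weight / 8 / 1e9
        kv_cache_gb = 2 * n_layers * context_tokens * hidden_dim * kv_bytes / 1e9
        return {
            "weights_gb": round(weights_gb, 1),
            "kv_cache_gb": round(kv_cache_gb, 1),
            "total_gb": round(weights_gb + kv_cache_gb, 1),
        }

    # A 12B model at Q4 with an 8k context: roughly 11GB, in the same ballpark
    # as the 8-12GB VRAM row in the table above.
    print(estimate_memory_gb(12))
    # The same model at the 40k context mentioned in the thread: the KV cache
    # dominates, illustrating why longer contexts need much more memory.
    print(estimate_memory_gb(12, context_tokens=40_000))
    ```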
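
    For the Ollama route mentioned above, a minimal usage sketch with the official `ollama` Python client. It assumes Ollama is installed and running locally and that a suitable model has already been pulled; the model tag and prompts are placeholders, not recommendations from the thread.

    ```python
    import ollama  # official Python client for a locally running Ollama server

    # A system message is one way to control a local model's behaviour
    # (the kind of control highlighted for Dolphin 3.0 R1 below).
    messages = [
        {"role": "system", "content": "You are a concise technical assistant."},
        {"role": "user", "content": "Summarize the trade-offs between Q4 and Q8 quantization."},
    ]

    # Placeholder tag: substitute whatever `ollama list` shows on your machine.
    response = ollama.chat(model="gemma3:12b", messages=messages)
    print(response["message"]["content"])
    ```
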
  2. Details the development and release of DeepCoder-14B-Preview, a 14B parameter code reasoning model achieving performance comparable to o3-mini through reinforcement learning, along with the dataset, code, and system optimizations used in its creation.

  3. "OpenHands LM is built on the foundation of Qwen Coder 2.5 Instruct 32B, leveraging its powerful base capabilities for coding tasks."

    2025-04-02 by klotz
  4. SuperCoder is a coding agent that runs in your terminal, offering features like code search, project structure exploration, code editing, bug fixing, and integration with OpenAI or local models.

    2025-03-31 by klotz
  5. Simon Willison discusses his experience using Large Language Models (LLMs) for coding, providing detailed advice on how to effectively use LLMs to augment coding abilities, set reasonable expectations, manage context, and more.

  6. An experiment in agentic AI development, where AI tools were tasked with building and maintaining a full-service product, ObjectiveScope, without direct human code modifications. The process highlighted the challenges and constraints of AI-driven development, such as deteriorating context management, technical limitations, and the need for precise prompt engineering.

    2025-02-21 by klotz
  7. Dolphin 3.0 R1 is an instruct-tuned model for general-purpose reasoning, coding, math, and function calling. It is intended as a local model that businesses can control themselves, including setting the system prompt and alignment.

  8. 01.AI's Yi-Coder, an open-source AI coding assistant

    Key Features:

    • Available in 9B and 1.5B parameter versions
    • Supports 52 programming languages
    • 128,000-token context length
    • Code editing, completion, debugging, and mathematical reasoning
    2024-09-07 by klotz
  9. Cody is an AI coding assistant that uses advanced search and codebase context to help you understand, write, and fix code faster. It supports autocomplete, code generation, and explanation in various IDEs and code hosts.

  10. Unblocked is an AI tool that augments code with knowledge from systems like GitHub, Slack, Confluence, and Jira to provide quick, accurate answers about your application.
