Tags: qwen*


  1. Small, inexpensive single-board computers like the Raspberry Pi 5 are becoming viable platforms for running local large language models (LLMs). Quantization reduces model size and memory requirements enough that users can run quantized versions of popular models such as Llama 3, Mistral, and Qwen. While processing speeds remain far below those of high-end GPUs, these devices offer a private, low-cost way to apply AI to specific tasks.

    - Quantization allows large models to fit into the Pi's limited RAM by reducing numerical precision (see the sketch after this list).
    - Tiny models (1B-3B parameters) run comfortably; 7B-parameter models are usable on the 8GB Pi 5 if you temper your expectations.
    - Performance is measured in low single-digit tokens per second, making it suitable for non-real-time tasks.
    - Hardware upgrades like the Raspberry Pi AI HAT+ or an external GPU can significantly boost neural processing capability.
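    A minimal sketch of what running a quantized model on the Pi looks like in practice, using the llama-cpp-python bindings; the GGUF filename, context size, and thread count below are illustrative placeholders, not recommendations from the article:

    ```python
    # Minimal sketch: load a small quantized GGUF on a Raspberry Pi 5 with
    # llama-cpp-python (pip install llama-cpp-python). The filename is a
    # placeholder; a 1B-3B model at ~4 bits fits comfortably in 8GB RAM.
    from llama_cpp import Llama

    llm = Llama(
        model_path="qwen2.5-1.5b-instruct-q4_k_m.gguf",  # placeholder path
        n_ctx=2048,    # modest context keeps KV-cache memory low
        n_threads=4,   # the Pi 5 has four Cortex-A76 cores
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user",
                   "content": "In one sentence, what is quantization?"}],
        max_tokens=64,
    )
    print(out["choices"][0]["message"]["content"])
    ```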
  2. An exploration of the new Qwen3.6-27B open-weight model, which claims flagship-level agentic coding performance, surpassing previous, much larger MoE models despite being significantly smaller. The author tests a quantized version with llama-server and demonstrates its impressive ability to generate complex SVG graphics locally.
    Key points:
    - Qwen3.6-27B outperforms the older Qwen3.5-397B-A17B on coding benchmarks.
    - Dramatic reduction in model size from 807GB to approximately 55.6GB for the base version.
    - Successful local execution using a 16.8GB quantized GGUF version via llama.cpp.
    - High-quality SVG generation for complex prompts such as "a pelican riding a bicycle" (see the client sketch below).
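    Since llama-server exposes an OpenAI-compatible HTTP API, the local model can be queried with a few lines of Python. A minimal sketch, assuming a server is already running; the port and max_tokens value are assumptions, not details from the post:

    ```python
    # Minimal sketch: query a local llama-server (llama.cpp) over its
    # OpenAI-compatible /v1/chat/completions endpoint.
    import requests

    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",  # assumed port
        json={
            "messages": [{"role": "user",
                          "content": "Generate an SVG of a pelican riding a bicycle"}],
            "max_tokens": 2048,
        },
        timeout=600,  # local generation at a few tokens/s can be slow
    )
    print(resp.json()["choices"][0]["message"]["content"])
    ```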
  3. Qwen3.5-27B is a powerful, multimodal language model designed for versatility and efficiency. It excels in tasks requiring reasoning, coding, and visual understanding thanks to its unified vision-language foundation and an efficient architecture that combines Gated Delta Networks with sparse Mixture-of-Experts. The model supports 201 languages and offers a native 262,144-token context window, expandable to 1,010,000 tokens.

    **Key Specs:**

    * **Model Type:** Causal Language Model with Vision Encoder, 27 Billion Parameters
    * **Architecture:** 64 Layers, 5120 Hidden Dimension
    * **Training:** Scalable Reinforcement Learning for real-world adaptability.

    **Performance Highlights:** Qwen3.5-27B demonstrates strong performance across a broad spectrum of benchmarks, including: **Knowledge & Reasoning** (MMLU, C-Eval, HLE, GPQA), **Instruction Following & General Agent Capabilities** (IFEval, IFBench, BFCL-V4, TAU2-Bench), **Coding** (SWE-bench, CodeForces), **Long Context Handling** (AA-LCR, LongBench v2), **Vision-Language Understanding** (MMMU, RealWorldQA), and **Multilingual Abilities** (MMMLU, WMT24++).

    **Usage & Deployment:**

    The model can be served and utilized through several frameworks: **SGLang & vLLM** (for fast, high-throughput inference with features like Multi-Token Prediction), **KTransformers & Hugging Face Transformers** (offering flexibility and lightweight testing options), and a **Chat Completions API** (with OpenAI SDK examples for various input types).

    **Key Considerations:**

    * Operates in "thinking mode" by default, emitting intermediate reasoning before the final answer; this can be disabled (see the client sketch below).
    * Well-suited for agent applications, particularly with the Qwen-Agent framework.
    * Documentation provides details on API configuration and recommended sampling parameters.
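    A minimal client sketch for the Chat Completions route, assuming the model is served locally by vLLM or SGLang on an OpenAI-compatible endpoint; the URL, model name, and the `chat_template_kwargs` thinking-mode switch are assumptions drawn from common Qwen serving setups, so verify them against the documentation:

    ```python
    # Minimal sketch: chat with a locally served Qwen3.5-27B via the OpenAI SDK.
    # Endpoint URL and model name are placeholders for your own deployment.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

    resp = client.chat.completions.create(
        model="Qwen3.5-27B",  # placeholder; use the name your server reports
        messages=[{"role": "user",
                   "content": "Summarize RMSNorm in two sentences."}],
        # Thinking mode is on by default; many Qwen-serving stacks expose a
        # chat-template switch to disable it (assumed, check your server):
        extra_body={"chat_template_kwargs": {"enable_thinking": False}},
    )
    print(resp.choices[0].message.content)
    ```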
    2026-03-01 by klotz
  4. This discussion details performance benchmarks of llama.cpp on an NVIDIA DGX Spark, including tests for various models (gpt-oss-20b, gpt-oss-120b, Qwen3, Qwen2.5, Gemma, GLM) with different context depths and batch sizes.
    2025-10-15 by klotz
  5. A detailed comparison of the architectures of recent large language models (LLMs), including DeepSeek-V3, OLMo 2, Gemma 3, Mistral Small 3.1, Llama 4, Qwen3, SmolLM3, and Kimi K2, focusing on key design choices and their impact on performance and efficiency.

    1. **DeepSeek V3/R1**:
    - Uses Multi-Head Latent Attention (MLA) and Mixture-of-Experts (MoE) for efficiency.
    - MLA compresses key and value tensors to reduce KV cache memory usage.
    - MoE activates only a subset of experts per token, improving inference efficiency (a minimal routing sketch follows this list).

    2. **OLMo 2**:
    - Focuses on transparency in training data and code.
    - Uses RMSNorm layers placed after attention and feed-forward modules (Post-Norm).
    - Introduces QK-Norm, an additional RMSNorm layer applied to queries and keys inside the attention mechanism.

    3. **Gemma 3**:
    - Employs sliding window attention to reduce memory requirements in the KV cache.
    - Uses a 5:1 ratio of sliding window attention to global attention layers.
    - Combines Pre-Norm and Post-Norm RMSNorm layers around the attention module.

    4. **Mistral Small 3.1**:
    - Outperforms Gemma 3 27B on several benchmarks while being faster.
    - Uses a standard architecture with a custom tokenizer and reduced KV cache and layer count.

    5. **Llama 4**:
    - Adopts an MoE approach similar to DeepSeek V3 but with fewer, larger experts.
    - Alternates MoE and dense modules in every other transformer block.

    6. **Qwen3**:
    - Comes in both dense and MoE variants.
    - Dense models are easier to fine-tune and deploy, while MoE models are optimized for scaling inference.

    7. **SmolLM3**:
    - Uses No Positional Embeddings (NoPE), omitting explicit positional information injection.
    - NoPE improves length generalization, meaning performance deteriorates less with increased sequence length.

    8. **Kimi K2 and Kimi K2 Thinking**:
    - Trained with a variant of the Muon optimizer instead of AdamW.
    - Kimi K2 Thinking extends the context size to 256k tokens.

    9. **GPT-OSS**:
    - OpenAI's first open-weight models since GPT-2.
    - Uses sliding window attention and a width-versus-depth trade-off.

    10. **Grok 2.5**:
    - Uses a small number of large experts and a shared expert module.
    - Reflects an older trend in MoE architectures.

    11. **GLM-4.5**:
    - Comes in two variants: a 355-billion-parameter model and a more compact 106-billion-parameter version.
    - Uses a shared expert and starts with several dense layers before introducing MoE blocks.

    12. **Qwen3-Next**:
    - Introduces a Gated DeltaNet + Gated Attention hybrid mechanism.
    - Uses Multi-Token Prediction (MTP) for efficiency.

    13. **MiniMax-M2**:
    - Uses per-layer QK-Norm and partial RoPE.
    - More "sparse" than Qwen3, with fewer active experts per token.

    14. **Kimi Linear**:
    - Modifies the linear attention mechanism with Kimi Delta Attention (KDA).
    - Combines Gated DeltaNet with Multi-Head Latent Attention (MLA).

    15. **Olmo 3 Thinking**:
    - Uses sliding window attention and YaRN for context extension.
    - Comes in base, instruct, and reasoning variants.

    16. **DeepSeek V3.2**:
    - Adds a sparse attention mechanism to improve efficiency.
    - On par with GPT-5.1 and Gemini 3.0 Pro on certain benchmarks.

    17. **Mistral 3**:
    - Mistral's first MoE model since Mixtral in 2023.
    - Partnered with NVIDIA for optimization on Blackwell chips.

    18. **Nemotron 3**:
    - A Transformer-Mamba hybrid architecture.
    - Interleaves Mamba-2 sequence-modeling blocks with sparse MoE feed-forward layers.

    19. **Xiaomi MiMo-V2-Flash**:
    - Uses sliding window attention in a 5:1 ratio with global attention.
    - Employs multi-token prediction (MTP) for efficiency.

    20. **Arcee AI Trinity Large**:
    - Uses alternating local:global attention layers, NoPE, and gated attention.
    - Introduces depth-scaled sandwich norm for training stability.
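    Several of the entries above (DeepSeek V3, Llama 4, Qwen3, GLM-4.5, Nemotron 3) rest on sparse MoE routing, where a learned router activates only a few experts per token. A minimal PyTorch sketch of top-k routing, illustrating the idea rather than any one model's implementation (real systems add shared experts, load-balancing losses, and capacity limits):

    ```python
    # Minimal sketch of sparse top-k MoE routing (illustrative only).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TopKMoE(nn.Module):
        def __init__(self, d_model=512, d_ff=2048, n_experts=8, k=2):
            super().__init__()
            self.k = k
            self.router = nn.Linear(d_model, n_experts)  # scores every expert
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                              nn.Linear(d_ff, d_model))
                for _ in range(n_experts)
            )

        def forward(self, x):                      # x: (tokens, d_model)
            scores = self.router(x)                # (tokens, n_experts)
            weights, idx = scores.topk(self.k, dim=-1)
            weights = F.softmax(weights, dim=-1)   # renormalize chosen experts
            out = torch.zeros_like(x)
            for slot in range(self.k):             # only k experts run per token
                for e, expert in enumerate(self.experts):
                    mask = idx[:, slot] == e
                    if mask.any():
                        out[mask] += weights[mask, slot, None] * expert(x[mask])
            return out

    moe = TopKMoE()
    y = moe(torch.randn(16, 512))  # 16 tokens, each routed to 2 of 8 experts
    ```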
  6. Alibaba’s Qwen team released the Qwen 3 model family, offering a range of sizes and capabilities. The article discusses the model's features, performance, and the well-coordinated release across the LLM ecosystem, highlighting the trend of better models running on the same hardware.
  7. A Reddit thread discussing preferred local Large Language Model (LLM) setups for tasks like summarizing text, coding, and general use. Users share their model choices (Gemma, Qwen, Phi, etc.) and frameworks (llama.cpp, Ollama, EXUI) along with potential issues and configurations.

    | **Model** | **Use Cases** | **Size (Parameters)** | **Approx. VRAM (Q4 Quantization)** | **Approx. RAM (Q4)** | **Notes/Requirements** |
    |----------------|---------------------------------------------------|------------------------|-----------------------------------|---------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
    | **Gemma 3 (Google)** | Summarization, conversational tasks, image recognition, translation, simple writing | 3B, 4B, 7B, 8B, 12B, 27B+ | 2-4GB (3B), 4-6GB (7B), 8-12GB (12B) | 4-8GB (3B), 8-12GB (7B), 16-24GB (12B) | Excellent performance for its size. Recent versions have had memory leak issues (see Reddit post – use Ollama 0.6.6 or later, but even that may not be fully fixed). QAT versions are highly recommended. |
    | **Qwen 2.5 (Alibaba)** | Summarization, coding, reasoning, decision-making, technical material processing | 3B, 7B, 72B | 2-3GB (3B), 4-6GB (7B), 26-30GB (72B) | 4-6GB (3B), 8-12GB (7B), 50-60GB (72B) | Qwen models are known for strong performance. Coder versions are specifically tuned for code generation. |
    | **Qwen3 (Alibaba - upcoming)**| General purpose, likely similar to Qwen 2.5 with improvements | 70B | Estimated 25-30GB (Q4) | 50-60GB | Expected to be a strong competitor. |
    | **Llama 3 (Meta)**| General purpose, conversation, writing, coding, reasoning | 8B, 13B, 70B+ | 4-6GB (8B), 7-9GB (13B), 25-30GB (70B) | 8-12GB (8B), 14-18GB (13B), 50-60GB (70B) | Current state-of-the-art open-source model. Excellent balance of performance and size. |
    | **YiXin (01.AI)** | Reasoning, brainstorming | 72B | ~26-30GB (Q4) | ~50-60GB | A powerful model focused on reasoning and understanding. Similar VRAM requirements to Qwen 72B. |
    | **Phi-4 (Microsoft)** | General purpose, writing, coding | 14B | ~7-9GB (Q4) | 14-18GB | Smaller model, good for resource-constrained environments, but may not match larger models in complexity. |
    | **Ling-Lite** | RAG (Retrieval-Augmented Generation), fast processing, text extraction | Variable | Varies with size | Varies with size | MoE (Mixture of Experts) model known for speed. Good for RAG applications where quick responses are important. |

    **Key Considerations:**

    * **Quantization:** The VRAM and RAM estimates above assume 4-bit quantization (Q4). Lower-bit quantization (e.g., Q2) reduces memory usage further but *may* hurt quality; higher precision (e.g., Q8, FP16) improves quality but requires significantly more memory (a rough estimator sketch follows this list).
    * **Frameworks:** Popular frameworks for running these models locally include:
      * **llama.cpp:** Highly optimized for CPU and GPU, especially on Apple Silicon.
      * **Ollama:** Simplified setup and management of LLMs. (Be aware of the Gemma 3 memory leak issue!)
      * **Text Generation WebUI (oobabooga):** Web-based interface with many features and customization options.
    * **Hardware:** A dedicated GPU with sufficient VRAM is highly recommended for decent performance. CPU-only inference is possible but can be slow. More RAM is generally better, even if the model fits in VRAM.
    * **Context Length:** The "40k" context mentioned in the Reddit post refers to the maximum number of tokens (words or sub-words) the model can process at once. Longer context lengths require more memory.
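    A rough back-of-envelope for the VRAM figures in the table: weight memory is approximately parameter count times bits per weight, plus overhead for the KV cache and runtime buffers. A minimal sketch; the 20% overhead factor and the ~4.5 effective bits for Q4 are loose assumptions, not numbers from the thread:

    ```python
    # Rough sketch: estimate weight memory for a quantized model.
    # The overhead factor is a guess; KV cache grows with context length.
    def approx_weight_gb(params_billion: float, bits_per_weight: float,
                         overhead: float = 1.2) -> float:
        bytes_total = params_billion * 1e9 * bits_per_weight / 8
        return bytes_total * overhead / 1e9

    # A 7B model at ~4.5 effective bits comes out near 4.7 GB, consistent
    # with the table's 4-6GB Q4 estimate once activations are included.
    print(round(approx_weight_gb(7, 4.5), 1))
    ```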
  8. This document details how to run Qwen models locally using the Text Generation Web UI (oobabooga), covering installation, setup, and launching the web interface.
  9. "OpenHands LM is built on the foundation of Qwen Coder 2.5 Instruct 32B, leveraging its powerful base capabilities for coding tasks."
    2025-04-02 by klotz
  10. A review of the Qwen2.5-VL-32B large language model, noting its performance, capabilities, and how it runs on a 64GB Mac. Includes a demonstration with a map image and performance statistics.
    2025-03-26 by klotz
