This repository provides the GGUF-quantized weights for Qwen3.6-27B, a flagship-level coding model designed for stability and real-world utility. The model features significant upgrades in agentic coding, allowing it to handle frontend workflows and repository-level reasoning with high precision. It also introduces thinking preservation, which lets the model retain reasoning content from earlier messages in a conversation to improve iterative development.
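In client terms, thinking preservation means earlier assistant turns keep their reasoning payload rather than having it stripped before the next request. Below is a hedged sketch of that pattern against an OpenAI-compatible endpoint; the localhost URL, the `reasoning_content` field name, and whether the server actually consumes it on later turns are assumptions, not a documented contract.

```python
# Hypothetical sketch: carrying prior reasoning forward in the chat
# history so the model can build on it. URL and field names are assumed.
import requests

history = [
    {"role": "user", "content": "Refactor utils.py to remove the global cache."},
    {
        "role": "assistant",
        "content": "Done; the cache now lives in a Cache class.",
        # With thinking preservation, this reasoning is kept in the
        # history instead of being dropped before the follow-up (assumption).
        "reasoning_content": "The global dict is mutated in three places...",
    },
    {"role": "user", "content": "Now add an LRU eviction policy."},
]

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",  # assumed local server
    json={"model": "Qwen3.6-27B", "messages": history},
    timeout=600,
)
print(resp.json()["choices"][0]["message"]["content"])
```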
Key technical highlights:
* Native context length of 262,144 tokens, extensible up to 1,010,000 via YaRN RoPE scaling (see the loading sketch after this list).
* Enhanced tool-calling capabilities for complex agentic tasks.
* Support for multimodal inputs including images and video.
* Optimized for popular inference frameworks such as SGLang, vLLM, and KTransformers.
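As referenced in the context-length bullet above, here is a minimal sketch of loading the GGUF with YaRN scaling enabled, using llama-cpp-python as the runtime (llama.cpp is the usual GGUF loader, even though it is not among the servers listed). The quant file name is hypothetical, and the exact scale values this model needs are assumptions; the factor follows from 1,010,000 / 262,144 ≈ 3.85.

```python
# A minimal sketch, assuming llama-cpp-python and a hypothetical quant
# file name; parameter values for this specific model are assumptions.
from llama_cpp import Llama, LLAMA_ROPE_SCALING_TYPE_YARN

llm = Llama(
    model_path="Qwen3.6-27B-Q4_K_M.gguf",  # hypothetical file name
    n_ctx=1_010_000,        # extended window (KV cache at this size is huge)
    rope_scaling_type=LLAMA_ROPE_SCALING_TYPE_YARN,
    yarn_orig_ctx=262_144,  # native pre-training context
    rope_freq_scale=262_144 / 1_010_000,  # ~1/3.85; llama.cpp takes the inverse factor
    n_gpu_layers=-1,        # offload all layers if VRAM allows
)

print(llm("def quicksort(arr):", max_tokens=128)["choices"][0]["text"])
```

For SGLang or vLLM the equivalent is typically a `rope_scaling` entry in the model config or a server launch flag; consult each framework's documentation.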
Unsloth AI presents performance benchmarks for its Qwen3.6-35B-A3B GGUF quantizations, claiming state-of-the-art mean KL divergence across most model sizes. The discussion includes community analysis of SWE-bench Verified performance, where some users noted unexpected discrepancies between Qwen3.5 and Qwen3.6 quantizations on coding tasks.
Key points:
- Unsloth ranks first in 21 of 22 model sizes for mean KL divergence (a sketch of this metric follows the list).
- Community debate over SWE-bench testing methodology and sample sizes.
- Reported performance variations between different quantization levels (Q4, Q5, Q6, Q8).
- Discussion on system prompt adherence and error rates in coding benchmarks.
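For context on the headline metric, this is a hedged sketch of how a mean-KL comparison between a full-precision reference and a quantized candidate is typically computed over next-token distributions; the function names and toy data are illustrative, not Unsloth's actual harness.

```python
# Illustrative mean KL divergence between full-precision and quantized
# next-token distributions; not Unsloth's actual evaluation harness.
import numpy as np

def log_softmax(logits: np.ndarray) -> np.ndarray:
    """Row-wise log-softmax, stabilized by subtracting the max logit."""
    z = logits - logits.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def mean_kl(full_logits: np.ndarray, quant_logits: np.ndarray) -> float:
    """Mean KL(P_full || P_quant) over positions; inputs are (positions, vocab)."""
    log_p = log_softmax(full_logits)   # reference distribution
    log_q = log_softmax(quant_logits)  # quantized candidate
    kl_per_position = (np.exp(log_p) * (log_p - log_q)).sum(axis=-1)
    return float(kl_per_position.mean())

# Toy usage: perturb reference logits to mimic quantization noise.
rng = np.random.default_rng(0)
p = rng.normal(size=(4, 8))                  # 4 positions, 8-token vocab
q = p + rng.normal(scale=0.1, size=(4, 8))
print(f"mean KL: {mean_kl(p, q):.6f}")
```

Lower is better: the metric measures how far the quantized model's token distributions drift from the full-precision model's, independent of any downstream benchmark.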