This repository provides the GGUF quantized weights for Qwen3.6-27B, a flagship-level coding model designed for stability and real-world utility. The model features significant upgrades in agentic coding, handling frontend workflows and repository-level reasoning with high precision. It also introduces thinking preservation, which retains reasoning context from earlier messages in a conversation to improve iterative development.
Key technical highlights:
* Native context length of 262,144 tokens, extensible to 1,010,000 tokens via YaRN RoPE scaling.
* Enhanced tool-calling capabilities for complex agentic tasks.
* Support for multimodal inputs including images and video.
* Optimized for various inference frameworks such as SGLang, vLLM, and KTransformers.
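Since these are GGUF weights, a common way to use the extended context is through llama.cpp's server, which supports YaRN RoPE scaling at load time. The sketch below is an assumption about a typical deployment, not an official launch command; the GGUF filename is a placeholder for whichever quantization you download:

```shell
# Launch llama.cpp's OpenAI-compatible server with YaRN scaling enabled.
# --rope-scaling selects the scaling method, --rope-scale the factor
# (1,010,000 / 262,144 ≈ 3.85, rounded up to 4), and --yarn-orig-ctx the
# model's native pre-training context length.
llama-server \
  -m ./Qwen3.6-27B-Q4_K_M.gguf \
  -c 1010000 \
  --rope-scaling yarn \
  --rope-scale 4 \
  --yarn-orig-ctx 262144
```

Note that reserving the full 1M-token context requires substantial KV-cache memory; for most workloads a smaller `-c` value with the same YaRN settings is sufficient.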
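Tool calling is typically exercised through an OpenAI-compatible endpoint, which vLLM, SGLang, and llama.cpp all expose. The sketch below only constructs a request payload in the standard function-calling schema; the tool name `get_file_contents`, the served model name, and the tool itself are illustrative assumptions, not part of this repository:

```python
import json

# Hypothetical tool definition in the OpenAI function-calling schema,
# accepted by OpenAI-compatible /v1/chat/completions endpoints.
tools = [{
    "type": "function",
    "function": {
        "name": "get_file_contents",  # illustrative tool, not shipped here
        "description": "Read a file from the working repository.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

payload = {
    "model": "Qwen3.6-27B",  # served model name is deployment-specific
    "messages": [
        {"role": "user", "content": "Summarize what src/main.py does."}
    ],
    "tools": tools,
    "tool_choice": "auto",  # let the model decide whether to call the tool
}

# The payload is plain JSON, so it can be POSTed with any HTTP client.
body = json.dumps(payload)
```

When the model decides to call the tool, the response contains a `tool_calls` entry whose arguments you execute locally before sending the result back as a `role: "tool"` message.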