This article details the release of Gemma 3, the latest iteration of Google's open-weights language model family. Key improvements include **vision-language capabilities** (via a tailored SigLIP encoder), **longer context** (up to 128k tokens for the larger models), and **architectural changes for better memory efficiency** (5-to-1 interleaved local/global attention and removal of softcapping). Gemma 3 outperforms Gemma 2 across benchmarks and offers model sizes suited to a range of use cases, including on-device deployment with the 1B model.
Google has released Gemma 3, a new iteration of its Gemma family of models. The family spans 1B to 27B parameters, supports context lengths of up to 128k tokens, accepts both images and text as input, and covers 140+ languages. This article details the technical enhancements (longer context, multimodality, multilinguality) and covers inference with Hugging Face transformers, on-device deployment, and evaluation.
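To make the multimodal inference mentioned above concrete, here is a minimal sketch using the transformers `image-text-to-text` pipeline. The checkpoint name `google/gemma-3-4b-it`, the image URL, and the helper function names are illustrative assumptions; the chat-message structure follows the standard transformers multimodal chat format.

```python
def build_messages(image_url: str, prompt: str) -> list:
    # Gemma 3's chat format lets a user turn interleave image and text blocks.
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": image_url},
                {"type": "text", "text": prompt},
            ],
        }
    ]


def describe_image(image_url: str, prompt: str,
                   model_id: str = "google/gemma-3-4b-it") -> str:
    # Deferred import: transformers pulls in torch, and instantiating the
    # pipeline downloads the checkpoint on first use.
    from transformers import pipeline

    pipe = pipeline("image-text-to-text", model=model_id)
    out = pipe(text=build_messages(image_url, prompt), max_new_tokens=64)
    # The pipeline returns the full chat transcript; the last turn is the
    # model's reply.
    return out[0]["generated_text"][-1]["content"]
```

A call like `describe_image("https://example.com/photo.png", "Describe this image.")` would then return the model's text answer; note that loading the 4B instruction-tuned checkpoint requires accepting the Gemma license on the Hugging Face Hub.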