Google has launched the Gemma 4 model family, and NVIDIA has optimized it to run efficiently across a wide range of hardware, from data-center GPUs to edge devices such as Jetson. The new generation includes the first Gemma mixture-of-experts (MoE) model and supports more than 140 languages, enabling capabilities such as reasoning, code generation, and multimodal input.
Developers can fine-tune Gemma 4 with NeMo AutoModel and deploy it with NVIDIA NIM, under license terms that permit commercial use. The models are also optimized for local deployment through frameworks such as vLLM, Ollama, and llama.cpp, offering flexibility for use cases ranging from robotics and smart machines to secure on-premises applications.
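In practice, both NVIDIA NIM and a vLLM server expose an OpenAI-compatible `/v1/chat/completions` HTTP endpoint, so the same client code can target either backend. The sketch below builds such a request payload; the model identifier `google/gemma-4` and the endpoint path are placeholders for illustration, not confirmed names from an official model card.

```python
import json

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-compatible /v1/chat/completions payload,
    the request shape accepted by vLLM and NIM inference servers."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# "google/gemma-4" is a hypothetical identifier used here for illustration.
payload = build_chat_request("google/gemma-4", "Summarize MoE routing in two sentences.")
print(json.dumps(payload, indent=2))
```

To send the request, POST the JSON body to the server's `/v1/chat/completions` route (for example with the `openai` Python client pointed at the local base URL); swapping between a NIM container and a local vLLM instance then only requires changing the base URL.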