This discussion presents llama.cpp performance benchmarks on an NVIDIA DGX Spark, covering several models (gpt-oss-20b, gpt-oss-120b, Qwen3, Qwen2.5, Gemma, GLM) across varying context depths and batch sizes.
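Benchmarks like these are typically produced with the `llama-bench` tool that ships with llama.cpp. A minimal sketch of such a run might look like the following; the model filename is a placeholder, and the exact flag set depends on the llama.cpp build:

```shell
# Sketch of a llama-bench invocation (model path is hypothetical).
# -p: prompt-processing token count, -n: tokens to generate,
# -d: context depth at which to measure (recent llama.cpp builds).
llama-bench -m gpt-oss-20b.gguf -p 2048 -n 32 -d 0,4096,16384
```

The tool reports prompt-processing and token-generation throughput (tokens/s) for each configuration, which is how per-depth and per-batch comparisons in discussions like this one are usually generated.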
Ollama has partnered with NVIDIA to optimize performance on the new NVIDIA DGX Spark, powered by the GB10 Grace Blackwell Superchip, enabling fast prototyping and running of local language models.
This article details the integration of Docker Model Runner with the NVIDIA DGX Spark, enabling faster and simpler local AI model development. It covers setup, usage, and benefits like data privacy, offline availability, and ease of customization.
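Docker Model Runner exposes models through the `docker model` CLI subcommand. A minimal usage sketch, assuming Docker Desktop/Engine with Model Runner enabled (the model name is illustrative):

```shell
# Pull a model image from Docker Hub's ai/ namespace, then run a prompt
# against it locally (model name is an example, not from the article).
docker model pull ai/smollm2
docker model run ai/smollm2 "Summarize what the DGX Spark is."
```

Because inference runs entirely on the local machine, this is what gives the data-privacy and offline-availability benefits the article describes.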
Simon Willison received a preview unit of the NVIDIA DGX Spark, a desktop "AI supercomputer" retailing for around $4,000. He details his experience setting it up and navigating the ecosystem, highlighting both the hardware's impressive specs (ARM64, 128GB RAM, Blackwell GPU) and the initial software challenges.
Key takeaways:
* **Hardware:** The DGX Spark is a compact, powerful machine aimed at AI researchers.
* **Software Hurdles:** Initial setup was complicated by the need for ARM64-compatible software and CUDA configurations, though NVIDIA has significantly improved documentation recently.
* **Tools & Ecosystem:** Claude Code was invaluable for troubleshooting. Ollama, `llama.cpp`, LM Studio, and vLLM are already gaining support for the Spark, indicating a growing ecosystem.
* **Networking:** Tailscale simplifies remote access.
* **Early Verdict:** It's too early to definitively recommend the device, but recent ecosystem improvements are promising.
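Of the tools named above, Ollama offers the simplest entry point on a machine like the Spark. A minimal sketch (the model tag is an example, not one Willison names for this device):

```shell
# Pull and chat with a local model via Ollama; the model tag is illustrative.
ollama run llama3.2

# Combined with Tailscale, the same local API can be reached remotely
# over the tailnet at Ollama's default port, 11434.
```

This pairing of a local model server with Tailscale is what makes the remote-access setup in the takeaways practical.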
NVIDIA's DGX Spark is a relatively affordable AI workstation that prioritizes memory capacity over raw speed, enabling it to run models that consumer GPUs cannot fit. It offers 128GB of memory and is based on the Blackwell architecture.
NVIDIA DGX Spark is a desktop-friendly AI supercomputer powered by the NVIDIA GB10 Grace Blackwell Superchip, delivering 1000 AI TOPS of performance with 128GB of memory. It is designed for prototyping, fine-tuning, and inference of large AI models.