The RTX 3090 offers a compelling combination of performance and 24GB of VRAM, making it a better choice for local LLM and AI workloads than newer NVIDIA Blackwell GPUs such as the RTX 5070 and even the RTX 5080, which ship with less VRAM at comparable or higher prices.
Based on the discussion, /u/septerium reported the best results for GLM 4.7 Flash (UD-Q6_K_XL) on an RTX 5090 with the following settings and parameters (a command-line sketch follows the list):
- GPU: NVIDIA RTX 5090.
- Throughput: ~150 tokens/s.
- Context: 48k tokens fit entirely in VRAM.
- Quantization: UD-Q6_K_XL (Unsloth Dynamic quantized GGUF).
- Flash Attention: enabled (-fa on).
- Context size: 48,000 (--ctx-size 48000).
- GPU layers: 99 (-ngl 99), so the entire model runs on the GPU.
- Sampler & inference parameters:
- Temperature: 0.7 (recommended by Unsloth for tool calls).
- Top-P: 1.0.
- Min-P: 0.01.
- Repeat penalty: must be disabled (llama.cpp disables it by default, but users warned that other runtimes may not).
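Putting those flags together, a minimal llama-server launch would look roughly like the sketch below. The model filename is a placeholder, and the flag spellings assume a recent llama.cpp build:

```bash
# Sketch only: model filename is a placeholder for the actual GGUF file.
# -ngl 99 offloads every layer to the GPU; -fa on enables Flash Attention;
# --repeat-penalty 1.0 keeps the penalty disabled (llama.cpp's default).
./llama-server \
  -m GLM-4.7-Flash-UD-Q6_K_XL.gguf \
  -ngl 99 \
  -fa on \
  --ctx-size 48000 \
  --temp 0.7 \
  --top-p 1.0 \
  --min-p 0.01 \
  --repeat-penalty 1.0
```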
CUDA Tile is a new Python package that simplifies GPU programming by automatically tiling loops, handling data transfer, and optimizing memory access. It allows developers to write concise and readable code that leverages the full power of NVIDIA GPUs without needing to manually manage the complexities of parallel programming.
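The entry doesn't show CUDA Tile's own API, so as a rough illustration of the "loop tiling" idea such a package automates, here is the manual pattern in plain NumPy (a hypothetical helper, not CUDA Tile code):

```python
import numpy as np

# Manual loop tiling: process a large array in fixed-size tiles so each
# chunk fits in fast memory. This is the bookkeeping a tile-programming
# package automates, mapping tiles onto GPU thread blocks instead of a loop.
TILE = 1024

def scaled_sum(x: np.ndarray, scale: float) -> float:
    total = 0.0
    for start in range(0, x.size, TILE):
        tile = x[start:start + TILE]  # one tile of the input
        total += float(np.sum(tile * scale))
    return total

x = np.arange(10_000, dtype=np.float32)
assert np.isclose(scaled_sum(x, 2.0), float(x.sum()) * 2.0)
```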
NVIDIA Nemotron Parse v1.1 is designed to understand document semantics and extract text and table elements with spatial grounding, transforming unstructured documents into actionable, machine-usable representations.
A new patch enables NVIDIA GPU support on Raspberry Pi 5 and Rockchip devices, allowing GPU-accelerated compute tasks. The article details the setup process, performance testing with llama.cpp, and current limitations with display output.
NVIDIA AI releases Nemotron-Elastic-12B, a 12B-parameter reasoning model that embeds nested 9B and 6B variants in the same parameter space, yielding multiple model sizes from a single training job.
This blog post details how to build a natural language Bash agent using NVIDIA Nemotron Nano v2, requiring roughly 200 lines of Python code. It covers the core components, safety considerations, and offers both a from-scratch implementation and a simplified approach using LangGraph.
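A minimal sketch of the pattern the post describes, not its actual code: an OpenAI-compatible client pointed at a hypothetical local endpoint serving Nemotron Nano v2 (endpoint URL and model id are placeholders), with a confirmation gate before anything executes:

```python
import subprocess
from openai import OpenAI

# Sketch of a natural-language Bash agent, not the blog's implementation.
# Assumes an OpenAI-compatible server is running locally and serving a
# Nemotron Nano v2 model; base_url and model name are placeholders.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

SYSTEM = (
    "You translate the user's request into a single Bash command. "
    "Reply with the command only, no explanation."
)

def run_agent() -> None:
    while True:
        request = input("task> ").strip()
        if request in ("quit", "exit"):
            break
        resp = client.chat.completions.create(
            model="nvidia/nemotron-nano-v2",  # placeholder model id
            messages=[
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": request},
            ],
            temperature=0.0,
        )
        command = resp.choices[0].message.content.strip()
        # Safety gate: never execute without explicit user confirmation.
        if input(f"run `{command}`? [y/N] ").lower() != "y":
            continue
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        print(result.stdout or result.stderr)

if __name__ == "__main__":
    run_agent()
```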
This discussion details performance benchmarks of llama.cpp on an NVIDIA DGX Spark, including tests for various models (gpt-oss-20b, gpt-oss-120b, Qwen3, Qwen2.5, Gemma, GLM) with different context depths and batch sizes.
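For reference, a sweep of that shape can be run with llama-bench, roughly as below; the model path is a placeholder, and the exact flag set in the discussion may differ:

```bash
# Sketch of a llama-bench context-depth sweep, not the thread's exact command.
# -d sets the context depths at which prompt processing (-p) and token
# generation (-n) are measured; -fa 1 enables Flash Attention.
./llama-bench \
  -m gpt-oss-20b.gguf \
  -fa 1 \
  -d 0,4096,8192,16384,32768 \
  -p 2048 \
  -n 32
```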
Ollama has partnered with NVIDIA to optimize performance on the new NVIDIA DGX Spark, powered by the GB10 Grace Blackwell Superchip, enabling fast prototyping and running of local language models.
This article details the integration of Docker Model Runner with the NVIDIA DGX Spark, enabling faster and simpler local AI model development. It covers setup, usage, and benefits like data privacy, offline availability, and ease of customization.
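As a rough idea of the Docker Model Runner workflow (the model tag is an example from Docker Hub's ai/ namespace; available tags vary):

```bash
# Sketch of the Docker Model Runner workflow; model tag is an example.
docker model pull ai/smollm2          # fetch a model from Docker Hub's ai/ namespace
docker model run ai/smollm2 "Hello"   # one-shot prompt against the local model
docker model list                     # show locally available models
```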