Tags: nvidia*


  1. Not an Ubuntu bug, they say.
    2025-05-31 by klotz
  2. old bug report
    2025-05-31 by klotz
  3. This comment details a workaround for nvidia-driver-390 on Ubuntu systems with kernel 6.5.0. It links to related bug reports and provides instructions to add a PPA and install updated drivers.
    ```
    sudo add-apt-repository ppa:dtl131/nvidiaexp
    sudo apt update
    sudo apt install nvidia-driver-390
    ```
    2025-05-31 by klotz
  4. A user, nicholasdavidroberts, expresses gratitude to Daniel for providing a PPA and patched 390 driver that resolved their NVIDIA driver compilation issues on Ubuntu 22.04 with kernel 6.5.0-14.

    ```
    # execute_with_retries is a helper defined elsewhere in the quoted script that re-runs the command on transient failures
    execute_with_retries apt-get install -y -qq gcc-12
    # Register gcc-11 and gcc-12 as alternatives, then make gcc-12 the system default compiler
    update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-11 11
    update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-12 12
    update-alternatives --set gcc /usr/bin/gcc-12
    ```
    2025-05-31 by klotz
  5. A user reports issues compiling the NVIDIA driver with kernel 6.5.0-14 on Ubuntu 22.04, specifically for a GeForce GT 750M. They provide a patch and instructions for creating a custom deb package to resolve the issue.
    2025-05-31 by klotz
  6. This article details how to accelerate deep learning and LLM inference using Apache Spark, focusing on distributed inference strategies. It covers basic deployment with `predict_batch_udf`, advanced deployment with inference servers like NVIDIA Triton and vLLM, and deployment on cloud platforms like Databricks and Dataproc. It also provides guidance on resource management and configuration for optimal performance.
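
    A minimal sketch of the basic `predict_batch_udf` pattern the article describes, assuming a TorchScript model at a hypothetical path and a DataFrame whose `features` column holds 128-float vectors (both are illustrative, not taken from the article):

    ```
    from pyspark.sql import SparkSession
    from pyspark.sql.types import FloatType
    from pyspark.ml.functions import predict_batch_udf

    def make_predict_fn():
        # Runs once per Python worker, so the model is loaded once rather than per row
        import torch
        model = torch.jit.load("/models/classifier.pt")  # hypothetical model path
        model.eval()

        def predict(inputs):
            # `inputs` arrives as a numpy batch of shape (batch_size, 128);
            # return one float per input row
            with torch.no_grad():
                return model(torch.from_numpy(inputs).float()).numpy().reshape(-1)

        return predict

    spark = SparkSession.builder.getOrCreate()
    df = spark.read.parquet("/data/features.parquet")  # hypothetical input table

    classify = predict_batch_udf(
        make_predict_fn,
        return_type=FloatType(),
        batch_size=64,
        input_tensor_shapes=[[128]],
    )
    df.withColumn("prediction", classify("features")).write.parquet("/data/predictions.parquet")
    ```
    Loading the model inside `make_predict_fn` avoids shipping it from the driver with every task and lets Spark hand the GPU full batches instead of single rows.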
  7. Running GenAI models is easy. Scaling them to thousands of users, not so much. This guide details avenues for scaling AI workloads from proofs of concept to production-ready deployments, covering API integration, on-prem deployment considerations, hardware requirements, and tools like vLLM and Nvidia NIMs.
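
    A minimal sketch of the API-integration path: querying a self-hosted vLLM server through its OpenAI-compatible endpoint. The host, port, and model name are assumptions, not details from the guide:

    ```
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8000/v1",  # assumed local vLLM deployment (OpenAI-compatible server)
        api_key="not-needed-locally",         # vLLM ignores the key unless one was configured
    )

    response = client.chat.completions.create(
        model="meta-llama/Llama-3.1-8B-Instruct",  # whichever model the server was launched with
        messages=[{"role": "user", "content": "Summarize why batching improves GPU utilization."}],
        max_tokens=200,
    )
    print(response.choices[0].message.content)
    ```
    Because NVIDIA NIM microservices expose the same OpenAI-compatible API, a proof of concept written against a hosted endpoint can later be pointed at an on-prem vLLM or NIM deployment by changing only `base_url`.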
  8. This Splunk Lantern article outlines how to monitor GenAI applications with Splunk Observability Cloud, covering setup with OpenTelemetry, NVIDIA GPU metrics, Python instrumentation, and OpenLIT integration. The applications in scope are built with Python, LLMs (OpenAI's GPT-4o, Anthropic's Claude 3.5 Haiku, Meta's Llama), NVIDIA GPUs, Langchain, and vector databases (Pinecone, Chroma). The article describes a six-step process:

    1. **Access Splunk Observability Cloud:** Sign up for a free trial if needed.
    2. **Deploy Splunk Distribution of OpenTelemetry Collector:** Use a Helm chart to install the collector in Kubernetes.
    3. **Capture NVIDIA GPU Metrics:** Utilize the NVIDIA GPU Operator and Prometheus receiver in the OpenTelemetry Collector.
    4. **Instrument Python Applications:** Use the Splunk Distribution of OpenTelemetry Python agent for automatic instrumentation and enable Always On Profiling.
    5. **Enhance with OpenLIT:** Install and initialize OpenLIT to capture detailed trace data, including LLM calls and interactions with vector databases (with options to disable PII capture); a minimal initialization sketch appears at the end of this item.
    6. **Start Using the Data:** Leverage the collected metrics and traces, including features like Tag Spotlight, to identify and resolve performance issues (example given: OpenAI rate limits).

    The article emphasizes OpenTelemetry's role in GenAI observability and highlights how Splunk Observability Cloud facilitates monitoring these complex applications, providing insights into performance, cost, and potential bottlenecks. It also points to resources for help and further information on specific aspects of the process.
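
    A minimal sketch of steps 4 and 5: a Python service run under the Splunk Distribution of OpenTelemetry Python agent, with OpenLIT initialized to capture LLM-level telemetry. The collector endpoint, service name, and model are assumptions rather than values from the article:

    ```
    import openlit
    from openai import OpenAI

    # Point OpenLIT at the OpenTelemetry Collector deployed in step 2 (assumed in-cluster address)
    openlit.init(
        otlp_endpoint="http://otel-collector.observability.svc:4318",
        application_name="genai-demo",
    )

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # OpenLIT records model, token counts, and latency for this call; the Splunk OTel Python
    # agent (e.g. launched via splunk-py-trace) contributes the surrounding application spans.
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello from an observed GenAI app"}],
    )
    print(reply.choices[0].message.content)
    ```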
  9. NVIDIA DGX Spark is a desktop-friendly AI supercomputer powered by the NVIDIA GB10 Grace Blackwell Superchip, delivering 1000 AI TOPS of performance with 128GB of memory. It is designed for prototyping, fine-tuning, and inference of large AI models.
  10. NVIDIA's Project Aether automates the qualification, testing, configuration, and optimization of Spark workloads for GPU acceleration, enabling enterprises to process data more efficiently and cost-effectively.
