Tags: nvidia + machine learning


  1. Ollama has partnered with NVIDIA to optimize performance on the new NVIDIA DGX Spark, powered by the GB10 Grace Blackwell Superchip, making it fast to prototype and run local language models.
  2. Nvidia's DGX Spark is a relatively affordable AI workstation that prioritizes capacity over raw speed, enabling it to run models that consumer GPUs cannot. It features 128GB of memory and is based on the Blackwell architecture.
  3. This article details how to accelerate deep learning and LLM inference using Apache Spark, focusing on distributed inference strategies. It covers basic deployment with `predict_batch_udf`, advanced deployment with inference servers like NVIDIA Triton and vLLM, and deployment on cloud platforms like Databricks and Dataproc. It also provides guidance on resource management and configuration for optimal performance.
  4. Running GenAI models is easy. Scaling them to thousands of users, not so much. This guide details avenues for scaling AI workloads from proofs of concept to production-ready deployments, covering API integration, on-prem deployment considerations, hardware requirements, and tools like vLLM and Nvidia NIMs.
  5. NVIDIA DGX Spark is a desktop-friendly AI supercomputer powered by the NVIDIA GB10 Grace Blackwell Superchip, delivering 1000 AI TOPS of performance with 128GB of memory. It is designed for prototyping, fine-tuning, and inference of large AI models.
  6. Nvidia Researchers Developed and Open-Sourced a Standardized Machine Learning Framework for Time Series Forecasting

    Nvidia researchers have developed and open-sourced TSPP (Time Series Prediction Platform), a standardized machine learning framework for time series forecasting. The framework is designed to facilitate the integration and comparison of models and datasets, covering the machine learning process end to end, from data handling to model deployment.

    TSPP includes components for data handling, model design, optimization, and training, as well as inference, prediction on unseen data, and a tuner that selects the top configuration for post-deployment monitoring and uncertainty quantification.
    2024-01-05, by klotz
  7. Delving into transformer networks
  8. 2021-04-13, by klotz
  9. 2021-02-01, by klotz
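The `predict_batch_udf` deployment path mentioned in item 3 can be sketched roughly as follows. The linear "model" here is a hypothetical stand-in for a real checkpoint, and the Spark wiring is shown commented out because it requires a running Spark session; `predict_batch_udf` calls the factory function once per executor so the model is loaded only once per worker.

```python
import numpy as np

# With Spark available, these imports wire the function into a DataFrame job:
# from pyspark.ml.functions import predict_batch_udf
# from pyspark.sql.types import FloatType

def make_predict_fn():
    """Factory invoked once per executor: load the model, return a batch predictor."""
    # Hypothetical model: a fixed linear layer standing in for a real checkpoint.
    weights = np.array([0.5, -0.25])

    def predict(inputs: np.ndarray) -> np.ndarray:
        # inputs arrives as a numpy batch of shape (batch_size, n_features)
        return inputs @ weights

    return predict

# Spark usage (sketch, assuming a DataFrame `df` with a "features" column):
# predict_udf = predict_batch_udf(make_predict_fn,
#                                 return_type=FloatType(),
#                                 batch_size=1024)
# df = df.withColumn("prediction", predict_udf("features"))
```

Batching is the point of this API: Spark hands the predictor contiguous numpy batches instead of one row at a time, which is what lets GPU inference amortize transfer overhead.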
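For the vLLM route discussed in item 4, a common pattern is to talk to a locally served model through vLLM's OpenAI-compatible HTTP API. A minimal stdlib-only sketch, assuming `vllm serve` is running on port 8000; the model name is a placeholder that must match whatever model the server was started with:

```python
import json
import urllib.request

def build_chat_request(prompt: str, model: str = "my-served-model") -> dict:
    """Build the JSON payload for the OpenAI-compatible /v1/chat/completions route."""
    return {
        "model": model,  # placeholder: must match the model name vLLM is serving
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }

def chat(prompt: str, base_url: str = "http://localhost:8000/v1") -> str:
    """Send one chat turn to a running vLLM server and return the reply text."""
    req = urllib.request.Request(
        base_url + "/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the endpoint mirrors the OpenAI API shape, the same client code can later be pointed at NVIDIA NIM or a hosted API by changing only `base_url`, which is what makes the proof-of-concept-to-production path in the article workable.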


SemanticScuttle - klotz.me: tagged with "nvidia+machine learning"
