klotz: nvidia*


  1. A method that uses instruction tuning to adapt LLMs for knowledge-intensive tasks. RankRAG simultaneously trains the models for context ranking and answer generation, enhancing their retrieval-augmented generation (RAG) capabilities.
  2. NVIDIA and Georgia Tech researchers introduce RankRAG, a novel framework that instruction-tunes a single LLM for both top-k context ranking and answer generation. By unifying the two tasks, it aims to improve context-relevance assessment and answer quality in RAG systems.
  3. A discussion post on Reddit's LocalLLaMA subreddit about logging model output and monitoring performance while running local models, specifically for debugging errors and warnings and for performance analysis. The post also asks for flags to write logs to flat files and to capture GPU metrics (GPU utilization, RAM usage, Tensor Core usage, etc.) for troubleshooting and analytics.
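    The kind of flat-file GPU logging the post asks for can already be approximated with nvidia-smi's query mode. A minimal sketch of collecting and then parsing such a log follows; the field list matches nvidia-smi's `--query-gpu` options, but the sample rows and the `parse_gpu_log` helper are illustrative assumptions, not anything from the post:

```python
import csv
import io

# An nvidia-smi invocation like:
#   nvidia-smi --query-gpu=timestamp,utilization.gpu,memory.used,power.draw \
#              --format=csv,noheader,nounits -l 5 >> gpu.log
# appends CSV rows such as the sample below every 5 seconds.
SAMPLE_LOG = """\
2024/02/02 10:00:01.000, 87, 18432, 310.25
2024/02/02 10:00:06.000, 92, 18500, 315.10
"""

def parse_gpu_log(text):
    """Parse nvidia-smi CSV rows into dicts (hypothetical helper)."""
    rows = []
    for rec in csv.reader(io.StringIO(text)):
        ts, util, mem_mib, watts = [field.strip() for field in rec]
        rows.append({
            "timestamp": ts,
            "gpu_util_pct": int(util),
            "mem_used_mib": int(mem_mib),
            "power_w": float(watts),
        })
    return rows

records = parse_gpu_log(SAMPLE_LOG)
# Summarize peak utilization across the logged interval.
print(max(r["gpu_util_pct"] for r in records))
```

    Appending to a flat file with `>>` and parsing it offline keeps the monitoring loop decoupled from the model process, which is roughly what the post's troubleshooting use case needs.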
  4. This gist provides instructions for creating a systemd service that sets the NVIDIA power limit after boot. The limit can be lowered to improve longevity and efficiency, or raised for performance, depending on the user's needs. The instructions cover checking the current power settings, setting up the systemd service, and an example of the output after a reboot.
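    The gist's approach can be sketched as a unit file along these lines; the unit name, GPU index, and the 250 W cap are illustrative assumptions rather than values from the gist (check your card's supported range with `nvidia-smi -q -d POWER`):

```ini
# /etc/systemd/system/nvidia-power-limit.service  (hypothetical name)
[Unit]
Description=Set NVIDIA GPU power limit at boot
After=systemd-modules-load.service

[Service]
Type=oneshot
# Enable persistence mode, then cap GPU 0 at 250 W (illustrative value).
ExecStart=/usr/bin/nvidia-smi -pm 1
ExecStart=/usr/bin/nvidia-smi -i 0 -pl 250

[Install]
WantedBy=multi-user.target
```

    After placing the file, `sudo systemctl daemon-reload && sudo systemctl enable --now nvidia-power-limit.service` applies the limit immediately and on every subsequent boot.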
  5. - Discusses the use of consumer graphics cards for fine-tuning large language models (LLMs)
    - Compares consumer graphics cards, such as NVIDIA GeForce RTX Series GPUs, to data center and cloud computing GPUs
    - Highlights the differences in GPU memory and price between consumer and data center GPUs
    - Shares the author's experience using a GeForce 3090 RTX card with 24GB of GPU memory for fine-tuning LLMs
    2024-02-02 by klotz
  6. ChatQA, a new family of conversational question-answering (QA) models developed by NVIDIA AI. These models employ a unique two-stage instruction tuning method that significantly improves zero-shot conversational QA results from large language models (LLMs). The ChatQA-70B variant has demonstrated superior performance compared to GPT-4 across multiple conversational QA datasets.
    2024-01-24 by klotz
  7. 2024-01-20 by klotz
  8. Windows only
    2024-01-11 by klotz
  9. 2024-01-11 by klotz
  10. Nvidia Researchers Developed and Open-Sourced a Standardized Machine Learning Framework for Time Series Forecasting

    Nvidia researchers have developed and open-sourced a standardized machine learning framework called TSPP (Time Series Prediction Platform) for time series forecasting. The framework is designed to facilitate the integration and comparison of various models and datasets, covering the entire machine learning process from data handling to model deployment.

    TSPP includes critical components such as data handling, model design, optimization, and training, as well as inference on unseen data and a tuner component that selects the top configuration for post-deployment monitoring and uncertainty quantification.
    2024-01-05 by klotz


