klotz: nvidia* + llm*


  1. This article introduces model merging, a technique that combines the weights of multiple customized large language models to increase resource utilization and add value to successful models.
    2024-10-30 by klotz
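    A minimal sketch of the idea, assuming a simple equal-weight ("model soup") merge of two same-architecture Hugging Face checkpoints; the model ids and the 50/50 blend are illustrative placeholders, not taken from the article:
    ```python
    import torch
    from transformers import AutoModelForCausalLM

    BASE_ID = "meta-llama/Llama-3.1-8B"            # assumed shared base architecture
    TUNED_ID = "my-org/llama-3.1-8b-customized"    # hypothetical customized fine-tune

    base = AutoModelForCausalLM.from_pretrained(BASE_ID, torch_dtype=torch.bfloat16)
    tuned = AutoModelForCausalLM.from_pretrained(TUNED_ID, torch_dtype=torch.bfloat16)

    # Average the two sets of weights parameter-by-parameter.
    tuned_state = tuned.state_dict()
    merged_state = {
        name: 0.5 * param + 0.5 * tuned_state[name]   # equal-weight linear average
        for name, param in base.state_dict().items()
    }

    base.load_state_dict(merged_state)
    base.save_pretrained("llama-3.1-8b-merged")        # reuse as a single merged model
    ```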
  2. NVIDIA introduces NIM Agent Blueprints, a collection of pre-trained, customizable AI workflows for common use cases like customer service avatars, PDF extraction, and drug discovery, aiming to simplify generative AI development for businesses.
    2024-08-30 by klotz
  3. Run:ai offers a platform to accelerate AI development, optimize GPU utilization, and manage AI workloads. It provides both CLI and GUI interfaces and supports a range of AI tools and frameworks.
  4. A startup called Backprop has demonstrated that a single Nvidia RTX 3090 GPU, released in 2020, can handle serving a modest large language model (LLM) like Llama 3.1 8B to over 100 concurrent users with acceptable throughput. This suggests that expensive enterprise GPUs may not be necessary for scaling LLMs to a few thousand users.
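    A hedged sketch of how a single 24 GB GPU can serve many concurrent requests, using vLLM's batched-inference API; this is not necessarily the stack Backprop used, and the model id, context length, and memory fraction are assumptions chosen to fit 24 GB:
    ```python
    from vllm import LLM, SamplingParams

    llm = LLM(
        model="meta-llama/Llama-3.1-8B-Instruct",  # assumed 8B model
        dtype="bfloat16",
        max_model_len=8192,            # shorter context keeps the KV cache within 24 GB
        gpu_memory_utilization=0.90,
    )

    params = SamplingParams(temperature=0.7, max_tokens=256)

    # Continuous batching lets one GPU work through many requests at once.
    prompts = [f"User {i}: summarize what model merging is." for i in range(100)]
    outputs = llm.generate(prompts, params)
    for out in outputs[:3]:
        print(out.outputs[0].text)
    ```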
  5. A method that uses instruction tuning to adapt LLMs for knowledge-intensive tasks. RankRAG simultaneously trains the models for context ranking and answer generation, enhancing their retrieval-augmented generation (RAG) capabilities.
  6. NVIDIA and Georgia Tech researchers introduce RankRAG, a novel framework that instruction-tunes a single LLM for both top-k context ranking and answer generation, improving context relevance assessment and answer quality in RAG systems.
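    A conceptual sketch of the RankRAG idea, in which one LLM handles both passage ranking and answer generation; the prompt templates and the generate_text() helper are hypothetical stand-ins, not RankRAG's actual training or inference code:
    ```python
    from typing import Callable, List

    def rank_then_answer(
        question: str,
        passages: List[str],
        generate_text: Callable[[str], str],  # any LLM completion function
        top_k: int = 3,
    ) -> str:
        # Stage 1: ask the same model to score each passage's relevance.
        scored = []
        for p in passages:
            prompt = (f"Question: {question}\nPassage: {p}\n"
                      "On a scale of 0-10, how relevant is the passage? "
                      "Answer with a single number.")
            try:
                score = float(generate_text(prompt).strip().split()[0])
            except ValueError:
                score = 0.0
            scored.append((score, p))

        # Stage 2: answer using only the top-k ranked passages as context.
        top = sorted(scored, key=lambda t: t[0], reverse=True)[:top_k]
        context = "\n\n".join(p for _, p in top)
        answer_prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
        return generate_text(answer_prompt)
    ```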
  7. - Discusses the use of consumer graphics cards for fine-tuning large language models (LLMs)
    - Compares consumer graphics cards, such as NVIDIA GeForce RTX Series GPUs, to data center and cloud computing GPUs
    - Highlights the differences in GPU memory and price between consumer and data center GPUs
    - Shares the author's experience using a GeForce RTX 3090 card with 24GB of GPU memory for fine-tuning LLMs
    2024-02-02 by klotz
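    A minimal sketch of the kind of fine-tuning that fits on a 24 GB consumer card, assuming a QLoRA-style setup with 4-bit quantization and LoRA adapters; the model id and hyperparameters are illustrative, not the author's exact recipe:
    ```python
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model

    MODEL_ID = "meta-llama/Llama-3.1-8B"   # assumed 8B base model

    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,                 # 4-bit weights keep the base model around 5-6 GB
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )

    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, quantization_config=bnb_config, device_map="auto"
    )
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

    lora_config = LoraConfig(
        r=16, lora_alpha=32, lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],   # adapt only the attention projections
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()         # only a small fraction of weights is trained
    ```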
  8. ChatQA is a new family of conversational question-answering (QA) models developed by NVIDIA AI. These models employ a two-stage instruction tuning method that significantly improves zero-shot conversational QA results from large language models (LLMs). The ChatQA-70B variant has demonstrated superior performance compared to GPT-4 across multiple conversational QA datasets.
    2024-01-24 by klotz
  9. 2024-01-20 by klotz
  10. Windows only
    2024-01-11 by klotz
