klotz: sft*


  1. The article introduces a new approach to language modeling called test-time scaling, which improves performance by spending additional compute at inference time. The authors present a method built on a small curated dataset and a technique called budget forcing that controls test-time compute, letting the model double-check its answers and extend its reasoning. Applied to the Qwen2.5-32B-Instruct language model, the approach yields significant improvements on competition math questions.
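The budget-forcing idea described above can be sketched as a decoding control loop. This is a minimal illustration, not the article's implementation: all names (`END_OF_THINKING`, `budget_forced_decode`, the toy generator) are assumptions, and a real system would call an LLM instead of the stub.

```python
# Hedged sketch of budget forcing as a decoding control loop.
# Idea: if the model tries to end its reasoning before a minimum token
# budget is spent, append "Wait" and keep decoding; stop once the
# maximum budget is reached. Names here are illustrative only.

END_OF_THINKING = "</think>"  # assumed delimiter, not from the article

def budget_forced_decode(generate_step, min_tokens, max_tokens):
    """generate_step(trace) -> next token string; a stand-in for an LLM call."""
    trace = []
    while len(trace) < max_tokens:
        tok = generate_step(trace)
        if tok == END_OF_THINKING:
            if len(trace) < min_tokens:
                trace.append("Wait")  # force the model to keep reasoning
                continue
            break  # budget satisfied; allow the model to stop
        trace.append(tok)
    return trace

# Toy generator: "thinks" for 3 tokens, then repeatedly tries to stop.
def toy_step(trace):
    if len(trace) >= 3 and trace[-1] != "Wait":
        return END_OF_THINKING
    return "step"

out = budget_forced_decode(toy_step, min_tokens=6, max_tokens=10)
```

With these toy settings the loop injects "Wait" twice before the stop is accepted, mimicking how budget forcing extends a reasoning trace past the model's first attempt to finish.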

  2. Trained primarily on GPT-4-generated data, supplemented with high-quality open datasets from the AI field, this model performs strongly across a wide range of tasks. It is released as a novel SFT + DPO version, and for those who prefer a different approach, an SFT-only version is also available.

    2024-01-26 by klotz
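For reference, the DPO stage mentioned above optimizes a preference loss over (chosen, rejected) response pairs. The sketch below shows the per-pair loss under stated assumptions (log-probabilities are given as scalars; `beta` is the usual temperature hyperparameter); it is not this model's training code.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Direct Preference Optimization loss for one preference pair:
    -log sigmoid(beta * ((logp_c - ref_c) - (logp_r - ref_r)))."""
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# When the policy matches the reference, the margin is 0 and the
# loss equals log(2); favoring the chosen response lowers the loss.
```

The SFT-only variant skips this stage entirely, training only on supervised input-output pairs.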
  3. ChatQA is a new family of conversational question-answering (QA) models developed by NVIDIA AI. These models employ a two-stage instruction-tuning method that significantly improves zero-shot conversational QA results from large language models (LLMs). The ChatQA-70B variant has demonstrated superior performance to GPT-4 across multiple conversational QA datasets.

    2024-01-24 by klotz
  4. 2024-01-17 by klotz


About - Propulsed by SemanticScuttle