klotz: fine-tuning* + llm*

  1. In this tutorial, learn how to improve the performance of large language models (LLMs) with proxy tuning, an approach that adapts a large model's behavior by tuning smaller models instead of fine-tuning the large model directly.
    2024-05-11 by klotz
  2. - Proxy fine-tuning is a method for improving large pre-trained language models without directly accessing their weights.
    - It operates on top of black-box LLMs, using only their output predictions.
    - The approach combines elements of retrieval-based techniques, fine-tuning, and domain-specific adaptation.
    - Proxy fine-tuning can match the performance of heavily tuned large models while tuning only smaller ones (see the first sketch after this list).
  3. Generate instruction datasets for fine-tuning large language models (LLMs) from documents, using lightweight libraries (see the second sketch after this list).
  4. An efficient method for fine-tuning LLMs using LoRA and QLoRA, making it possible to train them even on consumer hardware (see the third sketch after this list).
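
First, a minimal sketch of the logit arithmetic behind proxy tuning (items 1 and 2 above), assuming three causal LMs that share a vocabulary. The model names are illustrative and the fine-tuned expert checkpoint path is hypothetical; the bookmarked articles may use different models.

```python
# Sketch of proxy tuning: steer a large "black-box" model by adding the
# difference between a small fine-tuned expert and its untuned counterpart
# to the large model's next-token logits. The large model's weights are
# never touched; only its predictions are used.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2-xl")
base = AutoModelForCausalLM.from_pretrained("gpt2-xl")     # large, untouched
antiexpert = AutoModelForCausalLM.from_pretrained("gpt2")  # small, untuned
# Hypothetical path to a small fine-tuned checkpoint (placeholder):
expert = AutoModelForCausalLM.from_pretrained("path/to/tuned-gpt2")

ids = tokenizer("Proxy tuning in one sentence:", return_tensors="pt").input_ids

with torch.no_grad():
    # Shift the base distribution by what fine-tuning changed in the
    # small model: base + (expert - antiexpert).
    logits = (
        base(ids).logits[:, -1, :]
        + expert(ids).logits[:, -1, :]
        - antiexpert(ids).logits[:, -1, :]
    )

print(tokenizer.decode(logits.argmax(dim=-1)))
```

Decoding proceeds token by token with this combined distribution, which is what lets a small tuned model approximate the effect of tuning the large one.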
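Second, a hedged sketch of the document-to-instruction-dataset pattern from item 3. The prompt template and the `generate` callable are assumptions standing in for whatever library and API the bookmarked article actually uses.

```python
# Sketch: turn document passages into instruction/response pairs by
# prompting an LLM. `generate(prompt) -> str` is any completion callable.
import json

PROMPT = (
    "Read the passage below and write one instruction a user might give, "
    "followed by the answer, as JSON with keys 'instruction' and "
    "'response'.\n\nPassage:\n{passage}"
)

def make_pairs(passages, generate):
    """Build a small instruction dataset from raw document passages."""
    pairs = []
    for passage in passages:
        raw = generate(PROMPT.format(passage=passage))
        try:
            pairs.append(json.loads(raw))  # keep only well-formed pairs
        except json.JSONDecodeError:
            continue  # skip outputs that are not valid JSON
    return pairs
```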
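Third, a minimal sketch of the LoRA setup from item 4 using the `peft` library; the base model and hyperparameters are illustrative, not from the bookmarked article. For QLoRA, the base model would first be loaded in 4-bit (e.g. with `BitsAndBytesConfig(load_in_4bit=True)`) before attaching the adapters.

```python
# Sketch: wrap a causal LM with LoRA adapters so that only the small
# low-rank update matrices are trained, not the full model weights.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

config = LoraConfig(
    r=8,               # rank of the low-rank update matrices
    lora_alpha=16,     # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

Because only the adapter parameters receive gradients, memory use drops enough to train on consumer GPUs, which is the point of the bookmarked method.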
