klotz: fine-tuning

Bookmarks on this page are managed by an admin user.


  1. An efficient method for fine-tuning LLMs using LoRA and QLoRA, making it possible to train them even on consumer hardware.
  2. Generate instruction datasets for fine-tuning Large Language Models (LLMs) using lightweight libraries and documents.
  3. A tutorial on improving the performance of large language models (LLMs) with a proxy tuning approach, which enables more efficient fine-tuning without modifying the base model itself.
    2024-05-11 by klotz
  4. - Proxy fine-tuning is a method to improve large pre-trained language models without directly accessing their weights.
    - It operates on top of black-box LLMs by utilizing only their predictions.
    - The approach combines elements of retrieval-based techniques, fine-tuning, and domain-specific adaptations.
    - Proxy fine-tuning can match the performance of heavily tuned large models by tuning only smaller models.
  5. DocLLM is a lightweight extension to traditional LLMs for reasoning over visual documents, considering both textual semantics and spatial layout. It avoids expensive image encoders and focuses on bounding box information. It outperforms SotA LLMs on 14 out of 16 datasets across all tasks and generalizes well to previously unseen datasets.
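The LoRA idea behind bookmark 1 can be sketched in a few lines. Instead of updating a full weight matrix W, LoRA trains two low-rank factors A and B and adds their scaled product to the frozen base weight; QLoRA additionally stores the frozen W in quantized form. The sketch below is illustrative only (the dimensions, rank, and scaling are arbitrary choices, not taken from the bookmarked article):

```python
import numpy as np

# Minimal LoRA sketch: W is the frozen pretrained weight (d_out x d_in);
# only A (r x d_in) and B (d_out x r) are trainable, so the number of
# trainable parameters drops from d_in*d_out to r*(d_in + d_out).
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 16, 16, 2, 4

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                # trainable, zero init

def lora_forward(x):
    # Base path plus the scaled low-rank update, as in LoRA.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B initialized to zero, the adapter starts as an exact no-op,
# so training begins from the pretrained model's behavior.
assert np.allclose(lora_forward(x), W @ x)
```

Here the adapter trains r*(d_in + d_out) = 64 parameters instead of the 256 in the full matrix, which is why LoRA-style methods fit on consumer hardware.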
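The proxy fine-tuning bullets in item 4 can be illustrated with toy logits. The usual formulation of proxy tuning adds, to the large model's next-token logits, the difference between a tuned and an untuned small model's logits, steering the black-box model without touching its weights. The numbers below are invented toy values, and the helper function name is hypothetical:

```python
import numpy as np

def proxy_tuned_logits(base_logits, tuned_small_logits, untuned_small_logits):
    # Shift the large model's logits by the offset the small expert
    # learned during fine-tuning; the large model's weights stay frozen.
    return base_logits + (tuned_small_logits - untuned_small_logits)

# Toy next-token logits over a 4-token vocabulary.
base = np.array([2.0, 1.0, 0.5, 0.1])         # large black-box model
small_tuned = np.array([0.5, 2.5, 0.2, 0.1])  # small model after fine-tuning
small_base = np.array([0.5, 0.5, 0.2, 0.1])   # same small model, untuned

steered = proxy_tuned_logits(base, small_tuned, small_base)
# The +2.0 offset on token 1 moves the large model's preference
# from token 0 to token 1.
print(steered)                 # [2.  3.  0.5 0.1]
print(int(np.argmax(steered)))  # 1
```

Only the small model's predictions are needed at decode time, which is what makes the approach usable on top of black-box LLMs.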


About - Propulsed by SemanticScuttle