klotz: large language models* + fine-tuning*


  1. The post discusses the feasibility of fine-tuning an encoder-decoder model to translate Egyptian Middle Kingdom hieroglyphics into English. The author suggests that, with sufficient training data and a tokenizer that includes Egyptian characters, the model could learn to interpret hieroglyphics fluently. Commenters mention plugins and the knowledge already present in existing models as alternatives to fine-tuning.
  2. A list of 13 open-source tools for building and managing production-ready AI applications. They cover various aspects of AI development, including LLM tool integration, vector databases, RAG pipelines, model training and deployment, LLM routing, data pipelines, AI agent monitoring, LLM observability, and AI app development.
    1. Composio - Seamless integration of tools with LLMs.
    2. Weaviate - AI-native vector database for AI apps.
    3. Haystack - Framework for building efficient RAG pipelines.
    4. LitGPT - Pretrain, fine-tune, and deploy models at scale.
    5. DSPy - Framework for programming LLMs.
    6. Portkey's Gateway - Reliably route to 200+ LLMs with one API.
    7. Airbyte - Reliable and extensible open-source data pipeline.
    8. AgentOps - Agent observability and monitoring.
    9. Arize AI's Phoenix - LLM observability and evaluation.
    10. vLLM - Easy, fast, and cheap LLM serving for everyone.
    11. Vercel AI SDK - Easily build AI-powered products.
    12. LangGraph - Build language agents as graphs.
    13. Taipy - Build AI apps in Python.
  3. This article provides a comprehensive guide to fine-tuning the Llama 3.1 language model with Unsloth for parameter-efficient training. It covers supervised fine-tuning, LoRA, and QLoRA, along with practical steps for training on a high-quality dataset (a minimal LoRA/QLoRA setup is sketched after this list).
  4. This article provides a step-by-step guide on fine-tuning the Llama 3 language model for customer service use cases. It covers the process of data preparation, fine-tuning techniques, and the benefits of leveraging LLMs in customer service applications.
  5. Learn how to fine-tune large language models like Llama 3 for function calling, enabling them to interact with external tools and APIs for tasks such as web search and math operations (an example training record appears after this list).
  6. This guide demonstrates how to execute end-to-end LLM workflows for developing and productionizing LLMs at scale. It covers data preprocessing, fine-tuning, evaluation, and serving.
  7. This post discusses a study finding that refusal behavior in language models is mediated by a single direction in the model's residual stream. The study presents an intervention that bypasses refusal by ablating this direction, and shows that adding the direction back in induces refusal. The work is part of a scholars program, with more detail in a forthcoming paper (the ablation is sketched in code after this list).
  8. This article announces a comprehensive course on fine-tuning large language models (LLMs), offered on the freeCodeCamp.org YouTube channel. Developed by Krish Naik, the course covers topics such as QLoRA, LoRA, quantization with Llama 2, Gradient, and the Google Gemma model, and aims to help learners deepen their understanding of machine learning and artificial intelligence.
  9. In this tutorial, learn how to improve the performance of large language models (LLMs) with a proxy-tuning approach, which adapts a model through its predictions rather than by fine-tuning its weights directly.
    2024-05-11 by klotz
  10. - Proxy fine-tuning is a method to improve large pre-trained language models without directly accessing their weights.
    - It operates on top of black-box LLMs by utilizing only their predictions.
    - The approach combines elements of retrieval-based techniques, fine-tuning, and domain-specific adaptations.
    - Proxy fine-tuning can match the performance of heavily tuned large models by tuning only smaller models (the decoding-time logit arithmetic is sketched below).
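
To make the fine-tuning entries above concrete, here is a minimal QLoRA-style setup using the Hugging Face transformers and peft libraries rather than the Unsloth workflow that items 3 and 4 describe; the checkpoint name, target modules, and hyperparameters are illustrative assumptions, not values taken from the linked guides.

```python
# Minimal LoRA/QLoRA sketch: freeze a 4-bit base model and train small adapters.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "meta-llama/Meta-Llama-3.1-8B"  # assumption: any causal LM checkpoint works here

# Load the base model in 4-bit (the "Q" in QLoRA) so it fits on a single GPU.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

# Attach small trainable LoRA adapters to the attention projections;
# the frozen, quantized base weights are never updated.
model = prepare_model_for_kbit_training(model)
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of all parameters

# From here, training proceeds with an ordinary Trainer/SFTTrainer loop over an
# instruction-formatted dataset; the linked guides use Unsloth for the same steps.
```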
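
For the function-calling bookmark (item 5), the training data pairs user requests with structured tool calls. The record below is a hypothetical example of that shape; the exact field names depend on the chat template and dataset format, so treat them as assumptions.

```python
# Hypothetical shape of one function-calling training record: the model learns to
# emit a structured tool call, read the tool's result, and then answer in text.
example = {
    "messages": [
        {"role": "user", "content": "What is 23 * 47?"},
        {"role": "assistant", "tool_calls": [
            {"name": "calculator", "arguments": {"expression": "23 * 47"}}
        ]},
        {"role": "tool", "name": "calculator", "content": "1081"},
        {"role": "assistant", "content": "23 * 47 = 1081."},
    ]
}
```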
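
The refusal-direction study (item 7) ablates a single direction from the residual stream. The sketch below shows one straightforward way to do that with a PyTorch forward hook on a Hugging Face causal LM; it is not the study's code, and the refusal_dir vector is assumed to have been computed separately (e.g., from mean activation differences on harmful vs. harmless prompts).

```python
# Directional ablation sketch: remove the component of every residual-stream
# activation along a given unit vector, so the model can no longer move along it.
import torch

def make_ablation_hook(direction: torch.Tensor):
    d = direction / direction.norm()  # unit vector in residual-stream space
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        proj = (hidden @ d).unsqueeze(-1) * d   # h_proj = (h . d) d
        hidden = hidden - proj                  # h <- h - h_proj
        return (hidden, *output[1:]) if isinstance(output, tuple) else hidden
    return hook

# Usage (assuming a loaded transformers causal LM and a precomputed direction):
# for layer in model.model.layers:
#     layer.register_forward_hook(make_ablation_hook(refusal_dir.to(model.device)))
```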
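
Items 9 and 10 describe proxy tuning as operating only on model predictions. The sketch below shows the decoding-time logit arithmetic usually associated with that idea: add the offset between a small tuned "expert" and its untuned counterpart to the large model's logits. The function and the alpha scaling knob are illustrative, not code from the tutorial.

```python
# Proxy-tuning sketch: steer a large, untuned model with the logit offset
# between a small tuned model and the same small model before tuning.
import torch

def proxy_tuned_logits(large_logits: torch.Tensor,
                       small_tuned_logits: torch.Tensor,
                       small_base_logits: torch.Tensor,
                       alpha: float = 1.0) -> torch.Tensor:
    """Combine next-token logits: large + alpha * (tuned_small - untuned_small)."""
    return large_logits + alpha * (small_tuned_logits - small_base_logits)

# At each decoding step, run all three models on the same prefix, combine their
# last-position logits as above, then sample (or argmax) from the softmax result.
```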


