- Proxy fine-tuning is a method to improve large pre-trained language models without directly accessing their weights.
- It operates on top of black-box LLMs by utilizing only their predictions.
- The approach combines elements of retrieval-based techniques, fine-tuning, and domain-specific adaptations.
- Proxy fine-tuning can match the performance of heavily-tuned large models while tuning only smaller models.
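The decoding-time arithmetic commonly described for proxy-tuning can be sketched in a few lines: the large base model's next-token logits are shifted by the difference between a small tuned "expert" and its untuned counterpart. The toy logit values below are purely illustrative, and this is a sketch of the core idea, not any particular implementation.

```python
import numpy as np

def proxy_tuned_logits(base, expert, antiexpert):
    """Shift the large base model's next-token logits by the difference
    between a small tuned expert and its untuned counterpart."""
    return base + (expert - antiexpert)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

# Toy next-token logits over a 4-token vocabulary (illustrative numbers).
base       = np.array([2.0, 1.0, 0.5, 0.1])  # large, untuned model
expert     = np.array([0.5, 2.5, 0.2, 0.1])  # small, fine-tuned model
antiexpert = np.array([1.5, 0.5, 0.2, 0.1])  # same small model, untuned

steered = proxy_tuned_logits(base, expert, antiexpert)
probs = softmax(steered)
next_token = int(np.argmax(probs))  # token favored after steering
```

Note that only the small model pair needs fine-tuning; the large model is queried as a black box for its logits, which is what makes the approach usable without access to the large model's weights.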
- Offers standardization, governance, simplified troubleshooting, and reusability in ML application development.
- Integrations with vector databases and LLM providers to support new applications; provides tutorials on integrating them.
This article provides a beginner-friendly introduction to Large Language Models (LLMs) and explains the key concepts in a clear and organized way.
AI Helps Make Web Scraping Faster And Easier: Scrapegraph-ai is a new tool that uses large language models (LLMs) to automate the process of web scraping and data processing.
A Service Development Kit that uses Terraform, AWS ECS, Rust, Actix, Postgres RDS, an LLM, RAG, and Cloudflare.
• A step-by-step guide on how to set up the service development kit, including creating an SSL certificate, setting up Terraform, and configuring Cloudflare.
• How Rust, the LLM, and RAG fit together in the service development kit.
• A beginner's guide to understanding Hugging Face Transformers, a library that provides access to thousands of pre-trained transformer models for natural language processing, computer vision, and more.
• The guide covers the basics of Hugging Face Transformers, including what it is, how it works, and how to use it, with a simple example of running Microsoft's Phi-2 LLM in a notebook.
• The guide is designed for non-technical individuals who want to understand open-source machine learning without prior knowledge of Python or machine learning.
LangChain has many advanced retrieval methods to help address these challenges. (1) Multi-representation indexing: create a document representation (like a summary) that is well-suited for retrieval (read about this using the Multi Vector Retriever in a blog post from last week). (2) Query transformation: this post reviews a few approaches to transforming human questions in order to improve retrieval. (3) Query construction: convert a human question into a particular query syntax or language, which will be covered in a future post.
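The query-transformation idea in (2) can be sketched without any particular framework: rewrite the user's question into several variants, retrieve for each variant, and take the deduplicated union of results. Everything below is illustrative: `rewrite_question` is a stand-in for an LLM call, and the keyword retriever and sample documents are toy stand-ins for a real vector store.

```python
def rewrite_question(question: str) -> list[str]:
    # Stand-in for an LLM call that produces paraphrases of the question.
    # A real system would prompt a model; these string rules are toy examples.
    return [
        question,
        question.lower().replace("how do i", "steps to"),
        f"background: {question}",
    ]

DOCS = [
    "steps to index documents for retrieval",
    "background: retrieval-augmented generation overview",
    "unrelated note about billing",
]

def keyword_retrieve(query: str, docs: list[str]) -> list[str]:
    # Toy retriever: return docs sharing any non-trivial word with the query.
    terms = {w for w in query.lower().split() if len(w) > 3}
    return [d for d in docs if terms & set(d.lower().split())]

def multi_query_retrieve(question: str, docs: list[str]) -> list[str]:
    # Retrieve for every rewritten variant and deduplicate the results.
    seen, results = set(), []
    for variant in rewrite_question(question):
        for doc in keyword_retrieve(variant, docs):
            if doc not in seen:
                seen.add(doc)
                results.append(doc)
    return results

hits = multi_query_retrieve("How do I index documents for retrieval?", DOCS)
```

The point of the pattern is that different phrasings of the same question surface different documents, so the union of retrievals is more robust than a single query against the raw user text.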
The Pipe is a multimodal-first tool for feeding files and web pages into vision-language models such as GPT-4V. It is best for LLM and RAG applications that need comprehensive textual and visual understanding across a wide range of data sources. The Pipe is available as a 24/7 hosted API at thepi.pe, or it can be set up locally so you can run the compute yourself.