Arch is an intelligent gateway for agents, built on Envoy Proxy, designed to securely handle prompts, integrate with APIs, and provide rich observability.
LiteLLM is a library for calling LLM (Large Language Model) APIs from multiple providers through a standardized format. It includes proxy-server features for load balancing and cost tracking, and offers various integrations for logging and observability.
Large Model Proxy is designed to make it easy to run multiple resource-heavy Large Models (LMs) on the same machine with a limited amount of VRAM and other resources.
Introduces proxy-tuning, a lightweight decoding-time algorithm that operates on top of black-box LMs to achieve the same end as direct tuning. The method tunes a smaller LM, then applies the difference between the predictions of the small tuned and untuned LMs to shift the original predictions of the larger untuned model in the direction of tuning, while retaining the benefits of larger-scale pretraining.
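The core of the decoding-time shift described above can be sketched in a few lines: add the logit difference between the small tuned and small untuned models to the large model's logits before sampling. This is a minimal illustration with made-up toy logits over a 4-token vocabulary, not real model outputs.

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over a logit vector
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def proxy_tuned_logits(base, small_tuned, small_untuned):
    # shift the large base model's logits by the tuning delta
    # observed between the small tuned and untuned models
    return base + (small_tuned - small_untuned)

# toy logits for one decoding step (illustrative values only)
base = np.array([2.0, 1.0, 0.5, 0.0])          # large untuned model
small_tuned = np.array([1.0, 3.0, 0.0, 0.0])   # small tuned model
small_untuned = np.array([1.0, 1.0, 0.0, 0.0]) # small untuned model

probs = softmax(proxy_tuned_logits(base, small_tuned, small_untuned))
# the tuning delta boosts token 1, so it now has the highest probability
```

In practice all three vocabularies must align, and the shifted logits are fed to the usual sampling or greedy-decoding step.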
In this tutorial, learn how to improve the performance of large language models (LLMs) with a proxy-tuning approach, which enables more efficient fine-tuning without updating the large model's weights.