A collection of Python examples demonstrating the use of Mistral.rs, a Rust library for working with Mistral models.
Utilities for Llama.cpp, OpenAI, Anthropic, and Mistral.rs: a collection of tools for interacting with various large language models. The code is written in Rust and includes functions for loading models, tokenization, prompting, text generation, and more.
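For a sense of the load/tokenize/generate workflow these utilities cover, here is a minimal sketch using the llama-cpp-python bindings as a stand-in; the model path and prompt are placeholders, and this is illustrative rather than the repository's own API.

```python
from llama_cpp import Llama

# Load a local GGUF model (path is a placeholder).
llm = Llama(model_path="./models/model.gguf", n_ctx=2048)

# Tokenization: llama-cpp-python expects bytes.
tokens = llm.tokenize(b"Hello, world")
print(f"{len(tokens)} tokens")

# Prompting / text generation.
out = llm("Q: What is Rust? A:", max_tokens=64, temperature=0.7)
print(out["choices"][0]["text"])
```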
Mistral.rs is a fast LLM inference platform supporting inference on a variety of devices, quantization, and easy integration via an OpenAI-API-compatible HTTP server and Python bindings. It supports the latest Llama and Phi models, as well as X-LoRA and LoRA adapters. The project aims to provide the fastest LLM inference platform possible.
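Because the server speaks the OpenAI API, the standard `openai` Python client can talk to it directly. Below is a minimal sketch; the port, model name, and API key are placeholders and assume a mistral.rs server is already running locally.

```python
from openai import OpenAI

# Point the OpenAI client at the local mistral.rs HTTP server
# (base URL and API key are placeholders for illustration).
client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="mistral",  # placeholder model name
    messages=[{"role": "user", "content": "Explain LoRA in one sentence."}],
    max_tokens=128,
)
print(resp.choices[0].message.content)
```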