A collection of Python examples demonstrating the use of Mistral.rs, a Rust library for working with Mistral models.
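As a rough sketch of what such an example can look like: mistral.rs can expose an OpenAI-compatible HTTP server, so a Python script can talk to it through the standard `openai` package. The port and model name below are assumptions for illustration, not values from the examples themselves.

```python
# Minimal sketch: querying a locally running mistral.rs server through its
# OpenAI-compatible API. The port (1234) and model name are assumptions;
# adjust them to match how the server was actually started.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # assumed address of the local server
    api_key="not-needed-for-local-use",
)

response = client.chat.completions.create(
    model="mistral",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize what mistral.rs does in one sentence."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```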
- Composio: Streamline agent development with tool integrations.
- Julep: Build stateful AI agents with efficient context management.
- E2B: Secure sandbox for AI execution with code interpreter capabilities.
- Camel-ai: Framework for building and studying multi-agent systems.
- CopilotKit: Integrate AI copilot features into React applications.
- Aider: AI-powered pair-programmer for code assistance and repo management.
- Haystack: Composable pipeline framework for RAG applications.
- Pgvectorscale: High-performance vector database extension for PostgreSQL.
- GPTCache: Semantic caching solution for reducing LLM costs.
- Mem0 (EmbedChain): Add persistent memory to LLMs for personalized interactions.
- FastEmbed: Fast and lightweight library for embedding generation.
- Instructor: Streamline LLM output validation and extraction of structured data.
- LiteLLM: Drop-in replacement for the OpenAI API, supporting various providers (see the sketch after this list).
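To make the "drop-in replacement" point concrete, here is a minimal LiteLLM sketch. The model strings are illustrative and assume the corresponding provider API keys are configured.

```python
# Minimal sketch: LiteLLM exposes a single completion() call that routes
# OpenAI-style messages to many providers. Model names are illustrative.
from litellm import completion

messages = [{"role": "user", "content": "Give me one sentence about vector databases."}]

# Same call shape, different backends (assumes the relevant API keys are set
# as environment variables, e.g. OPENAI_API_KEY / ANTHROPIC_API_KEY).
openai_reply = completion(model="gpt-4o-mini", messages=messages)
claude_reply = completion(model="claude-3-haiku-20240307", messages=messages)

print(openai_reply.choices[0].message.content)
print(claude_reply.choices[0].message.content)
```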
A GitHub Gist containing a Python script for text classification using the TxTail API.
Automates conversion of various file types and GitHub repositories into LLM-ready Markdown documents.
A mini Python-based tool designed to convert various types of files and GitHub repositories into LLM-ready Markdown documents with metadata, a table of contents, and consistent heading styles. It supports multiple file types, handles zip files, and integrates with GitHub.
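A rough, hypothetical sketch of the kind of transformation such a tool performs (this is not the tool's actual code): wrap the source text in YAML front-matter metadata and prepend a table of contents built from its headings.

```python
# Illustrative sketch only: convert a plain-text/Markdown file into an
# "LLM-ready" document with YAML front-matter metadata and a table of contents.
from pathlib import Path
from datetime import datetime, timezone

def to_llm_markdown(path: str) -> str:
    text = Path(path).read_text(encoding="utf-8")
    headings = [line.lstrip("#").strip() for line in text.splitlines() if line.startswith("#")]
    toc = "\n".join(f"- {h}" for h in headings) or "- (no headings found)"
    front_matter = (
        "---\n"
        f"source: {Path(path).name}\n"
        f"converted: {datetime.now(timezone.utc).isoformat()}\n"
        "---\n"
    )
    return f"{front_matter}\n## Table of Contents\n{toc}\n\n{text}"

if __name__ == "__main__":
    print(to_llm_markdown("README.md"))
```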
txtai is an open-source embeddings database for applications such as semantic search, LLM orchestration, and language model workflows. It lets users run vector search with SQL, create embeddings for text, audio, images, and video, and build pipelines powered by language models for question answering, transcription, translation, and more.
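A minimal sketch of that embeddings-database workflow; the model path and constructor arguments are assumptions to check against the txtai version in use.

```python
# Minimal sketch: index a few documents with txtai and run a semantic search.
# The model path and keyword arguments are assumptions; check txtai's docs
# for the exact options of your installed version.
from txtai import Embeddings

documents = [
    "txtai builds embeddings databases for semantic search",
    "LoRA freezes base weights and trains low-rank adapters",
    "PostgreSQL can store vectors with the pgvector extension",
]

embeddings = Embeddings(path="sentence-transformers/all-MiniLM-L6-v2", content=True)
embeddings.index(documents)

# Returns the closest stored documents with similarity scores.
for result in embeddings.search("what is semantic search?", limit=2):
    print(result["score"], result["text"])
```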
A lightweight codebase that enables memory-efficient and performant fine-tuning of Mistral's models. It is based on LoRA, a training paradigm in which most weights are frozen and only 1-2% of additional weights, in the form of low-rank matrix perturbations, are trained.
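To make the LoRA idea concrete, here is a toy PyTorch sketch (not the repository's code) of a frozen linear layer with a trainable low-rank perturbation.

```python
# Toy illustration of LoRA (not the mistral-finetune code): the base weight W
# is frozen, and only the low-rank factors A and B are trained, so the
# effective weight is W + (alpha / r) * B @ A.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad = False  # frozen pretrained weight
        self.lora_a = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the trainable low-rank perturbation.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling

layer = LoRALinear(4096, 4096)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable fraction: {trainable / total:.2%}")  # roughly 0.4% at this size
```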
Verba is an open-source application designed to offer an end-to-end, streamlined, and user-friendly interface for Retrieval-Augmented Generation (RAG) out of the box. It supports various RAG techniques, data types, and LLM providers, and offers Docker support and a fully customizable frontend.
A local LLM chatbot project that uses RAG to process PDF input files.
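A minimal, hypothetical sketch of that pipeline (the libraries, chunking strategy, and local endpoint are illustrative assumptions, not the project's actual stack): extract the PDF text, retrieve the most relevant chunks, and pass them to a local LLM.

```python
# Hypothetical RAG-over-PDF sketch; library choices and the local endpoint
# are assumptions for illustration only.
from pypdf import PdfReader
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from openai import OpenAI

def load_chunks(pdf_path: str, size: int = 800) -> list[str]:
    text = " ".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
    return [text[i:i + size] for i in range(0, len(text), size)]

def answer(question: str, chunks: list[str], top_k: int = 3) -> str:
    vectorizer = TfidfVectorizer().fit(chunks + [question])
    scores = cosine_similarity(vectorizer.transform([question]), vectorizer.transform(chunks))[0]
    context = "\n\n".join(chunks[i] for i in scores.argsort()[-top_k:])
    # Assumes a local OpenAI-compatible server from some inference runtime.
    client = OpenAI(base_url="http://localhost:8080/v1", api_key="local")
    reply = client.chat.completions.create(
        model="local-model",  # placeholder
        messages=[{"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"}],
    )
    return reply.choices[0].message.content

if __name__ == "__main__":
    print(answer("What is this document about?", load_chunks("input.pdf")))
```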
Scrapegraph-ai is a Python library for web scraping using AI. It provides a SmartScraper class that lets users extract information from websites using a prompt. The library uses LLMs from providers such as Ollama, OpenAI, Azure, Gemini, and others for information extraction.
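A minimal sketch of that prompt-driven extraction, assuming the SmartScraperGraph entry point and the config layout from the project's documented examples; the model, base URL, and source URL are placeholders.

```python
# Minimal sketch, assuming ScrapeGraphAI's SmartScraperGraph class and a local
# Ollama model; config keys and model names are assumptions to verify against
# the installed version.
from scrapegraphai.graphs import SmartScraperGraph

graph_config = {
    "llm": {
        "model": "ollama/mistral",            # placeholder local model
        "base_url": "http://localhost:11434", # assumed Ollama address
    },
}

scraper = SmartScraperGraph(
    prompt="List the project names and one-line descriptions on this page.",
    source="https://example.com/projects",    # placeholder URL
    config=graph_config,
)

result = scraper.run()  # returns the extracted information, typically as a dict
print(result)
```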