An open-source project offering a functional RAG UI for document QA, suitable for both end-users and developers. It supports various LLM providers, is customizable, and offers multi-modal QA, citations, and complex reasoning methods.
Mem0: The Memory Layer for Personalized AI. An intelligent, adaptive memory layer for Large Language Models (LLMs) that lets applications remember user preferences and context across sessions for more personalized responses.
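A minimal sketch of how such a memory layer is typically used, based on the `mem0ai` Python package's documented `Memory` class; an LLM provider key (e.g. `OPENAI_API_KEY`) is assumed to be configured, and exact method names and return shapes may vary across versions:

```python
from mem0 import Memory  # pip install mem0ai; assumes OPENAI_API_KEY is set

m = Memory()

# Store something worth remembering about a user; Mem0 extracts and indexes the memory.
m.add("I'm vegetarian and I'm allergic to peanuts.", user_id="alice")

# Later, pull back the memories most relevant to a new request to personalize the answer.
related = m.search("What should I cook for dinner tonight?", user_id="alice")
print(related)  # the stored dietary preferences rank highest for this query
```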
An extension for Oobabooga's Text-Generation Web UI that retrieves and adds web content to the context of prompts for more informative AI responses.
A walkthrough on building a Q&A pipeline with various tools and distributing it with ModelKits for collaboration.
This guide explains how to build and use knowledge graphs with R2R. It covers setup, a basic example, graph construction, navigation, querying, visualization, and advanced examples.
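The guide itself works through R2R's own graph APIs; as a library-agnostic illustration of the construct-navigate-query workflow it covers, here is a small sketch using `networkx` with made-up entities and relations (this is not R2R code):

```python
import networkx as nx

# Construction: extracted entities become nodes, relations become labeled edges.
triples = [
    ("Marie Curie", "won", "Nobel Prize in Physics"),
    ("Marie Curie", "worked_at", "University of Paris"),
    ("Pierre Curie", "married_to", "Marie Curie"),
]
graph = nx.DiGraph()
for head, relation, tail in triples:
    graph.add_edge(head, tail, relation=relation)

# Navigation: walk the outgoing edges from an entity.
for _, neighbor, data in graph.out_edges("Marie Curie", data=True):
    print(f"Marie Curie --{data['relation']}--> {neighbor}")

# Querying: find every entity within two hops of a target.
two_hop = nx.single_source_shortest_path_length(graph.to_undirected(), "Marie Curie", cutoff=2)
print(sorted(two_hop))
```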
R2R (RAG to Riches) is a platform designed to help developers build, scale, and manage user-facing Retrieval-Augmented Generation (RAG) applications. It bridges the gap between experimentation and deployment of state-of-the-art RAG applications by offering a complete platform with a containerized RESTful API. The platform includes features like multimodal ingestion, hybrid search, GraphRAG, user and document management, and observability/analytics.
#### Key Features
- **Multimodal Ingestion:** Supports a wide range of file types including .txt, .pdf, .json, .png, .mp3, and more.
- **Hybrid Search:** Combines semantic and keyword search with reciprocal rank fusion for improved relevance (see the sketch after this list).
- **GraphRAG:** Automatically extracts relationships and constructs knowledge graphs.
- **App Management:** Efficient management of documents and users with full authentication.
- **Observability:** Observe and analyze the performance of the RAG engine.
- **Configurable:** Uses intuitive configuration files for application provisioning.
- **Application:** Includes an open-source React+Next.js app with optional authentication, providing a GUI for interacting with the platform.
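The Hybrid Search feature refers to reciprocal rank fusion (RRF), which merges the semantic and keyword result lists by summing 1/(k + rank) per document. The sketch below illustrates the general technique, not R2R's internal implementation; k=60 is the conventional default rather than a documented R2R setting:

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists: score(d) = sum over lists of 1 / (k + rank(d))."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Example: a semantic (vector) ranking and a keyword (BM25-style) ranking.
semantic = ["doc_3", "doc_1", "doc_7"]
keyword = ["doc_1", "doc_9", "doc_3"]
print(reciprocal_rank_fusion([semantic, keyword]))  # doc_1 and doc_3 rise to the top
```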
A small Python-based tool that converts various file types and GitHub repositories into LLM-ready Markdown documents with metadata, a table of contents, and consistent heading styles. It handles ZIP archives and integrates with GitHub.
Verba is an open-source application designed to offer an end-to-end, streamlined, and user-friendly interface for Retrieval-Augmented Generation (RAG) out of the box. It supports various RAG techniques, data types, and LLM providers, and offers Docker support and a fully customizable frontend.
A local LLM chatbot project that uses RAG to answer questions over PDF input files.
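The description does not name the project's stack; as a rough sketch of the usual local RAG-over-PDF pipeline it implies, the snippet below assumes the `langchain-community` integrations, a sentence-transformers embedding model, and a locally served Ollama model, none of which are confirmed by the project itself (the PDF path is a placeholder):

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.llms import Ollama
from langchain_community.vectorstores import FAISS

# Load and chunk the PDF ("manual.pdf" is a placeholder path).
pages = PyPDFLoader("manual.pdf").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(pages)

# Embed the chunks locally and index them for similarity search.
store = FAISS.from_documents(chunks, HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2"))

# Retrieve the most relevant chunks and answer with a local model served by Ollama.
question = "What does the document say about installation?"
context = "\n\n".join(doc.page_content for doc in store.similarity_search(question, k=4))
llm = Ollama(model="llama3")
print(llm.invoke(f"Answer using only this context:\n{context}\n\nQuestion: {question}"))
```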
In this tutorial, we will build a RAG system with a self-querying retriever in the LangChain framework. This will enable us to filter the retrieved movies using metadata, thus providing more meaningful movie recommendations.
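A condensed sketch of the kind of self-querying setup the tutorial builds, using LangChain's `SelfQueryRetriever` (the `lark` package must be installed); the movie metadata fields, the Chroma vector store, and the OpenAI models here are illustrative assumptions, so the tutorial's own choices may differ:

```python
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# A toy movie corpus with filterable metadata.
docs = [
    Document(page_content="A thief plants ideas inside people's dreams.",
             metadata={"title": "Inception", "year": 2010, "genre": "sci-fi", "rating": 8.8}),
    Document(page_content="A rat with a gift for cooking takes over a Paris kitchen.",
             metadata={"title": "Ratatouille", "year": 2007, "genre": "animation", "rating": 8.0}),
]
vectorstore = Chroma.from_documents(docs, OpenAIEmbeddings())

# Describe the metadata so the LLM can translate natural-language constraints into filters.
metadata_field_info = [
    AttributeInfo(name="genre", description="The movie's genre", type="string"),
    AttributeInfo(name="year", description="Release year", type="integer"),
    AttributeInfo(name="rating", description="Rating on a 1-10 scale", type="float"),
]

retriever = SelfQueryRetriever.from_llm(
    llm=ChatOpenAI(model="gpt-4o-mini", temperature=0),
    vectorstore=vectorstore,
    document_contents="Brief plot summary of a movie",
    metadata_field_info=metadata_field_info,
)

# The retriever extracts the structured filter (rating > 8.5) from the question itself.
print(retriever.invoke("Recommend a sci-fi movie with a rating above 8.5"))
```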