Mem0: The Memory Layer for Personalized AI. It provides an intelligent, adaptive memory layer for Large Language Models (LLMs), enabling personalized AI experiences.
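To make the idea concrete, here is a minimal sketch of what a per-user memory layer does: store facts about a user and retrieve the most relevant ones for a new query. This is a toy illustration only (naive word-overlap scoring, not the Mem0 API, which uses embeddings and LLM-driven memory management).

```python
from collections import defaultdict

class SimpleMemory:
    """Toy per-user memory layer (illustrative; NOT the Mem0 API)."""

    def __init__(self):
        self._store = defaultdict(list)  # user_id -> remembered facts

    def add(self, user_id, text):
        self._store[user_id].append(text)

    def search(self, user_id, query, top_k=3):
        # Rank stored facts by naive word overlap with the query.
        q_words = set(query.lower().split())
        scored = [(len(q_words & set(t.lower().split())), t)
                  for t in self._store[user_id]]
        scored.sort(key=lambda s: s[0], reverse=True)
        return [t for score, t in scored[:top_k] if score > 0]

mem = SimpleMemory()
mem.add("alice", "Alice prefers vegetarian food")
mem.add("alice", "Alice lives in Berlin")
print(mem.search("alice", "What food does Alice like?"))
# → ['Alice prefers vegetarian food', 'Alice lives in Berlin']
```

Retrieved memories would then be prepended to the LLM prompt so responses reflect what the system knows about the user.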
This JavaScript guide demonstrates the basics of E2B: connecting to an LLM, generating Python code, and executing it securely in an E2B sandbox.
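The flow the guide covers can be sketched in a few lines (shown here in Python for brevity): ask an LLM for code, then execute the generated code and capture its output. The LLM call is stubbed out, and `exec()` is used as a stand-in only; it is NOT a secure sandbox, which is exactly the gap E2B fills by running the code in an isolated cloud environment.

```python
import contextlib
import io

def fake_llm(prompt):
    """Stand-in for a real LLM call; returns Python code as text."""
    return "print(sum(range(1, 11)))"

def run_code(code):
    """Execute generated code and capture its stdout.
    NOTE: exec() offers no isolation; E2B runs code in a sandboxed VM."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})
    return buf.getvalue().strip()

code = fake_llm("Sum the integers from 1 to 10 in Python")
print(run_code(code))  # → 55
```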
- Composio: Streamline agent development with tool integrations.
- Julep: Build stateful AI agents with efficient context management.
- E2B: Secure sandbox for AI execution with code interpreter capabilities.
- Camel-ai: Framework for building and studying multi-agent systems.
- CopilotKit: Integrate AI copilot features into React applications.
- Aider: AI-powered pair-programmer for code assistance and repo management.
- Haystack: Composable pipeline framework for RAG applications.
- Pgvectorscale: High-performance vector database extension for PostgreSQL.
- GPTCache: Semantic caching solution for reducing LLM costs.
- Mem0 (EmbedChain): Add persistent memory to LLMs for personalized interactions.
- FastEmbed: Fast and lightweight library for embedding generation.
- Instructor: Streamline LLM output validation and extraction of structured data.
- LiteLLM: Drop-in replacement for the OpenAI API, supporting various providers.
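Semantic caching, the idea behind GPTCache above, is worth a quick sketch: instead of requiring an exact string match, the cache returns a stored answer when a new query is similar enough to a previously answered one, saving an LLM call. The version below is a self-contained toy using bag-of-words cosine similarity (GPTCache itself uses embedding models and vector stores).

```python
import math
from collections import Counter

def _vec(text):
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    """Toy semantic cache: serve a stored answer for near-duplicate queries."""

    def __init__(self, threshold=0.6):
        self.threshold = threshold
        self.entries = []  # (query_vector, answer)

    def put(self, query, answer):
        self.entries.append((_vec(query), answer))

    def get(self, query):
        qv = _vec(query)
        best = max(self.entries, key=lambda e: _cosine(qv, e[0]), default=None)
        if best and _cosine(qv, best[0]) >= self.threshold:
            return best[1]  # cache hit: no LLM call needed
        return None         # cache miss: fall through to the LLM

cache = SemanticCache()
cache.put("what is the capital of france", "Paris")
print(cache.get("what is the capital of france?"))  # near-duplicate → Paris
```

The threshold trades recall for precision: set it too low and unrelated queries get wrong cached answers; too high and only exact duplicates hit.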
A GitHub Gist containing a Python script for text classification using the TxTail API.
A walkthrough on building a Q&A pipeline with various tools and distributing it with ModelKits for collaboration.
This guide explains how to build and use knowledge graphs with R2R. It covers setup, a basic example, graph construction, navigation, querying, visualization, and advanced examples.
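At its core, a knowledge graph is a set of subject-predicate-object triples that can be navigated and queried by fixing any combination of the three fields. The snippet below illustrates just that core idea with a plain list of triples; it is not the R2R API, which extracts such relationships automatically.

```python
# Toy knowledge graph as subject-predicate-object triples.
triples = [
    ("Marie Curie", "won", "Nobel Prize in Physics"),
    ("Marie Curie", "born_in", "Warsaw"),
    ("Pierre Curie", "married_to", "Marie Curie"),
]

def query(subject=None, predicate=None, obj=None):
    """Return triples matching any combination of fixed fields."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

print(query(subject="Marie Curie"))   # all facts about Marie Curie
print(query(predicate="married_to"))  # all marriage relationships
```

GraphRAG systems answer multi-hop questions by chaining such lookups, e.g. following `married_to` and then `won` edges.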
R2R (RAG to Riches) is a platform designed to help developers build, scale, and manage user-facing Retrieval-Augmented Generation (RAG) applications. It bridges the gap between experimentation and deployment of state-of-the-art RAG applications by offering a complete platform with a containerized RESTful API. The platform includes features like multimodal ingestion, hybrid search, GraphRAG, user and document management, and observability/analytics.
#### Key Features
- **Multimodal Ingestion:** Supports a wide range of file types including .txt, .pdf, .json, .png, .mp3, and more.
- **Hybrid Search:** Combines semantic and keyword search with reciprocal rank fusion for improved relevancy.
- **Graph RAG:** Automatically extracts relationships and constructs knowledge graphs.
- **App Management:** Efficient management of documents and users with full authentication.
- **Observability:** Allows performance analysis and observation of the RAG engine.
- **Configurable:** Uses intuitive configuration files for application provisioning.
- **Application:** Includes an open-source React+Next.js app with optional authentication for GUI interaction.
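The reciprocal rank fusion mentioned under Hybrid Search is a simple, well-known algorithm: each document scores 1 / (k + rank) in every result list it appears in, and the scores are summed. A minimal sketch (this is the standard RRF formula with the conventional k = 60, not R2R's internal implementation):

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked result lists: each document contributes
    1 / (k + rank) per list it appears in (rank is 1-based)."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

semantic = ["doc_a", "doc_b", "doc_c"]   # ranked by vector similarity
keyword  = ["doc_b", "doc_d", "doc_a"]   # ranked by keyword match
print(reciprocal_rank_fusion([semantic, keyword]))
# → ['doc_b', 'doc_a', 'doc_d', 'doc_c']
```

Because RRF only uses ranks, not raw scores, it fuses lists from incomparable retrievers (cosine similarity vs. BM25) without any score normalization.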
Automates conversion of various file types and GitHub repositories into LLM-ready Markdown documents.
A mini Python-based tool designed to convert various file types and GitHub repositories into LLM-ready Markdown documents with metadata, a table of contents, and consistent heading styles. It supports multiple file types, handles ZIP archives, and integrates with GitHub.
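The conversion step can be sketched as: read the source file, prepend a metadata block, and add a top-level heading before the content. The function and output layout below are illustrative assumptions, not the tool's actual format.

```python
from datetime import date
from pathlib import Path

def to_llm_markdown(path):
    """Wrap a plain-text file as LLM-ready Markdown: a YAML-style
    metadata block, a top-level heading, then the original content.
    (A sketch of the idea, not the tool's real output format.)"""
    text = Path(path).read_text(encoding="utf-8")
    name = Path(path).stem
    return "\n".join([
        "---",
        f"source: {path}",
        f"converted: {date.today().isoformat()}",
        "---",
        f"# {name}",
        "",
        text,
    ])

Path("notes.txt").write_text("Remember to water the plants.", encoding="utf-8")
print(to_llm_markdown("notes.txt"))
```

The metadata block lets downstream RAG pipelines track provenance (which file and when) for every chunk they ingest.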
txtai is an open-source embeddings database for applications such as semantic search, LLM orchestration, and language model workflows. It lets users perform vector search with SQL, create embeddings for text, audio, images, and video, and run language-model-powered pipelines for question answering, transcription, translation, and more.
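The embed-then-search loop at the heart of an embeddings database looks like this in miniature: turn every document into a vector, turn the query into a vector the same way, and rank documents by cosine similarity. The toy embedding below is a normalized bag-of-words vector standing in for the transformer models txtai actually uses; the class is illustrative, not the txtai API.

```python
import math

class TinyIndex:
    """Toy vector index illustrating the embed-and-search idea
    behind txtai (the real library uses transformer embeddings)."""

    def index(self, docs):
        self.docs = list(docs)
        vocab = sorted({w for d in docs for w in d.lower().split()})
        self.vocab = {t: i for i, t in enumerate(vocab)}
        self.vectors = [self._embed(d) for d in self.docs]

    def _embed(self, text):
        # Normalized bag-of-words vector over the corpus vocabulary.
        v = [0.0] * len(self.vocab)
        for tok in text.lower().split():
            if tok in self.vocab:
                v[self.vocab[tok]] += 1.0
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v] if n else v

    def search(self, query, limit=1):
        q = self._embed(query)
        # Dot product of unit vectors == cosine similarity.
        scores = [sum(a * b for a, b in zip(vec, q)) for vec in self.vectors]
        order = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
        return [(self.docs[i], scores[i]) for i in order[:limit]]

idx = TinyIndex()
idx.index([
    "the cat sat on the mat",
    "stock markets fell sharply",
    "a dog chased the cat",
])
print(idx.search("cat on a mat"))
```

Real embeddings capture meaning beyond shared tokens, so "feline on a rug" would also match the first document, which word overlap alone cannot do.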