Unblocked is an AI tool that augments code with knowledge from systems like GitHub, Slack, Confluence, and Jira to provide quick, accurate answers about your application.
IncarnaMind enables chatting with personal documents (PDF, TXT) using Large Language Models (LLMs) like GPT. It uses a Sliding Window Chunking mechanism and an Ensemble Retriever for efficient querying.
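The sliding-window idea behind IncarnaMind's chunking can be sketched as follows; the window and overlap sizes here are illustrative assumptions, not the project's actual defaults:

```python
def sliding_window_chunks(text: str, window: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks so retrieval can still match
    content that straddles a chunk boundary."""
    if overlap >= window:
        raise ValueError("overlap must be smaller than the window")
    step = window - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + window])
        if start + window >= len(text):
            break  # last window already covers the end of the text
    return chunks
```

Because consecutive chunks share `overlap` characters, a sentence cut off at one chunk's end reappears at the start of the next, which is what makes windowed chunking retrieval-friendly.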
LlamaCards is a web application that provides a dynamic interface for interacting with Large Language Models (LLMs) in real-time.
GitHub Copilot now offers text completion for pull request descriptions. This beta feature helps developers write more effective descriptions by suggesting completions based on context.
Large Model Proxy is designed to make it easy to run multiple resource-heavy Large Models (LMs) on the same machine with a limited amount of VRAM or other resources.
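The core idea of such a proxy (load a model on demand, evicting others when the resource budget would be exceeded) can be sketched roughly like this; the class and its LRU accounting are hypothetical, not Large Model Proxy's actual implementation:

```python
class ModelProxy:
    """Toy resource manager: keeps models 'loaded' only while total VRAM
    use stays under a budget, evicting least-recently-used ones first."""

    def __init__(self, vram_budget_gb: float):
        self.budget = vram_budget_gb
        self.loaded: dict[str, float] = {}  # name -> VRAM cost; insertion order = LRU order

    def request(self, name: str, vram_gb: float) -> str:
        if name in self.loaded:
            self.loaded[name] = self.loaded.pop(name)  # mark as most recently used
            return name
        # Evict least-recently-used models until the new one fits.
        while self.loaded and sum(self.loaded.values()) + vram_gb > self.budget:
            evicted = next(iter(self.loaded))
            del self.loaded[evicted]  # a real proxy would unload the model here
        self.loaded[name] = vram_gb
        return name
```

A real proxy would also forward inference requests and wait for in-flight work before unloading, but the eviction loop is the essential mechanism.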
- Composio: Streamline agent development with tool integrations.
- Julep: Build stateful AI agents with efficient context management.
- E2B: Secure sandbox for AI execution with code interpreter capabilities.
- Camel-ai: Framework for building and studying multi-agent systems.
- CopilotKit: Integrate AI copilot features into React applications.
- Aider: AI-powered pair-programmer for code assistance and repo management.
- Haystack: Composable pipeline framework for RAG applications.
- Pgvectorscale: High-performance vector database extension for PostgreSQL.
- GPTCache: Semantic caching solution for reducing LLM costs.
- Mem0 (EmbedChain): Add persistent memory to LLMs for personalized interactions.
- FastEmbed: Fast and lightweight library for embedding generation.
- Instructor: Streamline LLM output validation and extraction of structured data.
- LiteLLM: Drop-in replacement for OpenAI models, supporting various providers.
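Several of the tools above (GPTCache in particular) center on semantic caching: returning a stored answer when a new prompt is close enough in embedding space to a previously answered one. A minimal sketch of the idea, using a toy bag-of-words cosine similarity as a stand-in for a real embedding model:

```python
from collections import Counter
from math import sqrt

def _similarity(a: str, b: str) -> float:
    """Cosine similarity over word counts -- a crude proxy for embeddings."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

class SemanticCache:
    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.entries: list[tuple[str, str]] = []  # (prompt, answer) pairs

    def get(self, prompt: str):
        for cached_prompt, answer in self.entries:
            if _similarity(prompt, cached_prompt) >= self.threshold:
                return answer  # cache hit: the LLM call is skipped entirely
        return None

    def put(self, prompt: str, answer: str):
        self.entries.append((prompt, answer))
```

Production systems replace the word-count similarity with dense embeddings and a vector index, but the hit/miss logic is the same: near-duplicate prompts never reach the paid LLM API.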
Chat with GitHub Copilot in Emacs!
An extension that automatically unloads and reloads your model, freeing up VRAM for other programs.
A GitHub Gist containing a Python script for text classification using the TxTail API.
LLooM is a tool that uses raw LLM logits to weave completion threads in a probabilistic way. Its README includes instructions for using LLooM with various backends, such as vLLM, llama.cpp, and OpenAI, and explains its parameters and configurations.
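The "weaving threads from logits" idea can be illustrated with a toy beam-style expansion over a stand-in logits function; this is a conceptual sketch under assumed names (`get_logits`, `weave`, `cutoff`), not LLooM's actual algorithm or API:

```python
from math import exp

def softmax(logits: dict[str, float]) -> dict[str, float]:
    """Convert raw logits into a probability distribution over tokens."""
    m = max(logits.values())
    exps = {tok: exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

def weave(get_logits, prefix: str, depth: int = 2, cutoff: float = 0.2):
    """Expand every continuation whose cumulative probability stays above
    `cutoff`, producing a tree ('loom') of likely threads rather than a
    single greedy completion."""
    threads = [(prefix, 1.0)]
    for _ in range(depth):
        next_threads = []
        for text, p in threads:
            for tok, q in softmax(get_logits(text)).items():
                if p * q >= cutoff:
                    next_threads.append((text + tok, p * q))
        threads = next_threads or threads  # stop expanding if all branches fall below cutoff
    return threads
```

The cumulative-probability cutoff is what prunes unlikely branches early, so the number of surviving threads stays manageable even as depth grows.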