AGNTCY is building the Internet of Agents to be accessible to all, focusing on the innovation, development, and maintenance of software components and services for agentic workflows and multi-agent applications.
**Discover:**
**1. Agent directory**
- Registry for agent publishing and discovery
- Tracks reputation and quality
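The directory's publish/discover cycle with a quality signal can be sketched in a few lines. This is a toy in-memory model, not the real Agent Directory API; every name and the reputation scheme here are hypothetical.

```python
# Toy in-memory directory illustrating publish/discover plus a reputation
# score. All names and the scoring scheme are hypothetical -- the real
# Agent Directory is specified by AGNTCY, not by this sketch.
registry: dict[str, dict] = {}

def publish(name: str, skills: list[str], reputation: float = 0.0) -> None:
    """Register an agent under a name with its advertised skills."""
    registry[name] = {"skills": skills, "reputation": reputation}

def discover(skill: str, min_reputation: float = 0.5) -> list[str]:
    """Return names of agents that advertise the skill and clear the bar."""
    return [name for name, rec in registry.items()
            if skill in rec["skills"] and rec["reputation"] >= min_reputation]

publish("summarizer-a", ["summarization"], reputation=0.9)
publish("summarizer-b", ["summarization"], reputation=0.2)
print(discover("summarization"))  # → ['summarizer-a']
```

Filtering on reputation at discovery time is one way a registry could make its quality tracking actionable for callers.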
**2. Open agent schema framework (OASF)**
- Standard metadata format for agent capabilities
- Verification for agent providers
- Specification at github.com/agntcy/oasf
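The spec at github.com/agntcy/oasf defines the actual schema; purely as an illustration of the idea of standard capability metadata, an agent record might look like the following. All field names here are hypothetical, not taken from the spec.

```python
# Hypothetical agent metadata record, loosely inspired by the idea of a
# standard capability schema. Field names are illustrative only -- see
# github.com/agntcy/oasf for the real specification.
agent_record = {
    "name": "example-summarizer",
    "version": "0.1.0",
    "description": "Summarizes long documents.",
    "skills": ["summarization"],
    "locators": [{"type": "http", "url": "https://agents.example.com/summarizer"}],
}

def has_skill(record: dict, skill: str) -> bool:
    """Check whether a record advertises a given skill."""
    return skill in record.get("skills", [])

print(has_skill(agent_record, "summarization"))  # → True
```

A shared metadata format like this is what lets a directory match capability queries against records published by different providers.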
**Compose:**
**1. Agent connect protocol (ACP) and SDK**
- Standardized agent communication across frameworks
- Manages message passing, state, and context
- Specification at github.com/agntcy/acp-spec
**What could these look like in action?**
A developer can find suitable agents in the directory (described with OASF metadata) and connect them via the agent connect protocol, regardless of the frameworks they were built with.
AGNTCY is an open-source collective building infrastructure for AI agents to collaborate, led by Cisco, LangChain, Galileo, and other contributors. The initiative aims to create an open, interoperable foundation for agentic AI systems to work together seamlessly across different frameworks and vendors.
AGNTCY plans to develop key components such as an agent directory, an open agent schema framework, and an agent connect protocol to facilitate this interoperability.
A consortium of Cisco, Galileo, and LangChain proposes an open, scalable way to connect and coordinate AI agents across different frameworks, vendors, and infrastructure, in order to keep pace with the rapid evolution of AI agents.
LangChain was once a promising framework for building AI applications powered by Large Language Models (LLMs). However, some developers are now abandoning it, citing issues such as unnecessary complexity, unstable updates, and inconsistent documentation. The article explores the reasons behind this trend and offers insights into alternative solutions.
Learn how to use Okta FGA to secure your LangChain RAG agent in Python.
An exploration of Retrieval-Augmented Generation (RAG) using LangChain and LlamaIndex, explaining how these tools can enhance Large Language Models (LLMs) by combining retrieval and generation techniques.
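The retrieve-then-generate pattern behind RAG fits in a few lines. LangChain and LlamaIndex wrap this flow in higher-level abstractions; this toy sketch uses keyword overlap instead of embeddings, and the documents and function names are invented for illustration.

```python
# Minimal sketch of the retrieve-then-generate pattern behind RAG.
# Toy version: keyword overlap stands in for embedding similarity.
docs = [
    "LangChain provides chains and agents for LLM applications.",
    "LlamaIndex focuses on indexing and retrieving your own data.",
    "RAG feeds retrieved passages to an LLM as grounding context.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by the number of lowercase words shared with the query."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Assemble the augmented prompt an LLM call would receive."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What does LlamaIndex do?"))
```

The generation step itself (an LLM call on the assembled prompt) is deliberately left out so the sketch stays self-contained.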
Turn your Pandas data frame into a knowledge graph using LLMs. Learn how to build your own LLM graph-builder, implement LLMGraphTransformer by LangChain, and perform QA on your knowledge graph.
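The article builds its graph with LangChain's LLMGraphTransformer; as a minimal LLM-free sketch of the underlying row-to-triples idea, with plain dicts standing in for DataFrame rows and all data invented for illustration:

```python
# Minimal LLM-free sketch of turning tabular rows into a knowledge graph.
# Each dict stands in for a DataFrame row; one column is treated as the
# subject entity, every other column becomes a (subject, predicate, object)
# triple. Data and names are illustrative.
rows = [
    {"person": "Ada", "employer": "Analytical Engines", "city": "London"},
    {"person": "Grace", "employer": "Navy", "city": "Arlington"},
]

def rows_to_triples(rows: list[dict], subject_key: str) -> list[tuple]:
    """Flatten rows into subject-predicate-object triples."""
    triples = []
    for row in rows:
        subject = row[subject_key]
        for predicate, obj in row.items():
            if predicate != subject_key:
                triples.append((subject, predicate, obj))
    return triples

graph = rows_to_triples(rows, "person")
# Simple QA over the graph: who works where?
employers = {s: o for s, p, o in graph if p == "employer"}
print(employers["Ada"])  # → Analytical Engines
```

An LLM-based transformer does the same flattening but infers entities and relations from free text instead of reading them off column names.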
This article explores how to implement a retriever over a knowledge graph containing structured information to power RAG (Retrieval-Augmented Generation) applications.
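The core of such a retriever is a neighborhood query: collect every fact touching the entities mentioned in a question and hand them to the generator as context. A hedged sketch with invented triples and names:

```python
# Hedged sketch of a graph retriever for RAG: fetch all triples touching
# an entity and render them as short textual facts. Data is illustrative.
triples = [
    ("Marie Curie", "field", "physics"),
    ("Marie Curie", "award", "Nobel Prize"),
    ("Pierre Curie", "spouse", "Marie Curie"),
]

def graph_retrieve(entity: str) -> list[str]:
    """Return every triple in which the entity is subject or object."""
    return [f"{s} {p} {o}" for s, p, o in triples if entity in (s, o)]

context = graph_retrieve("Marie Curie")
# A RAG pipeline would now prepend `context` to the LLM prompt.
print(context)
```

Unlike flat document retrieval, the graph surfaces facts where the entity appears as the object (here, the spouse relation), which keyword search over documents can easily miss.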
IncarnaMind enables chatting with personal documents (PDF, TXT) using Large Language Models (LLMs) like GPT. It uses a Sliding Window Chunking mechanism and Ensemble Retriever for efficient querying.
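IncarnaMind's actual implementation differs, but the general sliding-window chunking idea can be sketched as follows; the window and stride values are arbitrary choices for illustration.

```python
def sliding_window_chunks(words: list[str], window: int = 5,
                          stride: int = 3) -> list[list[str]]:
    """Split a token list into overlapping chunks.

    Consecutive chunks share (window - stride) tokens, preserving context
    that hard chunk boundaries would cut. Sizes here are illustrative.
    """
    chunks = []
    for start in range(0, len(words), stride):
        chunk = words[start:start + window]
        if chunk:
            chunks.append(chunk)
        if start + window >= len(words):
            break
    return chunks

text = "retrieval augmented generation combines search with language models".split()
for chunk in sliding_window_chunks(text):
    print(" ".join(chunk))
```

The overlap is what makes boundary-spanning sentences retrievable from at least one chunk, at the cost of some index redundancy.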
This page lists various tools that can be integrated with LangChain, categorized by their functionalities. These tools range from search engines and code interpreters to API connectors and data manipulation tools.