Interact with the opencode server over HTTP. The `opencode serve` command runs a headless HTTP server that exposes an OpenAPI endpoint which any opencode client can consume.
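As a quick illustration, the sketch below queries a locally running `opencode serve` instance and fetches its OpenAPI document; the bind address and the `/doc` path are assumptions, so use whatever URL the server prints when it starts.

```python
# Minimal sketch: fetch the OpenAPI document from a running `opencode serve` instance.
# The host/port and the /doc path are assumptions; check the URL printed on startup.
import requests

BASE_URL = "http://127.0.0.1:4096"  # assumed default bind address

spec = requests.get(f"{BASE_URL}/doc", timeout=5).json()
print(spec.get("info", {}).get("title"), spec.get("info", {}).get("version"))

# The spec enumerates every route a client can call.
for path in spec.get("paths", {}):
    print(path)
```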
The Universal Tool Calling Protocol (UTCP) is an open standard that describes how to call existing tools directly, eliminating the need for wrappers. It focuses on direct communication with tool endpoints (HTTP, gRPC, WebSocket, CLI, etc.) to reduce latency and maintain existing security and billing systems.
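To illustrate the idea (not the published UTCP schema), the sketch below has an agent read a tool description that points at an existing HTTP endpoint and then call that endpoint directly, so the provider's own auth and billing still apply; the field names and URL are hypothetical.

```python
# Illustrative sketch of the UTCP idea, not the official spec: the agent reads a tool
# description pointing at an existing HTTP endpoint and calls it directly, with no
# wrapper server in between. Field names and the URL are hypothetical.
import requests

tool = {
    "name": "get_weather",
    "transport": "http",
    "method": "GET",
    "url": "https://api.example.com/weather",
}

def call_tool(tool: dict, **params) -> dict:
    # Direct call to the tool's existing endpoint; existing security/billing stay in place.
    resp = requests.request(tool["method"], tool["url"], params=params, timeout=10)
    resp.raise_for_status()
    return resp.json()

print(call_tool(tool, city="Berlin"))
```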
This article lists and ranks the top Model Context Protocol (MCP) servers on GitHub as of June 2025, highlighting their capabilities and emphasizing the importance of security when granting agents access to sensitive data. It positions Pomerium as a solution for enforcing policy and securing agentic access to MCP servers.
|**GitHub Repository** |**Description** |
|---------------------------------|-----------------------------------------------------------------------------|
| github/github-mcp-server | Manages GitHub issues, pull requests, and discussions with identity & permissions. |
| microsoft/playwright-mcp | Triggers browser automation tasks (QA, scraping, testing). |
| awslabs/mcp | Exposes AWS documentation, billing data, and service metadata. |
| hashicorp/terraform-mcp-server | Secure access to Terraform providers and modules. |
| dbt-labs/dbt-mcp | Exposes dbt’s semantic layer and CLI commands. |
| getsentry/sentry-mcp | Access to Sentry error tracking and performance telemetry. |
| mongodb-js/mongodb-mcp-server | Interacts with MongoDB and Atlas instances securely. |
| StarRocks/mcp-server-starrocks | Brings MCP to the StarRocks SQL engine. |
| vantage-sh/vantage-mcp-server | Focuses on cloud cost visibility. |
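To make the table concrete, here is a hedged sketch of connecting to the first entry, github/github-mcp-server, with the official MCP Python SDK over stdio and listing its tools; the binary name, the `stdio` argument, and the token variable are assumptions about how that server is launched (running it via Docker is also common).

```python
# Sketch: list the tools exposed by github/github-mcp-server using the MCP Python SDK.
# Assumes the server binary is on PATH and speaks stdio; the GitHub token is passed via env.
import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="github-mcp-server",  # assumed binary name; Docker is another common option
    args=["stdio"],               # assumed subcommand for the stdio transport
    env={"GITHUB_PERSONAL_ACCESS_TOKEN": os.environ["GITHUB_PERSONAL_ACCESS_TOKEN"]},
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(main())
```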
This tutorial details how to use FastAPI-MCP to convert a FastAPI endpoint (fetching US National Park alerts) into an MCP-compatible server. It covers environment setup, app creation, testing, and wiring the resulting MCP server into the Cursor IDE.
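A minimal sketch of the pattern the tutorial describes follows: a FastAPI route for National Park alerts, exposed as an MCP server via fastapi-mcp. The NPS alerts URL, query parameters, and the `FastApiMCP`/`mount()` usage reflect my reading of that API, so verify them against the library's documentation and the NPS API docs.

```python
# Sketch: a FastAPI endpoint for US National Park alerts, exposed as MCP tools via fastapi-mcp.
# The NPS URL/params and the FastApiMCP/mount() names are assumptions to verify.
import os

import httpx
from fastapi import FastAPI
from fastapi_mcp import FastApiMCP

app = FastAPI()
NPS_ALERTS_URL = "https://developer.nps.gov/api/v1/alerts"  # assumed NPS API endpoint

@app.get("/alerts", operation_id="get_park_alerts")
async def get_park_alerts(park_code: str) -> dict:
    """Return current alerts for a park (e.g. park_code='yose' for Yosemite)."""
    async with httpx.AsyncClient() as client:
        resp = await client.get(
            NPS_ALERTS_URL,
            params={"parkCode": park_code, "api_key": os.environ["NPS_API_KEY"]},
        )
        resp.raise_for_status()
        return resp.json()

# Mount an MCP server that mirrors the FastAPI routes as tools.
mcp = FastApiMCP(app)
mcp.mount()
```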
This article details a comparison between Model Context Protocol (MCP) and Function Calling, two methods for integrating Large Language Models (LLMs) with external systems. It covers their architectures, security models, scalability, and suitable use cases, highlighting the strengths and weaknesses of each approach.
MCP is best suited for robust, complex applications within secure enterprise environments, while Function Calling excels at straightforward, dynamic task execution. The choice depends on the project's specific needs around security, scalability, and available resources.
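For contrast with MCP's client/server architecture, a function-calling setup is just a JSON tool schema passed to the model on each request. The sketch below uses the OpenAI-style `tools` format; the weather function and its fields are illustrative assumptions.

```python
# Sketch of function calling (OpenAI-style tool schema); the get_weather tool is illustrative.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
)

# If the model chose to call the tool, the arguments arrive as JSON for the caller to execute.
print(resp.choices[0].message.tool_calls)
```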
This article explores the Model Context Protocol (MCP), an open protocol designed to standardize AI interaction with tools and data, addressing the fragmentation in AI agent ecosystems. It details current use cases, future possibilities, and challenges in adopting MCP.
Model Context Protocol (MCP) is a bridging technology between AI agents and APIs. By standardizing how agents reach APIs, it gives them a universal way to trigger external actions.
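To show what this standardization looks like in practice, here is a minimal MCP server built with the FastMCP helper from the official Python SDK; the order-lookup tool itself is a made-up placeholder for any external action an agent might trigger.

```python
# Minimal MCP server sketch using the official Python SDK's FastMCP helper.
# The order-lookup tool is a made-up placeholder for an external action an agent can trigger.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("orders-demo")

@mcp.tool()
def get_order_status(order_id: str) -> str:
    """Look up the status of an order by its ID (stubbed for the example)."""
    return f"Order {order_id}: shipped"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```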
Llama Stack v0.1.0 introduces a stable API release that lets developers build RAG applications and agents, integrate with various tools, and use telemetry for monitoring and evaluation. The release provides a comprehensive API surface, a rich provider ecosystem, and multiple developer interfaces, along with sample applications for Python, iOS, and Android.
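A hedged sketch of talking to a running Llama Stack distribution from Python with the `llama-stack-client` package follows; the base URL, model identifier, and the exact method and parameter names are assumptions, so check the client docs for your release.

```python
# Sketch only: the base URL, model id, and method/parameter names are assumptions
# about llama-stack-client and depend on the distribution you are running.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")  # assumed local distribution URL

response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-3B-Instruct",  # assumed model identifier
    messages=[{"role": "user", "content": "Summarize what Llama Stack provides."}],
)
print(response.completion_message.content)
```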