A step-by-step guide to building a custom code-to-diagram MCP server, explaining the fundamentals of the Model Context Protocol and its components with a practical example.
This article lists and ranks the top Model Context Protocol (MCP) servers on GitHub as of June 2025, highlighting their capabilities and emphasizing the importance of security when granting agents access to sensitive data. It positions Pomerium as a solution for enforcing policy and securing agentic access to MCP servers.
|**GitHub Repository** |**Description** |
|---------------------------------|-----------------------------------------------------------------------------|
| github/github-mcp-server | Manages GitHub issues, pull requests, and discussions with identity & permissions. |
| microsoft/playwright-mcp | Triggers browser automation tasks (QA, scraping, testing). |
| awslabs/mcp | Exposes AWS documentation, billing data, and service metadata. |
| hashicorp/terraform-mcp-server | Secure access to Terraform providers and modules. |
| dbt-labs/dbt-mcp | Exposes dbt’s semantic layer and CLI commands. |
| getsentry/sentry-mcp | Access to Sentry error tracking and performance telemetry. |
| mongodb-js/mongodb-mcp-server | Interacts with MongoDB and Atlas instances securely. |
| StarRocks/mcp-server-starrocks | Brings MCP to the StarRocks SQL engine. |
| vantage-sh/vantage-mcp-server | Focuses on cloud cost visibility. |
This page details the DeepSeek-R1-0528-Qwen3-8B model, which distills the chain-of-thought reasoning of DeepSeek-R1-0528 into the Qwen3-8B base model, highlighting its improved reasoning capabilities, evaluation results, usage guidelines, and licensing information. It offers various quantization options (GGUF) for local execution.
This guide highlights 10 open-source MCP (Model Context Protocol) servers to boost productivity in Cursor, covering tools for API work, web scraping, design integration, and document conversion.
| **Server Name** | **Primary Function** | **Key Features** |
|---|---|---|
| Apidog MCP Server | API Development | Syncs with API docs, natural language queries, local caching. |
| Browserbase MCP Server | Web Interaction & Automation | Cloud browser sessions, screenshots, JavaScript execution. |
| Magic MCP Server | Generative AI | Placeholder images, text transformation, code generation. |
| Opik MCP Server | Real-time Web Search | Web search integration, content summarization, source citations. |
| Figma Context MCP Server | Design-to-Code | Access Figma data, convert designs to code, analyze UI elements. |
| Pandoc MCP Server | Document Conversion | Converts between Markdown, PDF, HTML, DOCX, etc. |
| Excel MCP Server | Excel Data Access | Read/write Excel data, generate visualizations, automate reporting. |
| Mindmap MCP Server | Mindmap Integration | Import/interpret mindmaps, convert to outlines, collaborative planning. |
| Markdownify MCP Server | Content to Markdown | Converts HTML to Markdown, cleans documentation. |
| Tavily MCP Server | Curated Knowledge | High-quality knowledge retrieval, AI-friendly summaries, multi-source aggregation. |
LLM 0.26 introduces tool support, allowing LLMs to access and utilize Python functions as tools. The article details how to install, configure, and use these tools with various LLMs like OpenAI, Anthropic, Gemini, and Ollama models, including examples with plugins and ad-hoc functions. It also discusses the implications for building 'agents' and future development plans.
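The core idea behind tool support, a Python function's signature doubling as a tool definition, can be sketched without the library itself. The helper below is a hypothetical illustration of the pattern, not LLM's actual internals: it derives a JSON-schema-style tool description from a plain function using only the standard library.

```python
import inspect

# Map Python annotations to JSON-schema type names.
TYPE_MAP = {int: "integer", float: "number", str: "string", bool: "boolean"}

def function_to_tool(fn):
    """Build a tool description (name, description, parameters) from a function."""
    sig = inspect.signature(fn)
    properties = {
        name: {"type": TYPE_MAP.get(param.annotation, "string")}
        for name, param in sig.parameters.items()
    }
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": list(properties),
        },
    }

def multiply(x: int, y: int) -> int:
    """Multiply two integers."""
    return x * y

tool = function_to_tool(multiply)
print(tool["name"], tool["parameters"]["properties"])
```

A description like this is what gets passed to the model so it can decide when, and with which arguments, to call the function.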
Google today announced that the SDK for its Gemini models will natively support the Model Context Protocol from Anthropic. This move aims to simplify the connection between AI agents and data sources, aligning with the growing popularity of MCP and complementing Google's own Agent2Agent protocol. The company also plans to ease deployment of MCP servers and hosted tools for AI agents.
This course provides an introduction to the Model Context Protocol (MCP), covering its theory, design, and practical application. It includes foundational units, hands-on exercises, use case assignments, and collaboration opportunities. The course aims to equip students with the knowledge and skills to build AI applications leveraging external data and tools using MCP standards.
Edge Delta announces its new MCP Server, built on the Model Context Protocol, an open standard for streamlining communication between AI models and external data sources. The server enables intelligent telemetry data analysis, adaptive pipelines, and cross-tool orchestration directly within your IDE.
Edge Delta’s MCP Server acts as a bridge between developer tools and the Edge Delta platform, enabling generative AI to be integrated into observability workflows. Key benefits include:
* **Instant Root Cause Analysis:** Quickly pinpoint the source of errors by correlating logs, metrics, and suggested probable causes.
* **Adaptive Pipelines:** AI-driven suggestions for optimizing telemetry pipeline configurations.
* **Effortless Orchestration:** Seamless integration of Edge Delta anomalies with other tools like Slack and AWS KB.
The server is written in Go and requires only minimal authentication (an Org ID and API token). It can be integrated into IDEs with a simple configuration. Despite current limitations such as context window size and latency, the author sees this technology as a significant step forward, comparable to the impact of early algorithmic breakthroughs.
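An IDE integration like the one described usually amounts to a short MCP server entry. The sketch below is entirely hypothetical, the command name, env variable names, and placeholder values are assumptions, not Edge Delta's documented configuration:

```json
{
  "mcpServers": {
    "edgedelta": {
      "command": "edgedelta-mcp-server",
      "env": {
        "ED_ORG_ID": "<your-org-id>",
        "ED_API_TOKEN": "<your-api-token>"
      }
    }
  }
}
```

Consult Edge Delta's own documentation for the actual binary name and credential keys.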
This tutorial details how to implement persistent memory in Claude Desktop using a local knowledge graph. It covers installation of dependencies (Node.js and Claude Desktop), configuration of `mcp.json` and Claude settings, and how to leverage the Knowledge Graph Memory Server for personalized and consistent responses.
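The configuration step typically boils down to registering the memory server with Claude Desktop. A minimal sketch, assuming the official `@modelcontextprotocol/server-memory` package (the config file's name and location depend on your Claude Desktop version):

```json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"]
    }
  }
}
```

With this entry in place, Claude can call the server's knowledge-graph tools to store and recall facts across conversations.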
This article details the creation of a simple, 50-line agent using Model Context Protocol (MCP) and Hugging Face's tools, demonstrating how easily agents can be built with modern LLMs that support function/tool calling.
1. **MCP Overview**: MCP is a standard protocol for exposing tools that can be integrated with Large Language Models (LLMs).
2. **Implementation**: The author explains how to implement an MCP client using TypeScript and the Hugging Face Inference Client. This client connects to MCP servers, retrieves tools, and integrates them into LLM inference.
3. **Tools**: Tools are defined with a name, description, and parameters, and are passed to the LLM for function calling.
4. **Agent Design**: An agent is essentially a while loop that alternates between tool calling and feeding tool results back into the LLM until a specific condition is met, such as two consecutive non-tool messages.
5. **Code Example**: The article provides a concise 50-line TypeScript implementation of an agent, demonstrating the simplicity and power of MCP.
6. **Future Directions**: The author suggests experimenting with different models and inference providers, as well as integrating local LLMs using frameworks like llama.cpp or LM Studio.
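The while-loop design from step 4 can be sketched in a few lines (in Python here for brevity; the article's own implementation is TypeScript). `call_llm` and `run_tool` are hypothetical stand-ins for the LLM inference call and MCP tool execution:

```python
def run_agent(call_llm, run_tool, prompt, max_turns=10):
    """Alternate between LLM calls and tool execution until the model
    produces two consecutive messages with no tool calls."""
    messages = [{"role": "user", "content": prompt}]
    non_tool_streak = 0
    for _ in range(max_turns):
        reply = call_llm(messages)          # one LLM inference step
        messages.append(reply)
        if reply.get("tool_calls"):
            non_tool_streak = 0
            for call in reply["tool_calls"]:
                result = run_tool(call["name"], call["arguments"])
                # Feed the tool result back for the next inference step.
                messages.append({"role": "tool", "content": str(result)})
        else:
            non_tool_streak += 1
            if non_tool_streak >= 2:        # the article's stop condition
                break
    return messages

# Toy stand-ins: the "LLM" requests one tool call, then answers twice.
script = iter([
    {"role": "assistant", "tool_calls": [{"name": "add", "arguments": (2, 3)}]},
    {"role": "assistant", "content": "The sum is 5."},
    {"role": "assistant", "content": "Done."},
])
history = run_agent(lambda msgs: next(script),
                    lambda name, args: sum(args), "What is 2 + 3?")
print(len(history))
```

Swapping the scripted replies for a real inference client and the lambda for MCP tool invocations yields essentially the agent the article describes.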