This article discusses Model Context Protocol (MCP), an open standard designed to connect AI agents with tools and data. It details the key components of MCP, its benefits (improved interoperability, future-proofing, and modularity), and its adoption in open-source agent frameworks like LangChain, CrewAI, and AutoGen. It also includes case studies of MCP implementation at Block and in developer tools.
This document details the features, best practices, and migration guidance for GPT-5, OpenAI's most intelligent model. It covers new API features like minimal reasoning effort, verbosity control, custom tools, and allowed tools, along with prompting guidance and migration strategies from older models and APIs.
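As a rough sketch of how two of those features surface in the Responses API (the parameter names follow OpenAI's GPT-5 documentation; the prompt and exact values are illustrative):

```python
from openai import OpenAI

client = OpenAI()

# Minimal reasoning effort plus low verbosity trades deliberation
# for speed and keeps the answer short.
response = client.responses.create(
    model="gpt-5",
    input="Summarize these release notes in two bullet points.",
    reasoning={"effort": "minimal"},  # new "minimal" effort level
    text={"verbosity": "low"},        # new verbosity control
)
print(response.output_text)
```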
This blog post explains that Large Language Models (LLMs) don't need any awareness of the Model Context Protocol (MCP) to use tools. MCP standardizes how tools are discovered and invoked, which simplifies agent development; the LLM itself only generates tool-call suggestions from the definitions it is given. The article walks through tool calling, MCP's role, and how both relate to context engineering.
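A minimal sketch of that division of labor with the OpenAI Chat Completions API (the model and tool names are illustrative): the developer supplies a JSON tool definition, and the model only returns a suggested call, which the calling code may execute or ignore.

```python
from openai import OpenAI

client = OpenAI()

# The tool definition is just data handed to the model.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
    tools=tools,
)

# The model executes nothing; it only suggests a call for our code to run.
print(response.choices[0].message.tool_calls)
```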
A detailed blog post discussing OpenAI's newly released open-weight GPT models, including performance benchmarks, initial testing on various hardware (Mac laptops, Cerebras), and comparisons to other open-source models. It covers aspects like reasoning capabilities, tool calling, and the new OpenAI Harmony prompt format.
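For context, Harmony renders conversations with explicit role and channel tokens; a loose sketch of the shape (consult the published spec for the exact token set):

```
<|start|>system<|message|>You are a helpful assistant.<|end|>
<|start|>user<|message|>What is 2 + 2?<|end|>
<|start|>assistant<|channel|>analysis<|message|>Trivial arithmetic.<|end|>
<|start|>assistant<|channel|>final<|message|>4<|return|>
```

The analysis channel carries the model's reasoning; the final channel carries the user-facing answer.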
The official Python SDK for Model Context Protocol servers and clients. It supports building both MCP clients and MCP servers, and provides a standardized way to expose data and functionality to LLMs.
An MCP server that gives language models temporal awareness and time calculation abilities, teaching AI the significance of the passage of time through collaborative tool development.
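To make both MCP items above concrete, here is a minimal sketch of a server built with the SDK's FastMCP helper, exposing a current-time tool in the spirit of the project above (the tool is illustrative, not that project's actual code):

```python
from datetime import datetime, timezone

from mcp.server.fastmcp import FastMCP

# FastMCP ships with the official Python SDK (pip install "mcp[cli]").
mcp = FastMCP("time-demo")

@mcp.tool()
def current_time() -> str:
    """Return the current UTC time as an ISO 8601 string."""
    return datetime.now(timezone.utc).isoformat()

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```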
The Universal Tool Calling Protocol (UTCP) is an open standard that describes how to call existing tools directly, eliminating the need for wrappers. It focuses on direct communication with tool endpoints (HTTP, gRPC, WebSocket, CLI, etc.) to reduce latency and maintain existing security and billing systems.
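Conceptually, the agent reads a published manual describing an existing endpoint and then calls that endpoint directly, rather than routing through a wrapper server. A loose illustration of the idea (the dict below simplifies the real UTCP manual schema, and the endpoint URL is made up):

```python
import requests

# Simplified stand-in for a UTCP manual entry -- not the exact schema.
tool = {
    "name": "get_weather",
    "provider": {
        "type": "http",
        "url": "https://api.example.com/weather",  # hypothetical endpoint
        "method": "GET",
    },
}

# Direct call to the existing endpoint: no intermediary process, so the
# API's existing auth, billing, and rate limiting apply unchanged.
resp = requests.request(
    tool["provider"]["method"],
    tool["provider"]["url"],
    params={"city": "Oslo"},
)
print(resp.json())
```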
LLM 0.26 introduces tool support, letting LLMs call Python functions as tools. The article details how to install, configure, and use these tools with models from OpenAI, Anthropic, Gemini, and Ollama, including examples with plugins and ad-hoc functions. It also discusses the implications for building 'agents' and plans for future development.
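A short sketch of the Python API side, following the 0.26 announcement (the model ID is illustrative): plain functions are passed via `tools=`, and `model.chain()` runs the prompt, tool call, result, and final answer loop.

```python
import llm

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

model = llm.get_model("gpt-4.1-mini")  # any tool-capable model works
response = model.chain("What is 1337 * 42? Use the tool.", tools=[multiply])
print(response.text())
```

On the command line, the announcement shows equivalents such as `-T tool_name` for plugin-provided tools and `--functions` for ad-hoc Python code.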
A summary of a workshop presented at PyCon US on building software with LLMs, covering setup, prompting, building tools (text-to-SQL, structured data extraction, semantic search/RAG), tool usage, and security considerations like prompt injection. It also discusses the current LLM landscape, including models from OpenAI, Gemini, Anthropic, and open-weight alternatives.
Guidance on choosing the best AI model for GitHub Copilot projects, considering speed, depth, cost, and task complexity. Models discussed include GPT-4.1, GPT-4o, Claude 3.5 Sonnet, o4-mini, o3, Gemini 2.0 Flash, and GPT-4.5.