This article explores strategies for curating and managing the context supplied to AI agents, covering the shift from prompt engineering to context engineering and techniques for optimizing context-window usage in LLMs.
This article discusses 'tool masking' as a way to optimize the interaction between LLMs and APIs, arguing that exposing an API's full surface to the model (as MCP servers typically do) is inefficient and degrades performance. It proposes shaping the tool surface to fit the specific use case, improving accuracy, cost, and latency.
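The idea can be sketched in a few lines. In this hedged example, a hypothetical issue-tracker API with many endpoints is "masked" down to a single task-shaped tool with a tight schema; all endpoint and tool names here are illustrative assumptions, not from the article.

```python
from dataclasses import dataclass

# Hypothetical raw API surface: many endpoints an MCP-style server might
# expose to the model one-to-one.
RAW_API_ENDPOINTS = [
    "issues.create", "issues.update", "issues.delete", "issues.assign",
    "issues.label", "issues.comment", "issues.search", "users.list",
]

@dataclass
class Tool:
    name: str
    description: str
    parameters: dict

def masked_tool_surface() -> list[Tool]:
    """Expose only the one action this use case needs, with a narrow schema."""
    return [
        Tool(
            name="file_bug",
            description="File a bug report with a title and severity.",
            parameters={
                "title": {"type": "string"},
                "severity": {"type": "string", "enum": ["low", "high"]},
            },
        )
    ]

tools = masked_tool_surface()
```

The model now sees one well-described tool instead of eight generic endpoints, which is the shaping the article advocates.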
This article details prompting techniques for GPT-4.1, emphasizing structured prompts, precise delimiting, agent creation, long-context handling, and chain-of-thought prompting to achieve better results.
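As a minimal sketch of the structured-prompt and precise-delimiting techniques mentioned above: the section names and delimiter tags below are illustrative assumptions, not taken from the guide.

```python
def build_prompt(instructions: str, context: str, question: str) -> str:
    """Compose a prompt whose sections are unambiguously delimited,
    so the model can tell instructions, context, and query apart."""
    return (
        "# Instructions\n"
        f"{instructions}\n\n"
        "# Context\n"
        f"<context>\n{context}\n</context>\n\n"
        "# Question\n"
        f"{question}"
    )

prompt = build_prompt(
    instructions="Answer using only the provided context.",
    context="The cache TTL is 300 seconds.",
    question="What is the cache TTL?",
)
```

Markdown headings and XML-style tags are two common delimiter choices; the point is that each section's boundaries are explicit.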
This article introduces a practical framework for engineering AI agents, focusing on the core principles of building agents on top of large language models (LLMs).