klotz: context*


  1. mcp-cli is a lightweight CLI that enables dynamic discovery of MCP servers, reducing token consumption and making tool interactions more efficient for AI coding agents.
    2026-01-09 by klotz
  2. Python implementation of Recursive Language Models for processing unbounded context lengths. Process 100k+ tokens with any LLM by storing context as variables instead of prompts.
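    A minimal sketch of the idea behind this bookmark, assuming nothing about the repository's actual API: the long context lives in an ordinary Python variable, and each model call sees only a small slice of it, so no single prompt ever carries the full 100k+ tokens. The `stub_llm` function is a hypothetical stand-in for a real LLM client.

    ```python
    def answer_over_context(context: str, question: str, llm, chunk_size: int = 1000) -> str:
        """Recursively map the question over chunks of a stored variable,
        then reduce the partial answers (sketch, not the repo's real code)."""
        if len(context) <= chunk_size:
            return llm(f"Context: {context}\nQuestion: {question}")
        # Split the stored variable, never the prompt itself.
        chunks = [context[i:i + chunk_size] for i in range(0, len(context), chunk_size)]
        partials = [answer_over_context(c, question, llm, chunk_size) for c in chunks]
        # Recurse over the concatenated partial answers until they fit one call.
        return answer_over_context("\n".join(partials), question, llm, chunk_size)

    # Hypothetical stand-in for a real LLM: echoes the first 50 chars of the context.
    def stub_llm(prompt: str) -> str:
        return prompt.split("Context: ", 1)[1].splitlines()[0][:50]
    ```

    With a real model client in place of `stub_llm`, the recursion depth grows logarithmically with context length, which is what makes unbounded inputs tractable.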
  3. This blog post explains that Large Language Models (LLMs) don't need to understand the Model Context Protocol (MCP) to utilize tools. MCP standardizes tool calling, simplifying agent development for developers while the LLM simply generates tool call suggestions based on provided definitions. The article details tool calling, MCP's function, and how it relates to context engineering.
    2025-08-07 by klotz
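    The division of labor the post describes can be sketched as follows (all names here are hypothetical, not from the article or the MCP spec): the LLM only sees tool definitions and emits a tool-call suggestion as JSON, while the agent host, which actually speaks MCP to servers, parses and dispatches the call.

    ```python
    import json

    # Tool definitions shown to the model; the model never touches MCP itself.
    TOOL_DEFS = [
        {
            "name": "read_file",
            "description": "Read a UTF-8 text file from disk.",
            "parameters": {"path": {"type": "string"}},
        }
    ]

    def dispatch(model_output: str, tools: dict) -> str:
        """Parse the model's suggested tool call and execute it on the host side."""
        call = json.loads(model_output)
        fn = tools[call["name"]]  # KeyError here means the model named an unknown tool
        return fn(**call["arguments"])

    # Toy registry standing in for MCP-backed tools on the host.
    registry = {"read_file": lambda path: f"<contents of {path}>"}
    result = dispatch('{"name": "read_file", "arguments": {"path": "README.md"}}', registry)
    ```

    The point of the post survives the simplification: the model's only job is generating the JSON suggestion; standardizing the host side is what MCP adds.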
  4. This article discusses the importance of knowledge graphs in providing context for AI agents, highlighting their advantages over traditional retrieval systems in terms of precision, reasoning, and explainability.
  5. >"This document provides a comprehensive overview of the engineering repository, which implements a systematic approach to context engineering for Large Language Models (LLMs). The repository bridges theoretical foundations with practical implementations, using a biological metaphor to organize concepts from simple prompts to complex neural field systems."
    2025-07-01 by klotz
  6. LLM 0.24 introduces fragments and template plugins to better utilize long context models, improving storage efficiency and enabling new features like querying logs by fragment and leveraging documentation. It also details improvements to template handling and model support.
    2025-04-08 by klotz
  7. Qwen2.5-1M: models and inference-framework support for long-context tasks, with context lengths of up to 1M tokens.
    2025-01-27 by klotz
  8. This PR implements the StreamingLLM technique for model loaders, focusing on handling context length and optimizing chat generation speed.
    2024-11-26 by klotz
  9. "Contextual Retrieval tackles a fundamental issue in RAG: the loss of context when documents are split into smaller chunks for processing. By adding relevant contextual information to each chunk before it's embedded or indexed, the method preserves critical details that might otherwise be lost. In practical terms, this involves using Anthropic’s Claude model to generate chunk-specific context. For instance, a simple chunk stating, “The company’s revenue grew by 3% over the previous quarter,” becomes contextualized to include additional information such as the specific company and the relevant time period. This enhanced context ensures that retrieval systems can more accurately identify and utilize the correct information."
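    The preprocessing step the quote describes can be sketched in a few lines. This is an illustrative assumption, not Anthropic's implementation: `fake_context` stands in for the real Claude call that situates a chunk within its source document, and the generated context is prepended before the chunk is embedded or indexed.

    ```python
    def contextualize_chunks(document: str, chunks: list[str], generate_context) -> list[str]:
        """Prepend document-level context to each chunk before embedding/indexing."""
        return [f"{generate_context(document, chunk)}\n{chunk}" for chunk in chunks]

    # Hypothetical stand-in for an LLM prompted to situate the chunk in the document.
    def fake_context(document: str, chunk: str) -> str:
        return "From ACME Corp's Q2 2023 earnings filing."

    chunk = "The company's revenue grew by 3% over the previous quarter."
    enriched = contextualize_chunks("<full filing text>", [chunk], fake_context)
    # enriched[0] now carries the company and time period that bare chunking lost.
    ```

    The enriched chunk, not the bare one, is what gets embedded, so retrieval queries mentioning the company or quarter can still match it.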
  10. This article explains how to provide context to GitHub Copilot Chat for better code suggestions and assistance. It covers techniques like highlighting code, using slash commands, leveraging workspace information, and specifying relevant files.


SemanticScuttle - klotz.me: Tags: context
