klotz: context*


  1. This blog post explains that Large Language Models (LLMs) don't need to understand the Model Context Protocol (MCP) to use tools. MCP standardizes tool calling, simplifying agent development; the LLM simply generates tool-call suggestions based on the tool definitions it is given. The article covers tool calling, what MCP does, and how both relate to context engineering.
    2025-08-07 by klotz
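The division of labor the post describes can be sketched without any MCP machinery: the model only sees tool definitions and emits a structured call suggestion, and the host executes it. A minimal sketch with a stubbed model; the `fake_model` function, `get_weather` tool, and schema shape are illustrative assumptions, not MCP or any vendor's API:

```python
import json

# Tool definition the host advertises to the model (JSON-schema style).
TOOLS = [{
    "name": "get_weather",
    "description": "Return current weather for a city",
    "parameters": {"type": "object",
                   "properties": {"city": {"type": "string"}},
                   "required": ["city"]},
}]

def fake_model(prompt: str, tools: list) -> dict:
    """Stand-in for an LLM: it only *suggests* a tool call as structured data."""
    return {"tool_call": {"name": "get_weather",
                          "arguments": json.dumps({"city": "Lisbon"})}}

def run_tool(call: dict) -> str:
    """Host-side dispatch: the model never executes anything itself."""
    args = json.loads(call["arguments"])
    if call["name"] == "get_weather":
        return f"Sunny in {args['city']}"
    raise ValueError(f"unknown tool {call['name']}")

response = fake_model("What's the weather in Lisbon?", TOOLS)
result = run_tool(response["tool_call"])
print(result)  # Sunny in Lisbon
```

MCP's job in this picture is only to standardize how the host discovers tools and routes the call; the model's side of the loop is unchanged.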
  2. This article discusses the importance of knowledge graphs in providing context for AI agents, highlighting their advantages over traditional retrieval systems in terms of precision, reasoning, and explainability.
  3. >"This document provides a comprehensive overview of the engineering repository, which implements a systematic approach to context engineering for Large Language Models (LLMs). The repository bridges theoretical foundations with practical implementations, using a biological metaphor to organize concepts from simple prompts to complex neural field systems."
    2025-07-01 by klotz
  4. LLM 0.24 introduces fragments and template plugins to better utilize long context models, improving storage efficiency and enabling new features like querying logs by fragment and leveraging documentation. It also details improvements to template handling and model support.
    2025-04-08 by klotz
  5. Qwen2.5-1M: models and inference-framework support for long-context tasks, with context lengths of up to 1M tokens.
    2025-01-27 by klotz
  6. This PR implements the StreamingLLM technique for model loaders, focusing on handling context length and optimizing chat generation speed.
    2024-11-26 by klotz
  7. "Contextual Retrieval tackles a fundamental issue in RAG: the loss of context when documents are split into smaller chunks for processing. By adding relevant contextual information to each chunk before it's embedded or indexed, the method preserves critical details that might otherwise be lost. In practical terms, this involves using Anthropic’s Claude model to generate chunk-specific context. For instance, a simple chunk stating, “The company’s revenue grew by 3% over the previous quarter,” becomes contextualized to include additional information such as the specific company and the relevant time period. This enhanced context ensures that retrieval systems can more accurately identify and utilize the correct information."
  8. This article explains how to provide context to GitHub Copilot Chat for better code suggestions and assistance. It covers techniques like highlighting code, using slash commands, leveraging workspace information, and specifying relevant files.
  9. Mem0: The Memory Layer for Personalized AI. Provides an intelligent, adaptive memory layer for Large Language Models (LLMs), enhancing personalized AI experiences.
    2024-07-29 by klotz
  10. Researchers from Huawei and University College London have developed a new approach called EM-LLM, which integrates aspects of human episodic memory and event cognition into large language models (LLMs). This allows LLMs to have infinite context lengths while maintaining their regular functioning.
    2024-07-16 by klotz


About - Propulsed by SemanticScuttle