klotz: context*


  1. This PR implements the StreamingLLM technique for model loaders, focusing on handling context length and optimizing chat generation speed.
    2024-11-26 by klotz
  2. "Contextual Retrieval tackles a fundamental issue in RAG: the loss of context when documents are split into smaller chunks for processing. By adding relevant contextual information to each chunk before it's embedded or indexed, the method preserves critical details that might otherwise be lost. In practical terms, this involves using Anthropic’s Claude model to generate chunk-specific context. For instance, a simple chunk stating, “The company’s revenue grew by 3% over the previous quarter,” becomes contextualized to include additional information such as the specific company and the relevant time period. This enhanced context ensures that retrieval systems can more accurately identify and utilize the correct information."
  3. This article explains how to provide context to GitHub Copilot Chat for better code suggestions and assistance. It covers techniques like highlighting code, using slash commands, leveraging workspace information, and specifying relevant files.
  4. Mem0: The Memory Layer for Personalized AI. Provides an intelligent, adaptive memory layer for Large Language Models (LLMs), enhancing personalized AI experiences.
    2024-07-29 by klotz
  5. Researchers from Huawei and University College London have developed a new approach called EM-LLM, which integrates aspects of human episodic memory and event cognition into large language models (LLMs). This allows LLMs to handle effectively infinite context lengths while maintaining normal performance.
    2024-07-16 by klotz
  6. Noema Research introduces Pinboard, a command-line developer tool that manages files and terminal references to streamline development workflows. Key features include flexible pinning, contextual updates, clipboard integration, an interactive shell, and undo functionality.
    2024-07-10 by klotz
  7. The article proposes a new framework, LongRAG, which aims to improve Retrieval-Augmented Generation (RAG) by pairing a long retriever with a long reader. LongRAG processes Wikipedia into larger 4K-token units, reducing the total number of units from 22M to 600K and thus the burden on the retriever. The top-k retrieved units (≈30K tokens) are then fed to a long-context language model for zero-shot answer extraction. LongRAG achieves an exact-match (EM) score of 62.7% on NQ and 64.3% on HotpotQA (full-wiki), on par with the state of the art. (A rough sketch of the long-unit packing step appears after the list.)
  8. Stay informed about the latest artificial intelligence (AI) terminology with this comprehensive glossary. From algorithm and AI ethics to generative AI and overfitting, learn the essential AI terms that will help you sound smart over drinks or impress in a job interview.
  9. This article explains the Long RoPE methodology used to extend context lengths in LLMs without significant performance degradation. It discusses the importance of context length in LLMs and the limitations of previous positional encoding methods. It then introduces Rotary Positional Embedding (RoPE), outlines its limitations, and explains how Long RoPE extends RoPE to larger contexts. (A minimal RoPE sketch appears after the list.)
  10. 2023-12-13 by klotz
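
As a rough illustration of the chunk-contextualization step described in item 2, the Python sketch below prepends model-generated context to each chunk before it is embedded or indexed. This is a minimal sketch under stated assumptions: the prompt wording is invented, and llm_complete and embed_and_index are hypothetical stand-ins for an LLM completion call (e.g. Claude) and an embedding/indexing routine, not Anthropic's actual implementation.

    # Sketch of the Contextual Retrieval chunk-contextualization step (item 2).
    # llm_complete() and embed_and_index() are hypothetical placeholders for an
    # LLM completion call and a vector-store indexing routine.

    def contextualize_chunk(document: str, chunk: str, llm_complete) -> str:
        """Ask an LLM to situate `chunk` within `document`, then prepend that context."""
        prompt = (
            "<document>\n" + document + "\n</document>\n"
            "Here is a chunk from the document above:\n"
            "<chunk>\n" + chunk + "\n</chunk>\n"
            "Write one short sentence situating this chunk within the overall document, "
            "naming e.g. the specific company and time period it refers to."
        )
        context = llm_complete(prompt)        # e.g. "This concerns ACME Corp's Q2 2023 filing."
        return context.strip() + " " + chunk  # the contextualized chunk is what gets indexed

    def build_index(document: str, chunks: list[str], llm_complete, embed_and_index) -> None:
        """Contextualize every chunk of `document` and hand it to the indexer."""
        for chunk in chunks:
            embed_and_index(contextualize_chunk(document, chunk, llm_complete))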
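
Item 7's long retrieval units can be approximated by greedily packing passages up to a token budget, as in the sketch below. This is a naive illustration only: count_tokens is a hypothetical tokenizer hook, and LongRAG's actual grouping follows document and hyperlink structure rather than simple greedy packing.

    # Naive sketch of building ~4K-token retrieval units (item 7, LongRAG).
    # count_tokens() is a hypothetical tokenizer hook; the paper groups related
    # documents by structure/hyperlinks rather than this greedy packing.

    def pack_into_units(passages, count_tokens, budget: int = 4096) -> list[str]:
        """Greedily concatenate passages into units of at most `budget` tokens each."""
        units: list[str] = []
        current: list[str] = []
        current_len = 0
        for passage in passages:
            n = count_tokens(passage)
            if current and current_len + n > budget:
                units.append("\n".join(current))
                current, current_len = [], 0
            current.append(passage)
            current_len += n
        if current:
            units.append("\n".join(current))
        return units

    # The top-k retrieved units (≈30K tokens in total) are then concatenated into a
    # single prompt for a long-context reader model, which extracts the answer zero-shot.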
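
For item 9, the sketch below gives a minimal NumPy implementation of standard RoPE, where each pair of query/key dimensions is rotated by an angle that grows with token position. Long RoPE's contribution, a non-uniform rescaling of these rotation frequencies to reach much longer contexts, is not shown; this is only the baseline mechanism the article builds on.

    # Minimal sketch of standard RoPE (item 9), using the split-half pairing
    # convention: dimension i is paired with dimension i + dim/2.
    import numpy as np

    def rope(x: np.ndarray, base: float = 10000.0) -> np.ndarray:
        """Apply rotary positional embedding to x of shape (seq_len, dim), dim even."""
        seq_len, dim = x.shape
        half = dim // 2
        freqs = base ** (-np.arange(half) / half)      # theta_i = base^(-2i/dim)
        angles = np.outer(np.arange(seq_len), freqs)   # (seq_len, half), angle = pos * theta_i
        cos, sin = np.cos(angles), np.sin(angles)
        x1, x2 = x[:, :half], x[:, half:]
        # 2-D rotation of each (x1_i, x2_i) pair by its position-dependent angle.
        return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

    # Because relative angles between positions are preserved, attention scores between
    # rotated queries and keys depend on relative position; Long RoPE stretches the
    # per-dimension frequencies non-uniformly to extend the usable context window.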


About - Propulsed by SemanticScuttle