klotz: llm* + rag*


  1. This article explains how to use Large Language Models (LLMs) to perform document chunking, dividing a document into blocks of text that each express a unified concept or 'idea', to create a knowledge base with independent elements.
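A minimal sketch of the idea-level chunking approach described in the entry above, assuming a generic call_llm() helper (hypothetical, not from the article) that returns the model's reply as text:

```python
# Minimal sketch of LLM-driven "idea" chunking (illustrative only; the
# call_llm() helper is hypothetical and stands in for any chat-completion API).
import json

CHUNK_PROMPT = """Split the document below into self-contained chunks, each
expressing a single idea. Return a JSON list of strings and nothing else.

Document:
{document}"""

def call_llm(prompt: str) -> str:
    """Hypothetical helper: send the prompt to an LLM and return its reply."""
    raise NotImplementedError("wire this to your preferred LLM client")

def chunk_document(document: str) -> list[str]:
    reply = call_llm(CHUNK_PROMPT.format(document=document))
    chunks = json.loads(reply)          # expect a JSON array of chunk strings
    return [c.strip() for c in chunks if c.strip()]
```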
  2. The article explores how smaller language models like the Meta 1 Billion model can be used for efficient summarization and indexing of large documents, improving the performance and scalability of Retrieval-Augmented Generation (RAG) systems.
    2024-10-19 by klotz
  3. The article discusses the misconception that integrating complex graph databases (DBs), query languages (QLs), and analytics tools is necessary for Graph RAG. It emphasizes the distinction between traditional graph use cases and generative AI applications, and argues for a simpler tech stack.
    2024-10-19 by klotz
  4. Discussion in r/LocalLLaMA about finding a self-hosted, local RAG (Retrieval Augmented Generation) solution for large language models, allowing users to experiment with different prompts, models, and retrieval rankings. Various tools and resources are suggested, such as Open-WebUI, kotaemon, and tldw.
    2024-10-13 by klotz
  5. This article discusses the importance of determining user query intent to enhance search results. It covers how to identify search and answer intents, implement intent detection using language models, and adjust retrieval strategies accordingly.
    2024-10-13 by klotz
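A rough sketch of intent-aware retrieval along the lines of the entry above; classify_intent() and retrieve() are hypothetical placeholders, and the question-mark heuristic merely stands in for a real language-model classifier:

```python
# Sketch of intent-aware retrieval. classify_intent() and retrieve() are
# placeholders; swap the heuristic for an LLM or trained classifier.
def classify_intent(query: str) -> str:
    """Return 'answer' (user wants a direct answer) or 'search'
    (user wants a list of documents). Placeholder heuristic."""
    return "answer" if query.rstrip().endswith("?") else "search"

def retrieve(query: str, top_k: int) -> list[str]:
    """Hypothetical retriever returning the top_k passages."""
    raise NotImplementedError

def handle_query(query: str) -> list[str]:
    if classify_intent(query) == "answer":
        # Direct questions: fetch a few highly relevant passages for the LLM.
        return retrieve(query, top_k=3)
    # Exploratory searches: return a broader result list to the user.
    return retrieve(query, top_k=20)
```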
  6. Researchers from Cornell University developed a technique called 'contextual document embeddings' to improve the performance of Retrieval-Augmented Generation (RAG) systems, enhancing the retrieval of relevant documents by making embedding models more context-aware.

    Standard methods like bi-encoders often fail to account for context-specific details, leading to poor performance in application-specific datasets. Contextual document embeddings address this by enhancing the sensitivity of the embedding model to subtle differences in documents, particularly in specialized domains.

    The researchers proposed two complementary methods to improve bi-encoders:

    - Modifying the training process using contrastive learning to distinguish between similar documents.
    - Modifying the bi-encoder architecture to incorporate corpus context during the embedding process.

    These modifications allow the model to capture both the general context and specific details of documents, leading to better performance, especially in out-of-domain scenarios. The new technique has shown consistent improvements over standard bi-encoders and can be adapted for various applications beyond text-based models.
    2024-10-10 by klotz
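The first bullet in the entry above can be illustrated with a generic in-batch-negative contrastive loss; this is a standard InfoNCE-style sketch in PyTorch, not the Cornell authors' exact training recipe:

```python
# In-batch negatives push a query embedding toward its own document and away
# from the other (similar) documents in the batch. Generic InfoNCE sketch.
import torch
import torch.nn.functional as F

def contrastive_loss(query_emb: torch.Tensor,
                     doc_emb: torch.Tensor,
                     temperature: float = 0.05) -> torch.Tensor:
    # query_emb, doc_emb: (batch, dim); row i of each is a matching pair.
    q = F.normalize(query_emb, dim=-1)
    d = F.normalize(doc_emb, dim=-1)
    logits = q @ d.T / temperature           # cosine similarities of all pairs
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, targets)  # correct doc = diagonal entry
```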
  7. This article discusses the importance of real-time access for Retrieval Augmented Generation (RAG) and how Redis can enable this through its real-time vector database, semantic cache, and LLM memory capabilities, leading to faster and more accurate responses in GenAI applications.
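The semantic-cache idea mentioned above, sketched in plain Python rather than Redis's API; embed() is a hypothetical stand-in for any embedding model, and the 0.9 similarity threshold is an arbitrary example:

```python
# Conceptual semantic cache: reuse a cached answer when a new query is close
# enough in embedding space, avoiding a fresh LLM call.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical embedding function returning a 1-D vector."""
    raise NotImplementedError

class SemanticCache:
    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold
        self.entries: list[tuple[np.ndarray, str]] = []  # (query_vec, answer)

    def get(self, query: str):
        v = embed(query)
        for vec, answer in self.entries:
            sim = float(vec @ v / (np.linalg.norm(vec) * np.linalg.norm(v)))
            if sim >= self.threshold:
                return answer            # cache hit: skip the LLM call
        return None

    def put(self, query: str, answer: str) -> None:
        self.entries.append((embed(query), answer))
```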
  8. Dr. Leon Eversberg explains how to improve the retrieval step in RAG pipelines with the HyDE (Hypothetical Document Embeddings) technique, which helps LLMs access external knowledge stored in documents more effectively.
    2024-10-05 by klotz
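A compact sketch of the HyDE pattern described above: generate a hypothetical answer passage, embed it, and retrieve real documents near that embedding. call_llm(), embed(), and vector_search() are placeholders, not APIs from the article:

```python
# HyDE sketch: embed an LLM-generated hypothetical answer instead of the raw
# query, then retrieve real documents close to that embedding.
def call_llm(prompt: str) -> str:
    raise NotImplementedError

def embed(text: str):
    raise NotImplementedError

def vector_search(vector, top_k: int) -> list[str]:
    raise NotImplementedError

def hyde_retrieve(query: str, top_k: int = 5) -> list[str]:
    # 1. Ask the LLM to write a plausible (possibly imperfect) answer passage.
    hypothetical_doc = call_llm(
        f"Write a short passage that answers the question:\n{query}"
    )
    # 2. Embed the hypothetical passage rather than the short query.
    vec = embed(hypothetical_doc)
    # 3. Retrieve real documents whose embeddings are closest to it.
    return vector_search(vec, top_k=top_k)
```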
  9. An overview of the foundational concepts, the practical implementation of semantic search, and the RAG workflow, highlighting its advantages and versatile applications.

    The article provides a step-by-step guide to implementing a basic semantic search using TF-IDF and cosine similarity. This includes preprocessing steps, converting text to embeddings, and searching for relevant documents based on query similarity.
    2024-10-04 by klotz
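A runnable sketch of the TF-IDF plus cosine-similarity search the entry above walks through, using scikit-learn; the corpus and query are illustrative only:

```python
# Basic semantic search with TF-IDF vectors and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Retrieval-Augmented Generation combines search with text generation.",
    "TF-IDF weights terms by frequency and rarity across the corpus.",
    "Cosine similarity measures the angle between two vectors.",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(corpus)          # (n_docs, n_terms)

def search(query: str, top_k: int = 2) -> list[tuple[float, str]]:
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_vectors)[0]
    ranked = scores.argsort()[::-1][:top_k]
    return [(float(scores[i]), corpus[i]) for i in ranked]

print(search("how does tf-idf ranking work?"))
```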
  10. This page provides documentation for the rerank API, including endpoints, request parameters, and response formats.
    2024-09-28 by klotz
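For illustration, a rerank request typically looks something like the following; the endpoint, field names, and response shape here are hypothetical placeholders, not the documented API from this bookmark:

```python
# Hypothetical rerank request for illustration only; endpoint, parameter
# names, and auth scheme are placeholders, not the documented API.
import requests

response = requests.post(
    "https://api.example.com/v1/rerank",          # placeholder endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "query": "how do I tune retrieval for RAG?",
        "documents": [
            "Chunk documents by idea before indexing.",
            "Use a reranker on the retriever's top candidates.",
        ],
        "top_n": 1,                               # assumed parameter name
    },
    timeout=30,
)
print(response.json())                            # assumed JSON response body
```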
