Tags: rag*


  1. Articles on Large Language Models, including RAG, Jupyter integration, complexity and pricing, and more.

  2. Ryan speaks with Edo Liberty, Founder and CEO of Pinecone, about building vector databases, the power of embeddings, the evolution of RAG, and fine-tuning AI models.
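The core idea behind a vector database is nearest-neighbor search over embeddings. As a minimal illustration (not Pinecone's API; toy 3-dimensional vectors stand in for real embeddings with thousands of dimensions), a linear scan with cosine similarity captures the retrieval step:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest(query, index):
    """Return the stored id whose embedding is most similar to the query."""
    return max(index, key=lambda doc_id: cosine_similarity(query, index[doc_id]))

# Toy "embeddings"; a production vector database replaces this linear scan
# with approximate nearest-neighbor indexes to stay fast at scale.
index = {
    "doc-cats": [0.9, 0.1, 0.0],
    "doc-dogs": [0.8, 0.3, 0.1],
    "doc-tax":  [0.0, 0.2, 0.9],
}
print(nearest([0.8, 0.3, 0.1], index))
```

The linear scan is O(n) per query; the interview's point about purpose-built vector databases is precisely that they avoid this with specialized index structures.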

  3. This article details how to automate embedding generation and updates in Postgres using Supabase Vector, Queues, Cron, and the pg_net extension with Edge Functions, addressing the drift, latency, and complexity of traditional external embedding pipelines.
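The pipeline's shape, stripped of the Postgres specifics, is a queue of changed rows drained by a scheduled worker. A minimal Python simulation (the `embed` function is a stand-in for a real embeddings API call; in the article this role is played by Postgres queues, pg_cron, and Edge Functions):

```python
from collections import deque

def embed(text):
    # Stand-in for a real embedding model call; hashes text into a fake vector.
    h = hash(text) & 0xFFFF
    return [(h >> shift) & 0xFF for shift in (0, 8)]

documents = {1: "postgres vectors", 2: "cron scheduling"}
embeddings = {}           # doc_id -> vector; normally a pgvector column
queue = deque(documents)  # rows enqueued by an insert/update trigger

def drain(queue):
    """Worker loop: a cron-triggered function would process this batch by batch."""
    while queue:
        doc_id = queue.popleft()
        embeddings[doc_id] = embed(documents[doc_id])

drain(queue)
print(sorted(embeddings))  # every changed row now has a fresh embedding
```

Because the queue is fed by database triggers, embeddings cannot silently drift out of sync with the source rows, which is the failure mode of external pipelines the article targets.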

  4. This repository organizes public content to train an LLM to answer questions and generate summaries in an author's voice, focusing on the content of 'virtual_adrianco' but designed to be extensible to other authors.

  5. This article introduces the pyramid search approach using Agentic Knowledge Distillation to address the limitations of traditional RAG strategies in document ingestion.

    The pyramid structure allows for multi-level retrieval, including atomic insights, concepts, abstracts, and recollections. This structure mimics a knowledge graph but uses natural language, making it more efficient for LLMs to interact with.

    Knowledge Distillation Process:

    • Conversion to Markdown: Documents are converted to Markdown for better token efficiency and processing.
    • Atomic Insights Extraction: Each page is processed using a two-page sliding window to generate a list of insights in simple sentences.
    • Concept Distillation: Higher-level concepts are identified from the insights to reduce noise and preserve essential information.
    • Abstract Creation: An LLM writes a comprehensive abstract for each document, capturing dense information efficiently.
    • Recollections/Memories: Critical information useful across all tasks is stored at the top of the pyramid.
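The bottom three pyramid levels can be sketched as a bottom-up pipeline. This is a structural sketch only, with a stub standing in for the LLM calls; the real system prompts a model at each step:

```python
def llm(instruction, text):
    # Stub for an LLM call; each real step uses a task-specific prompt.
    return f"[{instruction}] {text[:40]}"

def distill(pages):
    """Build the pyramid bottom-up: insights -> concepts -> abstract."""
    # Atomic insights: a two-page sliding window over the Markdown pages.
    insights = [
        llm("extract insights", pages[i] + " " + pages[i + 1])
        for i in range(len(pages) - 1)
    ]
    # Concept distillation: compress the insights into higher-level concepts.
    concepts = llm("distill concepts", " | ".join(insights))
    # Abstract: one dense summary per document, the next pyramid level up.
    abstract = llm("write abstract", concepts)
    return {"insights": insights, "concepts": concepts, "abstract": abstract}

pyramid = distill(["page one text", "page two text", "page three text"])
print(len(pyramid["insights"]))  # one insight batch per window position
```

At query time, retrieval can then descend the pyramid: match against abstracts first, then drill into concepts and atomic insights only where needed.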
  6. A 100-line minimalist LLM framework for Agents, Task Decomposition, RAG, etc. It models an LLM workflow as a graph plus a shared store: nodes handle simple tasks, actions connect them to enable agent behavior, and flows orchestrate nodes for task decomposition.
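The graph-plus-shared-store idea can be condensed even further. The sketch below is not the framework's actual API, just a minimal illustration: each node reads and writes one shared dict and returns an action string that selects the next edge:

```python
class Node:
    """One simple task against the shared store; returns an action name
    that selects the outgoing edge in the graph."""
    def __init__(self, fn):
        self.fn = fn
        self.edges = {}  # action -> next Node

    def then(self, action, node):
        self.edges[action] = node
        return node

def run_flow(start, shared):
    """Walk the graph from `start` until no edge matches the returned action."""
    node = start
    while node is not None:
        action = node.fn(shared)
        node = node.edges.get(action)
    return shared

# Two-node RAG-style flow: retrieve context, then answer from it.
retrieve = Node(lambda s: s.update(context="docs about " + s["question"]) or "ok")
answer = Node(lambda s: s.update(answer="based on " + s["context"]) or "done")
retrieve.then("ok", answer)

result = run_flow(retrieve, {"question": "rag"})
print(result["answer"])
```

Branching agents fall out of the same mechanism: a node returns different action strings depending on what it finds in the shared store, and the graph routes accordingly.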

  7. Minimalist LLM Framework in 100 Lines. Enable LLMs to Program Themselves.

    2025-03-04 by klotz
  8. A guide on implementing prompt engineering patterns to make RAG implementations more effective and efficient, covering patterns like Direct Retrieval, Chain of Thought, Context Enrichment, Instruction-Tuning, and more.

    2025-02-27 by klotz
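Two of the patterns named above compose naturally in a single prompt template. A hedged sketch (the wording is illustrative, not taken from the guide): Context Enrichment prepends the retrieved passages, and a Chain of Thought instruction asks the model to reason before answering:

```python
def build_prompt(question, chunks):
    """Context Enrichment: prepend retrieved passages, then ask the model
    to reason step by step (Chain of Thought) before answering."""
    context = "\n".join(f"- {c}" for c in chunks)
    return (
        "Use only the context below to answer.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\n"
        "Think step by step, then give a final answer."
    )

prompt = build_prompt(
    "When was pgvector added?",
    ["pgvector ships as a Postgres extension.", "It stores embeddings."],
)
print(prompt)
```

The "use only the context" constraint is what distinguishes this from Direct Retrieval: it steers the model away from answering from parametric memory when the retrieved chunks do not contain the answer.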
  9. The article explains six essential strategies for customizing Large Language Models (LLMs) to better meet specific business needs or domain requirements. These strategies include Prompt Engineering, Decoding and Sampling Strategy, Retrieval Augmented Generation (RAG), Agent, Fine-Tuning, and Reinforcement Learning from Human Feedback (RLHF). Each strategy is described with its benefits, limitations, and implementation approaches to align LLMs with specific objectives.

    2025-02-25 by klotz
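Of the six strategies, Decoding and Sampling Strategy is the most self-contained to illustrate. A minimal temperature-sampling sketch over toy logits (not any particular library's API) shows the knob the article refers to:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Lower temperature sharpens the distribution toward the top token;
    higher temperature flattens it toward uniform."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cumulative = 0.0
    for token, p in enumerate(probs):
        cumulative += p
        if r <= cumulative:
            return token
    return len(probs) - 1

rng = random.Random(0)
logits = [2.0, 1.0, 0.1]
# At a very low temperature the argmax token dominates almost surely.
picks = [sample_with_temperature(logits, 0.1, rng) for _ in range(100)]
print(picks.count(0))
```

Unlike RAG or fine-tuning, this strategy changes no weights and adds no context; it only reshapes the output distribution, which is why the article lists it as the cheapest lever to pull.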
  10. This article explores the use of Google's NotebookLM (NLM) as a tool for research, particularly in analyzing the impact of the Aswan High Dam on schistosomiasis in Egypt. The author details how NLM can be used to create a research assistant-like experience, allowing users to 'have a conversation' with uploaded content to gain insights and answers from the material.


SemanticScuttle - klotz.me: tagged with "rag"
