klotz: rag* + llm*

Bookmarks on this page are managed by an admin user.

0 bookmark(s) - Sort by: Date ↓ / Title / - Bookmarks from other users for this tag

  1. Case study on measuring context relevance in retrieval-augmented generation systems using Ragas, TruLens, and DeepEval. Develop practical strategies to evaluate the accuracy and relevance of generated context.
  2. Simon Willison explains an accidental prompt injection attack on RAG applications, caused by concatenating user questions with documentation fragments in a Retrieval Augmented Generation (RAG) system.
    2024-06-06 Tags: , , , by klotz
  3. The technology of retrieval-augmented generation, or RAG, could be pivotal in shaping the battle between large language models.
  4. This article discusses the integration of Large Language Models (LLMs) into Vespa, a full-featured search engine and vector database. It explores the benefits of using LLMs for Retrieval-augmented Generation (RAG), demonstrating how Vespa can efficiently retrieve the most relevant data and enrich responses with up-to-date information.
  5. This article discusses GNN-RAG, a new AI method that combines the language understanding abilities of LLMs with the reasoning abilities of GNNs for Retrieval-Augmented Generation (RAG) style. This approach improves KGQA performance by utilizing GNNs for retrieval and RAG for reasoning.
  6. An article discussing a paper that proposes a new framework, MetRag, for retrieval augmented generation. The framework is designed to improve the performance of large language models in knowledge-intensive tasks.
  7. In this tutorial, we will build a RAG system with a self-querying retriever in the LangChain framework. This will enable us to filter the retrieved movies using metadata, thus providing more meaningful movie recommendations.
  8. This article discusses Retrieval-Augmented Generation (RAG) models, a new approach that addresses the limitations of traditional models in knowledge-intensive Natural Language Processing (NLP) tasks. RAG models combine parametric memory from pre-trained seq2seq models with non-parametric memory from a dense vector index of Wikipedia, enabling dynamic knowledge access and integration.
  9. Verba is an open-source application designed to offer an end-to-end, streamlined, and user-friendly interface for Retrieval-Augmented Generation (RAG) out of the box. It supports various RAG techniques, data types, LLM providers, and offers Docker support and a fully-customizable frontend.
  10. This is a local LLM chatbot project with RAG for processing PDF input files
    2024-05-17 Tags: , , , , by klotz

Top of the page

First / Previous / Next / Last / Page 1 of 0 SemanticScuttle - klotz.me: Tags: rag + llm

About - Propulsed by SemanticScuttle