Tags: large language models + natural language processing


  1. The article provides a comprehensive introduction to large language models (LLMs), explaining their purpose, how they function, and their applications. It covers various types of LLMs, including general-purpose and task-specific models, and discusses the distinction between closed-source and open-source LLMs. The article also explores the ethical considerations of building and using LLMs and the future possibilities for these models.
  2. This paper presents a detailed vocabulary of 33 terms and a taxonomy of 58 LLM prompting techniques, along with guidelines for prompt engineering and a meta-analysis of natural language prefix-prompting, serving as the most comprehensive survey on prompt engineering to date.
  3. An article discussing the use of embeddings in natural language processing, focusing on comparing open source and closed source embedding models for semantic search, including techniques like clustering and re-ranking.
  4. This blog post explores using the original ELIZA chatbot, a pioneering natural language processing program, in the manner of a modern large language model (LLM): carrying on an educational conversation about George Orwell's 'Animal Farm'.
  5. This article discusses Re2, a prompting technique that enhances reasoning in Large Language Models (LLMs) by presenting the input question twice so the model re-reads it. This improves understanding and reasoning, leading to better performance on various benchmarks.
  6. This repository showcases various advanced techniques for Retrieval-Augmented Generation (RAG) systems. RAG systems combine information retrieval with generative models to provide accurate and contextually rich responses.
  7. This article provides a step-by-step guide on fine-tuning the Llama 3 language model for customer service use cases. It covers the process of data preparation, fine-tuning techniques, and the benefits of leveraging LLMs in customer service applications.
  8. A method that uses instruction tuning to adapt LLMs for knowledge-intensive tasks. RankRAG simultaneously trains the models for context ranking and answer generation, enhancing their retrieval-augmented generation (RAG) capabilities.
  9. NVIDIA and Georgia Tech researchers introduce RankRAG, a novel framework that instruction-tunes a single LLM for both top-k context ranking and answer generation. Aimed at improving RAG systems, it enhances context relevance assessment and answer generation.
  10. This guide explains how to build and use knowledge graphs with R2R. It covers setup, a basic example, graph construction, navigation, querying, visualization, and advanced examples.
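As a quick illustration of the Re2 (re-reading) technique described in item 5, the template simply states the question, repeats it, and then prompts for reasoning. A minimal sketch; `build_re2_prompt` is a hypothetical helper name, and the wording follows the paper's "Read the question again" pattern:

```python
def build_re2_prompt(question: str) -> str:
    """Construct a Re2 prompt: state the question, repeat it so the
    model re-reads it, then ask for step-by-step reasoning."""
    return (
        f"Q: {question}\n"
        f"Read the question again: {question}\n"
        "A: Let's think step by step."
    )

# The resulting string would be sent to any LLM as the user prompt.
prompt = build_re2_prompt("What is 2 + 2?")
```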
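The semantic search discussed in item 3 reduces to comparing embedding vectors, whatever model produced them. Below is a minimal sketch of cosine-similarity ranking over placeholder vectors; in practice the vectors would come from an open- or closed-source embedding model, and `rank_by_similarity` is a hypothetical helper name:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_by_similarity(query_vec: list[float],
                       doc_vecs: list[list[float]]) -> list[tuple[int, float]]:
    """Return (doc_index, score) pairs sorted best-first."""
    scores = [(i, cosine(query_vec, v)) for i, v in enumerate(doc_vecs)]
    return sorted(scores, key=lambda s: s[1], reverse=True)
```

Re-ranking, as mentioned in the article, would then apply a second, more expensive scorer to only the top few results of this cheap first pass.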


SemanticScuttle - klotz.me: tagged with "large language models+natural language processing"
