Tags: text* + llm*

10 bookmark(s)

  1. This repository provides tutorials and implementations for various Generative AI Agent techniques, from basic to advanced. It serves as a comprehensive guide for building intelligent, interactive AI systems.
  2. This article details research into finding the optimal architecture for small language models (70M parameters), exploring depth-width tradeoffs, comparing different architectures, and introducing Dhara-70M, a diffusion model offering 3.8x faster throughput with improved factuality.
  3. This article explores different chunking strategies for Retrieval-Augmented Generation (RAG) systems, comparing nine approaches using the agenticmemory library to improve retrieval accuracy and reduce hallucinations (a chunking sketch follows this list).
  4. A recent study shows that one large language model (LLM) demonstrates impressive linguistic analysis abilities, rivaling those of human linguistics graduate students. Researchers tested LLMs on complex linguistic tasks, including recursion and phonological rule inference, revealing that OpenAI’s o1 model performed significantly better than others, challenging conventional views on the limits of AI in understanding language.
  5. Python tutorial for reproducible labeling of cutting-edge topic models with GPT-4o mini. The article details training a FASTopic model and labeling its results using GPT-4o mini, emphasizing reproducibility and control over the labeling process (a labeling sketch follows this list).
  6. Minuet: Dance with LLM in Your Code
  7. A post with pithy observations and clear conclusions from building complex LLM workflows, covering topics like prompt chaining, data structuring, model limitations, and fine-tuning strategies.
  8. This article details the often-overlooked cost of storing embeddings for RAG systems and how quantization techniques (int8 and binary) can significantly reduce storage requirements and improve retrieval speed without substantial accuracy loss (a quantization sketch follows this list).
  9. This Space demonstrates a simple method for embedding text using an LLM (Large Language Model) via the Hugging Face Inference API. It showcases how to convert text into numerical vector representations, useful for semantic search and similarity comparisons (an embedding sketch follows this list).
  10. This study demonstrates that neural activity in the human brain aligns linearly with the internal contextual embeddings of speech and language within large language models (LLMs) as they process everyday conversations.
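
For item 3, the nine strategies the article benchmarks are not reproduced here; as a rough illustration of what a chunking comparison looks like, the sketch below contrasts two common approaches (fixed-size windows with overlap vs. sentence-aware packing) in plain Python. It does not use the agenticmemory library, and the chunk sizes are arbitrary placeholders.

```python
# Illustrative comparison of two common RAG chunking strategies; not the
# agenticmemory library the article benchmarks, and sizes are arbitrary.
import re

def fixed_size_chunks(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into character windows of `size` with `overlap` between them."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def sentence_chunks(text: str, max_chars: int = 500) -> list[str]:
    """Greedily pack whole sentences into chunks of at most `max_chars`."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    chunks, current = [], ""
    for s in sentences:
        if current and len(current) + len(s) + 1 > max_chars:
            chunks.append(current)
            current = s
        else:
            current = f"{current} {s}".strip()
    if current:
        chunks.append(current)
    return chunks

doc = "RAG systems retrieve chunks of source text. Chunk boundaries matter. " * 20
print(len(fixed_size_chunks(doc)), "fixed-size chunks,",
      len(sentence_chunks(doc)), "sentence-aware chunks")
```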
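
For item 5, a minimal sketch of the labeling step only, assuming an OpenAI client with OPENAI_API_KEY set in the environment. The hand-made top-word lists stand in for real FASTopic output, and temperature=0 with a fixed seed is one way to keep labels as reproducible as the API allows; the article's exact prompt and workflow may differ.

```python
# Hedged sketch: label topic top-words with GPT-4o mini, deterministically.
# The `topics` dict is a placeholder for FASTopic output, not real model results.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

topics = {
    0: ["embedding", "vector", "retrieval", "index", "similarity"],
    1: ["token", "transformer", "attention", "layer", "parameter"],
}

def label_topic(top_words: list[str]) -> str:
    prompt = ("Give a concise 2-4 word label for a topic described by these words: "
              + ", ".join(top_words))
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,   # reduce sampling variability
        seed=42,         # best-effort reproducibility
    )
    return resp.choices[0].message.content.strip()

for topic_id, words in topics.items():
    print(topic_id, label_topic(words))
```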
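
For item 8, a back-of-the-envelope sketch of int8 and binary quantization of stored embeddings using NumPy; the random vectors and per-dimension scaling are illustrative assumptions, not the article's exact calibration scheme.

```python
# Storage cost of float32 vs. int8 vs. binary embeddings for 10k x 1024-dim vectors.
import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(size=(10_000, 1024)).astype(np.float32)  # stand-in for real embeddings

# int8: scale each dimension into [-127, 127] by its max magnitude.
scale = np.abs(emb).max(axis=0) / 127.0
emb_int8 = np.round(emb / scale).astype(np.int8)

# binary: keep only the sign of each dimension, packed 8 dims per byte.
emb_bin = np.packbits((emb > 0).astype(np.uint8), axis=1)

for name, arr in [("float32", emb), ("int8", emb_int8), ("binary", emb_bin)]:
    print(f"{name:8s} {arr.nbytes / 1e6:8.1f} MB")
# float32 ~41.0 MB, int8 ~10.2 MB (4x smaller), binary ~1.3 MB (32x smaller)
```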
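
For item 9, a hedged sketch of embedding text through the Hugging Face Inference API with huggingface_hub's InferenceClient; the model name is an assumption rather than necessarily the one the Space uses, and a valid Hugging Face token (e.g. HF_TOKEN in the environment) is assumed to be configured.

```python
# Sketch: text -> embedding vector via the Hugging Face Inference API,
# then a cosine-similarity comparison. Model choice is an assumption.
import numpy as np
from huggingface_hub import InferenceClient

client = InferenceClient(model="sentence-transformers/all-MiniLM-L6-v2")

def embed(text: str) -> np.ndarray:
    vec = np.asarray(client.feature_extraction(text))
    # Mean-pool token vectors if the endpoint returns them unpooled.
    return vec.mean(axis=0) if vec.ndim > 1 else vec

a = embed("How do I reset my password?")
b = embed("password reset steps")
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print("cosine similarity:", round(cosine, 3))
```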


