A practical pipeline for classifying messy free-text data into meaningful categories using a locally hosted LLM, no labeled training data required.
Learn how to label text without task-specific training data using zero-shot text classification. This guide explains how pretrained transformer models fine-tuned for natural language inference, such as BART trained on MNLI, reframe classification as an entailment task: each candidate label is turned into a natural-language hypothesis that the model checks against the input text.
Key topics include:
* The core concept of zero-shot classification and its advantages for rapid prototyping.
* Using the Hugging Face transformers pipeline with the facebook/bart-large-mnli model.
* Implementing multi-label classification for texts belonging to multiple categories.
* Improving accuracy through custom hypothesis template tuning and clear label wording.
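The points above can be sketched in a few lines. The helper below mimics what the pipeline's `hypothesis_template` does internally (it fills each candidate label into the template to produce an NLI hypothesis); the commented-out section shows the equivalent Hugging Face call, which requires downloading the model. The template wording and labels here are illustrative, not from the guide.

```python
def build_hypotheses(labels, template="This example is about {}."):
    """Turn each candidate label into a natural-language hypothesis,
    mirroring how the zero-shot pipeline's hypothesis_template works."""
    return [template.format(label) for label in labels]

labels = ["billing", "shipping", "product quality"]
print(build_hypotheses(labels))
# Each (text, hypothesis) pair is scored for entailment by the NLI model;
# with multi_label=True every label receives an independent probability.

# Equivalent pipeline usage (requires `pip install transformers torch`):
# from transformers import pipeline
# clf = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
# clf("My package arrived two weeks late.", candidate_labels=labels,
#     multi_label=True, hypothesis_template="This example is about {}.")
```

Rewording the template (e.g. "This customer complaint concerns {}.") is often the cheapest way to boost accuracy, since the NLI model sees the hypothesis as ordinary text.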
This is an open, unconventional textbook covering mathematics, computing, and artificial intelligence from foundational principles. It's designed for practitioners seeking a deep understanding, moving beyond exam preparation and focusing on real-world application. The author, drawing from years of experience in AI/ML, has compiled notes that prioritize intuition, context, and clear explanations, avoiding dense notation and outdated material.
The compendium covers a broad range of topics, from vectors and matrices to machine learning, computer vision, and multimodal learning, with future chapters planned for areas like data structures and AI inference.
A curated reading list for those starting to learn about Large Language Models (LLMs), covering foundational concepts, practical applications, and future trends, updated for 2026.
This repository provides tutorials and implementations for various Generative AI Agent techniques, from basic to advanced. It serves as a comprehensive guide for building intelligent, interactive AI systems.
This article details research into finding the optimal architecture for small language models (70M parameters), exploring depth-width tradeoffs, comparing different architectures, and introducing Dhara-70M, a diffusion model offering 3.8x higher throughput with improved factuality.
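The depth-width tradeoff the article explores can be made concrete with a standard back-of-envelope parameter count (roughly 4d² for attention projections plus 8d² for a 4x-wide MLP per layer). This formula and the layer/width values below are illustrative assumptions, not figures from the Dhara-70M paper.

```python
def approx_transformer_params(n_layers: int, d_model: int) -> int:
    """Rough non-embedding parameter count for a standard transformer:
    ~4*d^2 (attention Q/K/V/O) + ~8*d^2 (4x-wide MLP) per layer."""
    return 12 * n_layers * d_model * d_model

# Two ways to spend a ~70M non-embedding budget:
deep_narrow = approx_transformer_params(n_layers=24, d_model=496)   # deep & narrow
shallow_wide = approx_transformer_params(n_layers=6, d_model=992)   # shallow & wide
print(deep_narrow, shallow_wide)  # both ~70.9M
```

Doubling the width quadruples per-layer cost, so a 2x-wider model needs 4x fewer layers to hit the same budget, which is exactly the axis such studies sweep.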
This article explores different chunking strategies for Retrieval-Augmented Generation (RAG) systems, comparing nine approaches using the agenticmemory library to improve retrieval accuracy and reduce hallucinations.
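The article's nine strategies and the agenticmemory library are not reproduced here, but the baseline most comparisons start from is fixed-size chunking with overlap, sketched below; the sizes are illustrative.

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Fixed-size chunking with overlap, a common RAG baseline: consecutive
    chunks share `overlap` characters, so a fact that straddles a chunk
    boundary still appears intact in at least one chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - overlap, 1), step)]

doc = "RAG systems retrieve supporting passages before generating an answer. " * 10
chunks = chunk_text(doc, chunk_size=120, overlap=30)
```

Smarter strategies (sentence-aware, semantic, or agentic chunking) mainly try to move the split points to natural boundaries so each chunk stays self-contained for the retriever.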
This page details the command-line utility for the Embedding Atlas, a tool for exploring large text datasets with metadata. It covers installation, data loading (local and Hugging Face), visualization of embeddings using SentenceTransformers and UMAP, and usage instructions with available options.
Python tutorial for reproducible labeling of cutting-edge topic models with GPT-4o-mini. The article details training a FASTopic model and labeling its results with GPT-4o-mini, emphasizing reproducibility and control over the labeling process.
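One lever for reproducible labeling is making the prompt itself deterministic. The helper and prompt wording below are hypothetical (not taken from the tutorial), and the commented-out OpenAI call is a sketch of how such a prompt would typically be sent with deterministic settings.

```python
def build_label_prompt(topic_words: list[str], n_words: int = 10) -> str:
    """Build a deterministic labeling prompt from a topic's top words.
    Fixing the word order and count (and calling the LLM with
    temperature=0 and a fixed seed) keeps labels stable across runs."""
    words = ", ".join(topic_words[:n_words])
    return ("Give a short, human-readable label (max 4 words) for a topic "
            f"whose most characteristic words are: {words}. "
            "Reply with the label only.")

prompt = build_label_prompt(["gpu", "cuda", "kernel", "throughput", "memory"])

# Hypothetical call sketch with the OpenAI client (not from the article):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o-mini", temperature=0, seed=42,
#     messages=[{"role": "user", "content": prompt}],
# )
```

Keeping the top-word extraction, prompt template, and sampling parameters fixed is what lets a rerun of the pipeline produce the same topic labels.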
Multi-class zero-shot embedding classification and error checking. This project improves zero-shot image/text classification using a novel dimensionality reduction technique and pairwise comparison, resulting in increased agreement between text and image classifications.
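The project's dimensionality-reduction and pairwise-comparison steps are not reproduced here; the sketch below shows only the baseline it builds on: assign each item the label whose embedding is nearest by cosine similarity. The toy 3-d vectors stand in for real encoder outputs (e.g. from a model like CLIP).

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def zero_shot_classify(item_vec, label_vecs: dict):
    """Baseline zero-shot embedding classification: pick the label whose
    embedding has the highest cosine similarity with the item embedding."""
    return max(label_vecs, key=lambda name: cosine(item_vec, label_vecs[name]))

# Toy 3-d embeddings; real ones come from a shared text/image encoder.
labels = {"animal": [0.9, 0.1, 0.0], "vehicle": [0.0, 0.2, 0.9]}
print(zero_shot_classify([0.8, 0.2, 0.1], labels))  # → animal
```

Error checking of the kind the project describes amounts to running this assignment from both the text side and the image side and flagging items where the two disagree.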