This article shows how to extract structured information effectively and accurately from long, unstructured text with LangExtract and LLMs. It explores Google's LangExtract framework paired with its open-weight LLM, Gemma 3, demonstrating how to parse an insurance policy to surface details such as exclusions.
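For readers unfamiliar with the library, the pattern looks roughly like the sketch below, assuming the `langextract` package exposes `lx.extract()` with `prompt_description`, `examples`, and `model_id` parameters; the prompt, the few-shot example, `policy_text`, and the `model_id` value are illustrative assumptions, not the article's exact code.

```python
# A minimal sketch of extraction with LangExtract; names below are assumptions.
import langextract as lx

prompt = "Extract policy exclusions verbatim, with a short category attribute."

# Few-shot example that shows the model the desired extraction schema.
examples = [
    lx.data.ExampleData(
        text="This policy does not cover damage caused by flooding or earth movement.",
        extractions=[
            lx.data.Extraction(
                extraction_class="exclusion",
                extraction_text="damage caused by flooding or earth movement",
                attributes={"category": "natural_disaster"},
            ),
        ],
    ),
]

# policy_text is assumed to hold the full insurance policy document.
result = lx.extract(
    text_or_documents=policy_text,
    prompt_description=prompt,
    examples=examples,
    model_id="gemma3",  # assumed identifier for a locally served Gemma 3 model
)

for extraction in result.extractions:
    print(extraction.extraction_class, "->", extraction.extraction_text)
```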
This article explores alternatives to NotebookLM, a Google assistant for synthesizing information from documents. It details NousWise, ElevenLabs, NoteGPT, Notion, Evernote, and Obsidian, outlining their key features, limitations, and considerations for choosing the right tool.
This article details how to accelerate deep learning and LLM inference using Apache Spark, focusing on distributed inference strategies. It covers basic deployment with `predict_batch_udf`, advanced deployment with inference servers like NVIDIA Triton and vLLM, and deployment on cloud platforms like Databricks and Dataproc. It also provides guidance on resource management and configuration for optimal performance.
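As a rough illustration of the basic approach, the sketch below wraps a locally loaded PyTorch model into a batched Spark UDF with `predict_batch_udf` (available in Spark 3.4+); the model path, column names, feature shape, and DataFrame are assumptions, not code from the article.

```python
# A minimal sketch of distributed inference with predict_batch_udf.
from pyspark.ml.functions import predict_batch_udf
from pyspark.sql.types import ArrayType, FloatType

def make_predict_fn():
    # Runs on each executor: load the model locally, not on the driver.
    import torch
    model = torch.jit.load("/models/classifier.pt")  # hypothetical model path
    model.eval()

    def predict(inputs):
        # `inputs` is a numpy batch of shape (batch_size, num_features).
        with torch.no_grad():
            return model(torch.from_numpy(inputs).float()).numpy()

    return predict

# Wrap the loader into a batched UDF that Spark can parallelize across executors.
classify = predict_batch_udf(
    make_predict_fn,
    return_type=ArrayType(FloatType()),
    batch_size=64,
    input_tensor_shapes=[[4]],  # assumed feature vector length
)

# `df` is assumed to have an array-typed "features" column.
scored = df.withColumn("scores", classify("features"))
```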
NVIDIA DGX Spark is a desktop-friendly AI supercomputer powered by the NVIDIA GB10 Grace Blackwell Superchip, delivering 1000 AI TOPS of performance with 128GB of memory. It is designed for prototyping, fine-tuning, and inference of large AI models.
A comprehensive guide to Large Language Models by Damien Benveniste, with chapters spanning pre-transformer language models through modern Transformer architectures to fine-tuning and deploying LLMs:
- Language Models Before Transformers
- Attention Is All You Need: The Original Transformer Architecture
- A More Modern Approach To The Transformer Architecture
- Multi-modal Large Language Models
- Transformers Beyond Language Models
- Non-Transformer Language Models
- How LLMs Generate Text
- From Words To Tokens
- Training LLMs to Follow Instructions
- Scaling Model Training
- Fine-Tuning LLMs
- Deploying LLMs