A guided series of tutorials and notebooks for building a PDF-to-podcast workflow using Llama models for text processing, transcript writing, dramatization, and text-to-speech conversion.
Simon Willison explains how to use the mistral.rs Rust library to run the Llama Vision model on an M2 Mac laptop. He walks through a detailed example and discusses memory usage and GPU utilization.
This article compares the performance of smaller language models Gemma, Llama 3, and Mistral on reading comprehension tasks. The author highlights the trend of smaller, more accessible models and discusses Apple's recent foray into the field with its own proprietary model.
This blog post benchmarks and compares SGLang, TensorRT-LLM, and vLLM for serving large language models (LLMs). SGLang demonstrates superior or competitive performance in both offline and online scenarios, often outperforming vLLM and matching or exceeding TensorRT-LLM.
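Offline serving benchmarks like these generally reduce to one number: aggregate output tokens per second. A minimal sketch of that measurement, with a stand-in `generate` function in place of a real engine call (vLLM, SGLang, or TensorRT-LLM):

```python
import time

def generate(prompt: str, max_tokens: int) -> list[str]:
    # Stand-in for a real serving-engine call; here we just
    # fabricate `max_tokens` dummy tokens.
    return ["tok"] * max_tokens

def offline_throughput(prompts: list[str], max_tokens: int = 128) -> float:
    """Return aggregate output tokens per second across all prompts."""
    start = time.perf_counter()
    total_tokens = sum(len(generate(p, max_tokens)) for p in prompts)
    elapsed = time.perf_counter() - start
    return total_tokens / elapsed

tput = offline_throughput(["Hello", "World"], max_tokens=64)
print(f"{tput:.0f} output tokens/sec")
```

Real benchmarks additionally control for batch size, prompt length, and warm-up; this sketch only shows the core timing loop.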
A discussion post on Reddit's LocalLLaMA subreddit about logging the output of running models and monitoring their performance, for debugging errors and warnings and for performance analysis. The post also asks for flags to write logs to flat files and to export GPU metrics (GPU utilization, RAM usage, Tensor Core usage, etc.) for troubleshooting and analytics.
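The flat-file-logging part of that ask can be approximated with Python's standard `logging` module; the GPU fields below are hypothetical placeholders, since real values would have to come from a tool such as NVML:

```python
import logging
import time

def setup_flat_file_logger(path: str) -> logging.Logger:
    """Configure a logger that appends timestamped lines to a flat file."""
    logger = logging.getLogger("llm_serving")
    logger.setLevel(logging.INFO)
    handler = logging.FileHandler(path)
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(message)s")
    )
    logger.addHandler(handler)
    return logger

logger = setup_flat_file_logger("model_run.log")

start = time.perf_counter()
# ... run the model here; the GPU metrics below are dummy values ...
latency_ms = (time.perf_counter() - start) * 1000
logger.info(
    "request done latency_ms=%.1f gpu_util=%s vram_mb=%s",
    latency_ms, "n/a", "n/a",
)
```

Writing one structured line per request keeps the file greppable and easy to load into an analytics tool later.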
This article provides a beginner-friendly introduction to Large Language Models (LLMs) and explains the key concepts in a clear and organized way.
- 14 free Colab notebooks providing hands-on experience in fine-tuning large language models (LLMs).
- The notebooks cover topics ranging from efficient training techniques such as LoRA (using the Hugging Face ecosystem) to specialized models such as Llama, Guanaco, and Falcon.
- They also include advanced examples like PEFT fine-tuning, Bloom-560m-tagger, and Meta_OPT-6-1b_Model.
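The LoRA technique covered in those notebooks replaces a full weight update with a low-rank product, so only a small fraction of the parameters are trained. A minimal numeric sketch of the idea in plain NumPy (not the PEFT library's actual API):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 64, 64, 4  # layer dimensions and LoRA rank (r much smaller than d, k)

W = rng.normal(size=(d, k))           # frozen pretrained weight
A = rng.normal(size=(r, k)) * 0.01    # trainable down-projection
B = np.zeros((d, r))                  # trainable up-projection, zero-initialized

def lora_forward(x: np.ndarray) -> np.ndarray:
    # Effective weight is W + B @ A; only A and B receive gradients.
    return x @ W.T + x @ (B @ A).T

x = rng.normal(size=(1, k))
# With B zero-initialized, the adapted layer starts out identical
# to the frozen pretrained layer.
assert np.allclose(lora_forward(x), x @ W.T)
```

Here the trainable parameter count is r * (d + k) = 512 versus d * k = 4096 for a full update, which is why LoRA fine-tuning fits on free Colab GPUs.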