klotz: llama*


  1. A discussion post on Reddit's LocalLLaMA subreddit about logging the output of running models and monitoring their performance, specifically for debugging errors and warnings and for performance analysis. The post also mentions the need for flags to write logs to flat files and to capture GPU metrics (GPU utilization, RAM usage, Tensor Core usage, etc.) for troubleshooting and analytics.
  2. This article provides a beginner-friendly introduction to Large Language Models (LLMs) and explains the key concepts in a clear and organized way.
    2024-05-10 by klotz
  3. - 14 free Colab notebooks providing hands-on experience in fine-tuning large language models (LLMs).
    - The notebooks cover topics ranging from efficient training methods such as LoRA with the Hugging Face ecosystem to specialized models such as Llama, Guanaco, and Falcon.
    - They also include advanced techniques like PEFT Finetune, Bloom-560m-tagger, and Meta_OPT-6–1b_Model.
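    The core idea behind the LoRA technique mentioned above can be sketched in a few lines: instead of updating a large frozen weight matrix W during fine-tuning, LoRA trains two small low-rank matrices A and B and adds their scaled product to W. A minimal NumPy sketch (shapes, values, and the rank/alpha settings here are illustrative, not tied to any of the notebooks):

    ```python
    import numpy as np

    # Sketch of the LoRA idea: a frozen weight matrix W is adapted by
    # adding a low-rank update (alpha / r) * B @ A rather than training W.
    d_out, d_in, r = 64, 64, 8           # r << d is the low-rank bottleneck
    alpha = 16                           # scaling factor for the update

    W = np.random.randn(d_out, d_in)     # frozen pretrained weights
    A = np.random.randn(r, d_in) * 0.01  # trainable down-projection
    B = np.zeros((d_out, r))             # trainable up-projection, init to 0

    W_adapted = W + (alpha / r) * (B @ A)

    # With B initialized to zero, the adapted weights start identical to W,
    # so fine-tuning begins exactly from the pretrained behavior.
    print(np.allclose(W_adapted, W))     # True at initialization
    ```

    The payoff is parameter count: here A and B together have r * (d_in + d_out) = 1024 trainable values versus 4096 for full W, and the savings grow with matrix size.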
  4. 2024-01-28 by klotz
  5. 2024-01-14 by klotz
  6. Deploy and run LLMs (large language models), including LLaMA, LLaMA 2, Phi-2, Mixtral-MoE, and mamba-gpt, on the Raspberry Pi 5 (8 GB).
    2024-01-10 by klotz
  7. 2023-12-04 by klotz
  8. 2023-12-01 by klotz
  9. 2023-11-12 by klotz



About - Propulsed by SemanticScuttle