klotz: ollama*

10 bookmark(s)

  1. Local Micro-Agents That Observe, Log and React. Build micro-agents that observe your digital world, remember what matters, and react intelligently, all while keeping your data entirely on your own machine.
  2. Learn to deploy your own local LLM service using Docker containers for maximum security and control, whether you're running on a CPU, an NVIDIA GPU, or an AMD GPU (a container-launch sketch follows this list).
  3. Ollama has partnered with NVIDIA to optimize performance on the new NVIDIA DGX Spark, powered by the GB10 Grace Blackwell Superchip, enabling fast prototyping and running of local language models.
  4. An encyclopedia where everything can be an article, and every article is generated on the spot. Articles are often full of hallucinations and nonsense, especially with lower-parameter models. The project uses Ollama and Go to generate content.
  5. This article details how to set up an email triage system using Home Assistant and a local Large Language Model (LLM) to summarize and categorize incoming emails, reducing inbox clutter and improving email management. It covers the setup of a REST command to interface with Ollama (an API-call sketch follows this list), the automation process, and the benefits of using a local LLM for privacy.
  6. A web GUI for Ollama that requires no installation. It offers markdown rendering, keyboard shortcuts, a model manager, offline/PWA support, and an optional API for accessing more powerful models.
  7. This article details how to set up a weather report on a Home Assistant dashboard using a local LLM (Ollama) for more user-friendly summaries and clothing suggestions, avoiding cloud-based services for privacy reasons. It covers the setup process, prompt engineering, and hardware considerations.
  8. This article details 7 lessons the author learned while self-hosting Large Language Models (LLMs), covering topics like the importance of memory bandwidth, quantization, electricity costs, hardware choices beyond Nvidia, prompt engineering, Mixture of Experts models, and starting with simpler tools like LM Studio.
  9. Learn how to run and fine-tune Mistral Devstral 1.1, including Small-2507 and 2505. This guide covers official recommended settings, tutorials for running Devstral in Ollama and llama.cpp, experimental vision support, and fine-tuning with Unsloth.
  10. Lightweight CLI agent to semantically search and ask questions of your emails. Downloads your inbox, generates embeddings using local (or external) LLMs, and stores everything in a vector database on your machine. Supports incremental sync for fast updates (an embedding-search sketch follows this list).
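
For the Docker-based deployment in item 2, the sketch below launches the official ollama/ollama container via the Docker SDK for Python rather than the docker run CLI the article presumably uses. The image name, port 11434, and the /root/.ollama model path are Ollama's documented defaults; the container name and GPU request are illustrative assumptions (drop device_requests for CPU-only, and swap in the ollama/ollama:rocm image for AMD GPUs).

```python
# Sketch: start a local Ollama server in Docker (pip install docker).
import docker
from docker.types import DeviceRequest

client = docker.from_env()

container = client.containers.run(
    "ollama/ollama",                 # official image; ollama/ollama:rocm for AMD
    name="ollama",
    detach=True,
    ports={"11434/tcp": 11434},      # Ollama's default API port
    volumes={"ollama": {"bind": "/root/.ollama", "mode": "rw"}},  # persist pulled models
    device_requests=[DeviceRequest(count=-1, capabilities=[["gpu"]])],  # all NVIDIA GPUs
)
print(f"started container {container.name}")
```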
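Item 5's Home Assistant integration boils down to a REST call against Ollama's /api/generate endpoint. A minimal Python sketch of that call follows; the endpoint and response field come from Ollama's documented API, while the model name, prompt wording, and localhost URL are assumptions standing in for the article's actual configuration.

```python
# Sketch: summarize and categorize one email with a local Ollama model.
import requests

def triage_email(subject: str, body: str) -> str:
    prompt = (
        "Summarize this email in one sentence, then label it "
        "URGENT, ROUTINE, or JUNK.\n\n"
        f"Subject: {subject}\n\n{body}"
    )
    resp = requests.post(
        "http://localhost:11434/api/generate",   # local Ollama server
        json={"model": "llama3.2", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]               # full completion when stream=False

print(triage_email("Server down", "Production API has returned 500s since 09:00."))
```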
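Item 10's pipeline (embed locally, store vectors, search semantically) can be sketched in a few lines against Ollama's /api/embeddings endpoint. The embedding model and sample emails are assumptions, and the vectors are held in memory for brevity where the real tool persists them in a vector database with incremental sync.

```python
# Sketch: local semantic search over a handful of emails via Ollama embeddings.
import requests
import numpy as np

EMBED_URL = "http://localhost:11434/api/embeddings"
MODEL = "nomic-embed-text"          # assumed local embedding model

def embed(text: str) -> np.ndarray:
    resp = requests.post(EMBED_URL, json={"model": MODEL, "prompt": text}, timeout=60)
    resp.raise_for_status()
    return np.asarray(resp.json()["embedding"])

emails = [
    "Your flight to Oslo departs Friday at 07:45.",
    "Invoice #4821 is due at the end of the month.",
    "Team standup moved to 10:00 tomorrow.",
]
vectors = np.stack([embed(e) for e in emails])

def search(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    # Cosine similarity between the query and every stored email vector.
    scores = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q))
    return [emails[i] for i in np.argsort(scores)[::-1][:k]]

print(search("when is my trip?"))
```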
