Tags: self-hosted* + llm*


  1. An awesome collection of OpenClaw Skills. OpenClaw was formerly known as Moltbot, and originally as Clawdbot.
  2. Moltbot is a self-hosted AI assistant that runs on your machines, connects to messaging platforms, performs actions, and maintains persistent memory. It was renamed from Clawdbot due to trademark concerns.



    * **What it is:** Moltbot is an AI assistant designed to run locally on your machines (macOS, Windows, Linux), offering privacy and customization that cloud-based services cannot match.
    * **How it works:** It connects to messaging platforms such as WhatsApp, Telegram, and Slack, allowing interaction via chat.
    * **Capabilities:** Moltbot can perform actions beyond answering questions – automating tasks, running scripts, scheduling jobs, browsing the web, and integrating with other services via plugins.
    * **Key Feature: Persistent Memory:** Unlike many bots, Moltbot remembers past interactions, providing a tailored and consistent experience.
    * **Name Change:** The project was renamed from Clawdbot to Moltbot due to trademark concerns with Anthropic’s Claude.
  3. This article details how to combine Clawdbot with Docker Model Runner (DMR) to build a privacy-focused, high-performance personal AI assistant with full control over data and costs. It covers configuration, benefits, recommended models, and how to get involved in the ecosystem.
  4. This article details how to set up a custom voice pipeline in Home Assistant using free self-hosted tools like Whisper and Piper, replacing cloud-based services for full control over speech-to-text and text-to-speech processing.
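    A sketch of the setup described above, assuming the Wyoming-protocol containers published by the Rhasspy project (the container flags, model, and voice choices here are illustrative assumptions, not the article's exact configuration):

    ```shell
    # Speech-to-text: faster-whisper behind the Wyoming protocol (port 10300)
    docker run -d --name whisper -p 10300:10300 \
      rhasspy/wyoming-whisper --model tiny-int8 --language en

    # Text-to-speech: Piper behind the Wyoming protocol (port 10200)
    docker run -d --name piper -p 10200:10200 \
      rhasspy/wyoming-piper --voice en_US-lessac-medium
    ```

    In Home Assistant, add two Wyoming integrations pointing at ports 10300 and 10200, then select them as the speech-to-text and text-to-speech engines in a new voice assistant pipeline.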
  5. A tutorial on building a private, offline Retrieval Augmented Generation (RAG) system using Ollama for embeddings and language generation, and FAISS for vector storage, ensuring data privacy and control.

    1. **Document Loader:** Extracts text from various file formats (PDF, Markdown, HTML) while preserving metadata like source and page numbers for accurate citations.
    2. **Text Chunker:** Splits documents into smaller text segments (chunks) to manage token limits and improve retrieval accuracy. It uses overlapping and sentence boundary detection to maintain context.
    3. **Embedder:** Converts text chunks into numerical vectors (embeddings) using the `nomic-embed-text` model via Ollama, which runs locally without internet access.
    4. **Vector Database:** Stores the embeddings using FAISS (Facebook AI Similarity Search) for fast similarity search. It uses cosine similarity for accurate retrieval and saves the database to disk for quick loading in future sessions.
    5. **Large Language Model (LLM):** Generates answers using the `llama3.2` model via Ollama, also running locally. It takes the retrieved context and the user's question to produce a response with citations.
    6. **RAG System Orchestrator:** Coordinates the entire workflow, managing the ingestion of documents (loading, chunking, embedding, storing) and the querying process (retrieving relevant chunks, generating answers).
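    As a minimal sketch of the chunking step above (pure Python; the function name, parameters, and character-based overlap are illustrative assumptions — the tutorial's own implementation may differ, and the Ollama/FAISS stages are omitted here because they require a running model server):

    ```python
    import re

    def chunk_text(text: str, max_chars: int = 500, overlap: int = 100) -> list[str]:
        """Split text into overlapping chunks, breaking at sentence boundaries."""
        # Split after sentence-ending punctuation followed by whitespace.
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        chunks: list[str] = []
        current = ""
        for sent in sentences:
            if current and len(current) + 1 + len(sent) > max_chars:
                chunks.append(current)
                # Carry the tail of the finished chunk forward so neighbouring
                # chunks share context (may cut mid-word; acceptable for retrieval).
                current = current[-overlap:] + " " + sent
            else:
                current = f"{current} {sent}".strip()
        if current:
            chunks.append(current)
        return chunks
    ```

    Each chunk would then be embedded (e.g. with `nomic-embed-text`) and stored in the FAISS index; at query time, the chunks nearest to the question's embedding are passed to the LLM as context.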
  6. DispatchMail is an open-source, locally run AI-powered email assistant (though it currently uses OpenAI for queries) that helps you manage your inbox. It monitors your email, processes it with an AI agent based on your prompts, and provides a locally run web interface for managing drafts, responses, and instructions.
  7. This article details how to enhance the Paperless-ngx document management system by integrating a local Large Language Model (LLM) like Ollama. It covers the setup process, including installing Docker, Ollama, and configuring Paperless AI, to enable AI-powered features such as improved search and document understanding.
  8. The author details their transition from Google Home to Home Assistant with a local LLM, highlighting the benefits of increased control, customization, and functionality. They discuss using Home Assistant's 'Okay Nabu' voice control, experimenting with display solutions like tablets and ESP32 devices, and the advantages of integrating a local LLM for more contextual and powerful interactions.
  9. A summary of a workshop presented at PyCon US on building software with LLMs, covering setup, prompting, building tools (text-to-SQL, structured data extraction, semantic search/RAG), tool usage, and security considerations like prompt injection. It also discusses the current LLM landscape, including models from OpenAI, Gemini, Anthropic, and open-weight alternatives.
  10. This article details a method for converting PDFs to Markdown using a local LLM (Gemma 3 via Ollama), focusing on privacy and efficiency. It involves rendering PDF pages as images and then using the LLM for content extraction, even from scanned PDFs.
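    The pipeline described (render pages as images, then extract content with a local multimodal model) can be sketched with common tools — `pdftoppm` from poppler-utils for rendering and the Ollama CLI, which detects image paths in the prompt for multimodal models; the resolution, model tag, and prompt wording here are assumptions:

    ```shell
    # Render each PDF page to a PNG at 150 DPI (requires poppler-utils)
    pdftoppm -png -r 150 input.pdf page

    # Ask a local multimodal model to transcribe each page image to Markdown
    for img in page-*.png; do
      ollama run gemma3 "Transcribe this page to clean Markdown: ./$img" >> output.md
    done
    ```

    Because the model reads a rendered image rather than the PDF's text layer, this approach also works on scanned PDFs, as the article notes.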
    2025-04-16 by klotz


SemanticScuttle - klotz.me: tagged with "self-hosted+llm"
