Tags: self-hosted* + llm*


  1. DispatchMail is an open-source, locally run AI-powered email assistant (though it currently uses OpenAI for queries) that helps you manage your inbox. It monitors your email, processes it with an AI agent based on your prompts, and provides a locally run web interface for managing drafts, responses, and instructions.
  2. This article details how to enhance the Paperless-ngx document management system by integrating a local Large Language Model (LLM) like Ollama. It covers the setup process, including installing Docker, Ollama, and configuring Paperless AI, to enable AI-powered features such as improved search and document understanding.
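The Paperless-ngx integration above can be sketched as a minimal Docker Compose fragment. This is a hedged illustration only: the service layout, port, volume name, `paperless-ai` image name, and the `OLLAMA_API_URL` variable are assumptions, not details taken from the article.

```yaml
# Hypothetical compose fragment: Ollama serving a local model,
# with a Paperless AI companion service pointed at it.
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"        # default Ollama API port
    volumes:
      - ollama_data:/root/.ollama
  paperless-ai:
    image: clusterzx/paperless-ai   # assumed image name
    environment:
      OLLAMA_API_URL: http://ollama:11434   # assumed variable name
    depends_on:
      - ollama
volumes:
  ollama_data:
```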
  3. The author details their transition from Google Home to Home Assistant with a local LLM, highlighting the benefits of increased control, customization, and functionality. They discuss using Home Assistant's 'Okay Nabu' voice control, experimenting with display solutions like tablets and ESP32 devices, and the advantages of integrating a local LLM for more contextual and powerful interactions.
  4. A summary of a workshop presented at PyCon US on building software with LLMs, covering setup, prompting, building tools (text-to-SQL, structured data extraction, semantic search/RAG), tool usage, and security considerations like prompt injection. It also discusses the current LLM landscape, including models from OpenAI, Gemini, Anthropic, and open-weight alternatives.
  5. This article details a method for converting PDFs to Markdown using a local LLM (Gemma 3 via Ollama), focusing on privacy and efficiency. It involves rendering PDF pages as images and then using the LLM for content extraction, even from scanned PDFs.
    2025-04-16 by klotz
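The pipeline in the entry above (render a PDF page to an image, then ask the model to transcribe it) boils down to one request against Ollama's `/api/generate` endpoint. A minimal sketch of building that request, assuming a vision-capable model tag such as `gemma3`; the prompt wording is illustrative, and the PDF-to-image rendering step (e.g. via `pdf2image`) is omitted:

```python
import base64
import json


def build_extraction_request(image_bytes: bytes, model: str = "gemma3") -> dict:
    """Build an Ollama /api/generate payload asking the model to
    transcribe one rendered PDF page as Markdown."""
    return {
        "model": model,
        "prompt": "Convert this page to clean Markdown. Output only Markdown.",
        # Ollama expects images as base64 strings in an "images" list.
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,
    }


# The payload would be POSTed to http://localhost:11434/api/generate.
payload = build_extraction_request(b"\x89PNG...fake page bytes")
print(json.dumps(payload)[:60])
```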
  6. GitLab 17.9 introduces support for self-hosted AI platforms, allowing organizations to deploy large language models within their infrastructure. This enhances data security, compliance, and performance for industries with strict regulatory requirements.
    2025-03-11 by klotz
  7. Msty offers a simple and powerful interface to work with local and online AI models without the hassle of setup or configuration, ensuring privacy and reliability with offline capabilities.
    2025-06-01 by klotz
  8. Discover how to run AI models locally with ease using tools like Msty, which simplifies the process of setting up, running, and managing local AI models on various operating systems.
    2025-01-08 by klotz
  9. Persys is a locally-run device designed to function as a second brain. The repository includes the backend server (Linux only) and the Electron-based desktop application for accessing the server.
  10. A comparison of frameworks, models, and costs for deploying Llama models locally and privately.

    - Four tools were analyzed: HuggingFace, vLLM, Ollama, and llama.cpp.
    - HuggingFace has a wide range of models but struggles with quantized models.
    - vLLM is experimental and lacks full support for quantized models.
    - Ollama is user-friendly but has some customization limitations.
    - llama.cpp is preferred for its performance and customization options.
    - The analysis focused on llama.cpp and Ollama, comparing speed and power consumption across different quantizations.
    2024-11-03 by klotz
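The speed and power comparison in the last entry amounts to measuring tokens per second and energy per token across quantizations. A small helper for tabulating such runs; the quantization names and numbers below are hypothetical placeholders, not figures from the article:

```python
def throughput(tokens_generated: int, seconds: float) -> float:
    """Tokens per second for one generation run."""
    if seconds <= 0:
        raise ValueError("seconds must be positive")
    return tokens_generated / seconds


def joules_per_token(avg_watts: float, seconds: float, tokens_generated: int) -> float:
    """Energy cost per token, given average power draw during the run."""
    return (avg_watts * seconds) / tokens_generated


# Hypothetical runs: (quantization, tokens generated, wall seconds, avg watts)
runs = [("Q4_K_M", 512, 10.0, 45.0), ("Q8_0", 512, 16.0, 50.0)]
for quant, toks, secs, watts in runs:
    print(f"{quant}: {throughput(toks, secs):.1f} tok/s, "
          f"{joules_per_token(watts, secs, toks):.2f} J/tok")
```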


