klotz: localllama* + self-hosted*


  1. This article details how to enhance the Paperless-ngx document management system by integrating a local Large Language Model (LLM) served through Ollama. It covers the setup process, including installing Docker and Ollama and configuring Paperless AI, to enable AI-powered features such as improved search and document understanding.
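The setup described above can be sketched as a Compose file. This is a minimal sketch, assuming the official `ollama/ollama` image and the community `clusterzx/paperless-ai` container; service names, ports, environment variables, and volume paths here are illustrative guesses, not taken from the article, so check each project's README before use.

```yaml
# Illustrative docker-compose.yml -- image names, ports, and env vars are
# assumptions; verify against the Ollama and Paperless AI documentation.
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama-data:/root/.ollama   # persist pulled models across restarts
    ports:
      - "11434:11434"               # Ollama's default API port

  paperless-ai:
    image: clusterzx/paperless-ai   # community Paperless AI container (assumed)
    environment:
      # Point the AI layer at the local Ollama API instead of a cloud provider
      OLLAMA_API_URL: "http://ollama:11434"
    ports:
      - "3000:3000"
    depends_on:
      - ollama

volumes:
  ollama-data:
```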
  2. Discussion in r/LocalLLaMA about finding a self-hosted, local RAG (Retrieval Augmented Generation) solution for large language models, allowing users to experiment with different prompts, models, and retrieval rankings. Various tools and resources are suggested, such as Open-WebUI, kotaemon, and tldw.
    2024-10-13, by klotz
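The RAG loop being discussed in that thread reduces to retrieve-then-prompt. A minimal sketch using bag-of-words overlap as the retrieval ranking; real tools such as Open-WebUI use embedding similarity instead, and the corpus and prompt template here are invented for illustration:

```python
# Minimal RAG sketch: rank documents by word overlap with the query,
# then build a grounded prompt for a local LLM.
def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    q = set(query.lower().split())
    # Rank by shared word count -- a crude stand-in for the embedding
    # similarity used by real retrievers.
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Ollama serves local models over an HTTP API on port 11434.",
    "Paperless-ngx is a document management system.",
]
print(build_prompt("What port does Ollama use?", corpus))
```

Swapping the ranking function is exactly the kind of experiment the thread is about: the rest of the loop stays the same.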
  3. llm-tool provides a command-line utility for running large language models locally. It includes scripts for pulling models from the internet, starting them, and managing them using commands such as 'run', 'ps', 'kill', 'rm', and 'pull'. Additionally, it offers a Python script named 'querylocal.py' for querying these models.
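A wrapper like the one summarized above usually boils down to a subcommand dispatcher. A hypothetical sketch: the subcommand names come from the summary, but the dispatch table and placeholder handler are invented and are not the repository's actual code.

```python
import argparse

# Hypothetical dispatcher mirroring the 'run', 'ps', 'kill', 'rm', 'pull'
# commands described above; a real tool would shell out to a model backend
# in each branch instead of returning a string.
def dispatch(argv: list[str]) -> str:
    parser = argparse.ArgumentParser(prog="llm-tool")
    sub = parser.add_subparsers(dest="command", required=True)
    # Commands that act on a specific model take a model-name argument.
    for name, needs_model in [("run", True), ("ps", False), ("kill", True),
                              ("rm", True), ("pull", True)]:
        p = sub.add_parser(name)
        if needs_model:
            p.add_argument("model")
    args = parser.parse_args(argv)
    model = getattr(args, "model", None)
    # Placeholder: start, list, stop, delete, or download the model here.
    return f"{args.command} {model}" if model else args.command

print(dispatch(["pull", "mistral"]))  # → pull mistral
```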
  4. 2023-12-31, by klotz
  5. 2023-11-27, by klotz
  6. 2023-06-12, by klotz
  7. 2023-06-12, by klotz


SemanticScuttle - klotz.me: Tags: localllama + self-hosted
