klotz: local llm* + home assistant*

  1. This article details how to run a 120B-parameter LLM locally with 24GB of VRAM and 64GB of system RAM, using Proxmox LXC containers, Whisper for voice transcription, and Home Assistant integration for smart home automation.
  2. This article details how to set up a custom voice pipeline in Home Assistant using free self-hosted tools like Whisper and Piper, replacing cloud-based services for full control over speech-to-text and text-to-speech processing.
  3. This series of articles by Adam Conway describes how the author replaced cloud-based smart assistants such as Alexa with a local large language model (LLM) integrated into Home Assistant, enabling more complex and private home automations.

    1. **Use a Local LLM**: Set up an LLM (such as Qwen) locally using tools like Ollama and Open WebUI (a minimal Ollama API check is sketched after this list).
    2. **Integrate with Home Assistant**:
    - Enable Ollama integration in Home Assistant.
    - Configure the IP and port of the LLM server.
    - Select the desired model for use within Home Assistant.
    3. **Voice Processing Tools**:
    - Use **Whisper** for speech-to-text transcription (a standalone transcription sketch follows this list).
    - Use **Piper** for text-to-speech synthesis.
    4. **Smart Home Automation**:
    - Automate complex tasks such as turning off lights and smart plugs with voice commands (a conversation-API example follows this list).
    - Use data from IP cameras (via Frigate) to control external lighting based on presence.
    5. **Hardware Recommendations**:
    - Use Home Assistant Voice Preview speaker or DIY alternatives using ESP32 or repurposed microphones.
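
The sketches below (all Python) illustrate the steps above from outside Home Assistant's UI. First, a minimal check that a local Ollama server is reachable and will answer a prompt, which is the foundation for steps 1–2; the host address and model name are placeholders rather than values from the articles.

```python
# Minimal sanity check for a local Ollama server (default port 11434).
# The LAN address and model name below are assumptions.
import requests

OLLAMA_URL = "http://192.168.1.50:11434"  # hypothetical address of the LLM server
MODEL = "qwen2.5:7b"                      # any model already pulled with `ollama pull`

# List the models the server has available.
tags = requests.get(f"{OLLAMA_URL}/api/tags", timeout=10).json()
print("Available models:", [m["name"] for m in tags.get("models", [])])

# Send a single non-streaming prompt to confirm the model responds.
resp = requests.post(
    f"{OLLAMA_URL}/api/generate",
    json={"model": MODEL, "prompt": "Say hello in five words.", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```

If this works from another machine on the LAN, the same host and port are what the Home Assistant Ollama integration asks for in step 2.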
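
For step 3, the self-hosted add-ons typically run faster-whisper and Piper behind the Wyoming protocol; as a quick standalone test of the same model family, the open-source `openai-whisper` package can transcribe a recording directly. The model size and file name here are assumptions.

```python
# Offline speech-to-text with the `openai-whisper` package.
# Requires ffmpeg on the PATH; the model size and audio file are assumptions.
import whisper

model = whisper.load_model("base")                # small multilingual model
result = model.transcribe("kitchen_command.wav")  # hypothetical voice-command recording
print(result["text"])
```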
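
For steps 2 and 4, a text command can be pushed through Home Assistant's conversation API over REST. This skips speech-to-text but exercises the same intent handling a spoken command reaches; which conversation agent answers depends on how the instance is configured. The URL, token, and command phrasing are placeholders.

```python
# Send a text command to Home Assistant's conversation API.
# URL, long-lived access token (created from your HA user profile),
# and the command text are all placeholders.
import requests

HA_URL = "http://homeassistant.local:8123"
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

resp = requests.post(
    f"{HA_URL}/api/conversation/process",
    headers=HEADERS,
    json={"text": "turn off the living room lights", "language": "en"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # includes the agent's spoken-style reply
```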
