klotz: voice assistant + local llm


  1. The article discusses the increasing usefulness of running AI models locally, highlighting benefits such as lower latency, privacy, cost savings, and control. It explores practical applications such as data processing, note-taking, voice assistance, and self-sufficiency, while acknowledging the limitations compared to cloud-based models.
  2. This article details how to set up a custom voice pipeline in Home Assistant using free self-hosted tools like Whisper and Piper, replacing cloud-based services for full control over speech-to-text and text-to-speech processing.
  3. The series of articles by Adam Conway discusses how the author replaced cloud-based smart assistants like Alexa with a local large language model (LLM) integrated into Home Assistant, enabling more complex and private home automations.

    1. **Use a Local LLM**: Set up an LLM (such as Qwen) locally using tools like Ollama and Open WebUI.
    2. **Integrate with Home Assistant**:
    - Enable Ollama integration in Home Assistant.
    - Configure the IP address and port of the LLM server.
    - Select the desired model for use within Home Assistant.
    3. **Voice Processing Tools**:
    - Use **Whisper** for speech-to-text transcription.
    - Use **Piper** for text-to-speech synthesis.
    4. **Smart Home Automation**:
    - Automate complex tasks like turning off lights and smart plugs with voice commands.
    - Use data from IP cameras (via Frigate) to control external lighting based on presence.
    5. **Hardware Recommendations**:
    - Use the Home Assistant Voice Preview Edition speaker, or DIY alternatives built from an ESP32 board or repurposed microphones.
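As a rough sketch, the local stack in steps 1–3 can be brought up with Ollama plus the Wyoming Whisper and Piper containers. The model tag, voice name, and ports below are common defaults chosen for illustration, not taken from the articles:

```shell
# Serve a local model with Ollama (qwen2.5:7b is an example tag)
ollama pull qwen2.5:7b
ollama serve &                        # API listens on http://localhost:11434 by default
curl http://localhost:11434/api/tags  # sanity-check the endpoint Home Assistant will use

# Speech-to-text: Wyoming Whisper container (Wyoming protocol, port 10300)
docker run -d -p 10300:10300 rhasspy/wyoming-whisper \
  --model tiny-int8 --language en

# Text-to-speech: Wyoming Piper container (Wyoming protocol, port 10200)
docker run -d -p 10200:10200 rhasspy/wyoming-piper \
  --voice en_US-lessac-medium
```

In Home Assistant, the Ollama integration is then pointed at the port-11434 endpoint, and the Wyoming integration picks up the Whisper and Piper services on ports 10300 and 10200 for the voice pipeline.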



About - Propulsed by SemanticScuttle