Tags: local llm* + whisper*

2 bookmark(s)

  1. This article details how to run a 120B-parameter LLM locally on 24GB of VRAM and 64GB of system RAM, using Proxmox LXCs, Whisper for voice transcription, and Home Assistant integration for smart home automation. (A hedged sketch of the partial-GPU-offload step follows this list.)
  2. This article details how to set up a custom voice pipeline in Home Assistant using free, self-hosted tools such as Whisper and Piper, replacing cloud-based services for full control over speech-to-text and text-to-speech processing. (A sketch of the local transcription step also follows this list.)
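
Fitting a 120B-parameter model into 24GB of VRAM generally means loading a quantized GGUF file and offloading only part of the layers to the GPU, with the remainder running from system RAM on the CPU. The sketch below uses llama-cpp-python to illustrate that split; the model file name, layer count, and context size are assumptions for illustration, not values taken from the first article.

```python
# Sketch: partial GPU offload of a large quantized model with llama-cpp-python.
# Assumptions (not from the article): the GGUF file name, 40 offloaded layers,
# an 8k context window, and 16 CPU threads. Layers that do not fit in 24GB of
# VRAM stay in system RAM and run on the CPU.
from llama_cpp import Llama

llm = Llama(
    model_path="models/gpt-oss-120b-Q4_K_M.gguf",  # hypothetical quantized 120B model
    n_gpu_layers=40,   # offload as many layers as fit in 24GB of VRAM
    n_ctx=8192,        # context window; larger values use more system RAM
    n_threads=16,      # CPU threads for the layers left in system RAM
)

out = llm("Turn off the living room lights.", max_tokens=64)
print(out["choices"][0]["text"])
```

For the voice pipeline in the second article, Whisper handles speech-to-text and Piper handles text-to-speech, typically run as self-hosted services that Home Assistant talks to. The sketch below shows only the transcription half, calling the faster-whisper library directly rather than through a Home Assistant add-on; the model size, device, and audio file name are illustrative assumptions rather than the article's exact setup.

```python
# Sketch of the speech-to-text half of a self-hosted voice pipeline, using
# faster-whisper directly. Model size ("small"), CPU inference with int8
# quantization, and the audio file name are illustrative assumptions.
from faster_whisper import WhisperModel

model = WhisperModel("small", device="cpu", compute_type="int8")

segments, info = model.transcribe("command.wav", language="en")
text = " ".join(segment.text.strip() for segment in segments)
print(f"Detected language: {info.language}, transcript: {text}")
```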
