klotz: text-generation-webui* + llm*


  1. This project provides Dockerised deployment of oobabooga's text-generation-webui with pre-built images for Nvidia GPU, AMD GPU, Intel Arc, and CPU-only inference. It supports various extensions and offers easy deployment and updates.
  2. An extension for oobabooga/text-generation-webui that enables the LLM to search the web using DuckDuckGo
  3. An extension that automatically unloads and reloads your model, freeing up VRAM for other programs.
  4. chat - chat directly; the character card is your prompt

    instruct - chat between "you" and "assistant" using the model's prompt format

    chat-instruct - chat between you and a character card as the prompt, but with the instruct template applied, i.e. "you are an AI playing x character, respond as the character would" converted to Alpaca, Wizard, or whatever format

    There is no best mode, but for factual information you probably want to stick to instruct mode. chat-instruct doesn't necessarily play characters better or make them write longer; it's hit or miss. One may work better than the other for a particular model and prompt.
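The modes above map to a `mode` field in text-generation-webui's OpenAI-compatible API. A minimal sketch of building requests for each mode follows; the endpoint URL, and the `mode` and `character` parameters (project-specific extensions to the OpenAI schema), are assumptions based on the project's API docs, so check your local setup.

```python
import json

# Assumed default local endpoint for text-generation-webui's API extension.
API_URL = "http://127.0.0.1:5000/v1/chat/completions"

def build_payload(user_message, mode, character=None):
    """Build a request body; `mode` selects chat / instruct / chat-instruct."""
    payload = {
        "messages": [{"role": "user", "content": user_message}],
        "mode": mode,  # "chat", "instruct", or "chat-instruct"
    }
    if character is not None:
        # In chat and chat-instruct modes, the character card shapes the prompt.
        payload["character"] = character
    return payload

# instruct: plain question/answer using the model's own prompt format.
factual = build_payload("When did Apollo 11 land on the Moon?", "instruct")

# chat-instruct: character card wrapped in the instruct template.
roleplay = build_payload("Introduce yourself.", "chat-instruct",
                         character="Example")  # hypothetical character name

# Sending is an ordinary POST, e.g. requests.post(API_URL, json=factual).json()
print(json.dumps(factual, indent=2))
```

The point of the sketch is that switching modes is a one-field change per request, so it is cheap to try both instruct and chat-instruct against the same model and prompt and keep whichever works better.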


