Tags: oobabooga*


  1. The Lucid Vision Extension integrates advanced vision models into textgen-webui, enabling contextualized conversations about images and direct communication with vision models.
  2. This pull request adds StreamingLLM support for llamacpp and llamacpp_HF models, aiming to improve performance and reliability. The changes allow indefinite chatting with the model without re-evaluating the prompt.
    2024-11-26 by klotz
  3. This PR implements the StreamingLLM technique for model loaders, focusing on handling context length and optimizing chat generation speed.
    2024-11-26 by klotz
  4. This project provides Dockerised deployment of oobabooga's text-generation-webui with pre-built images for Nvidia GPU, AMD GPU, Intel Arc, and CPU-only inference. It supports various extensions and offers easy deployment and updates.
  5. A benchmark of large language models, sorted by size (on disk) for each score. Highlighted entries are on the Pareto frontier.
    2024-09-03 by klotz
  6. A web search extension for Oobabooga's text-generation-webui (now with nougat) that integrates web search results into the AI's responses.
  7. Steer LLM outputs toward a chosen topic and enhance response capabilities using activation engineering (adding steering vectors), now available in the oobabooga text-generation-webui.
  8. An extension for Oobabooga's Text-Generation Web UI that retrieves and adds web content to the context of prompts for more informative AI responses.
  9. An extension for oobabooga/text-generation-webui that enables the LLM to search the web using DuckDuckGo
  10. An extension that automatically unloads and reloads your model, freeing up VRAM for other programs.
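The StreamingLLM technique mentioned in items 2 and 3 can be sketched roughly as follows. This is a minimal illustration of the general idea, not the code from either PR: the KV cache keeps a few initial "attention sink" tokens plus a recent window, evicting the middle, so chat can continue indefinitely without re-evaluating the whole prompt. The function name and parameter values here are hypothetical.

```python
# Hypothetical sketch of StreamingLLM-style cache trimming (not the PRs' code):
# keep the first `n_sink` "attention sink" entries plus the most recent
# `window` entries, evicting everything in between.
def trim_cache(cache: list, n_sink: int = 4, window: int = 1020) -> list:
    if len(cache) <= n_sink + window:
        return cache  # still fits; nothing to evict
    return cache[:n_sink] + cache[-window:]

# Toy example: a 2000-entry cache trimmed to 4 sinks + 100 recent entries.
cache = list(range(2000))
trimmed = trim_cache(cache, n_sink=4, window=100)
print(len(trimmed))   # 104: the sinks plus the recent window
print(trimmed[:4])    # [0, 1, 2, 3]: the original sink tokens survive
```

The key property is that the cache size stays bounded while the earliest tokens (which stabilize attention) are never evicted.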
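The steering-vector approach in item 7 can likewise be sketched in miniature. This is an assumed illustration of activation engineering in general, not the extension's actual implementation: a fixed direction vector, scaled by a strength factor, is added to a layer's hidden activations to bias generation toward a topic. All names here are hypothetical.

```python
import numpy as np

# Hypothetical sketch of activation steering (not the extension's code):
# add a scaled "steering vector" to every token's hidden activation.
def apply_steering(hidden_states: np.ndarray,
                   steering_vector: np.ndarray,
                   strength: float = 1.0) -> np.ndarray:
    """Shift each token's hidden state along the steering direction."""
    return hidden_states + strength * steering_vector

# Toy example: 3 token activations with hidden size 4.
hidden = np.zeros((3, 4))
direction = np.array([1.0, 0.0, -1.0, 0.0])  # direction for the target topic
steered = apply_steering(hidden, direction, strength=0.5)
# Each row is now shifted by 0.5 * direction.
```

In practice such a vector would be derived from model activations (e.g. contrasting prompts) and injected at a specific layer during the forward pass.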


SemanticScuttle - klotz.me: tagged with "oobabooga"
