This project provides Dockerised deployment of oobabooga's text-generation-webui with pre-built images for Nvidia GPU, AMD GPU, Intel Arc, and CPU-only inference. It supports various extensions and offers easy deployment and updates.
An extension that automatically unloads and reloads your model, freeing up VRAM for other programs.
Training PRO extension for oobabooga WebUI - recent dev version. Key features and changes compared to the built-in Training tab in the WebUI include:
- Chunking: the precise raw text slicer (PRTS) splits text on sentence boundaries and ensures each chunk is clean at both ends
- Overlapping chunking: adds extra overlap blocks between adjacent chunks, with the overlap boundaries chosen by logical rules
- Custom scheduler: FP_low_epoch_annealing keeps the LR constant for the first epoch, then anneals it with a cosine schedule for the remaining epochs
- Target selector: standard LoRA targets only the q and v projections; this extension should be used with (q k v o) or (q k v) as the target modules
- DEMENTOR LEARNING (experimental): an experimental chunking method for training on long-form text in a low number of epochs
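The overlapping-chunking idea above can be sketched in a few lines. This is a hypothetical illustration, not the extension's actual implementation: the function name, parameters, and the simple sliding-window rule are all assumptions standing in for the extension's "logical rules".

```python
def chunk_with_overlap(sentences, chunk_size, overlap):
    """Hypothetical sketch: group pre-split sentences into chunks of
    `chunk_size`, where each chunk repeats the last `overlap` sentences
    of the previous chunk so context is not lost at chunk boundaries."""
    chunks = []
    step = max(1, chunk_size - overlap)  # how far the window advances
    for start in range(0, len(sentences), step):
        chunk = sentences[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(sentences):
            break  # the final chunk already reaches the end of the text
    return chunks

# With 7 sentences, chunks of 3, and an overlap of 1 sentence:
# chunk_with_overlap(list("abcdefg"), 3, 1)
# → [['a','b','c'], ['c','d','e'], ['e','f','g']]
```

The real extension splits on sentence boundaries first (the PRTS step), so the overlap falls on clean sentence edges rather than mid-sentence.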
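The FP_low_epoch_annealing schedule described above (constant LR for the first epoch, cosine annealing afterward) can be sketched as a per-step learning-rate multiplier. This is a minimal sketch of the stated behaviour, assuming a cosine decay to zero over the remaining steps; the function name and signature are illustrative, not the extension's code.

```python
import math

def fp_low_epoch_annealing(step, steps_per_epoch, total_steps):
    """Sketch of an LR multiplier: 1.0 for every step in the first
    epoch, then cosine decay from 1.0 toward 0.0 over the rest."""
    if step < steps_per_epoch:
        return 1.0  # first epoch: hold the learning rate constant
    # Fraction of the post-first-epoch training that is complete.
    progress = (step - steps_per_epoch) / max(1, total_steps - steps_per_epoch)
    return 0.5 * (1.0 + math.cos(math.pi * min(progress, 1.0)))
```

A trainer would multiply its base learning rate by this value at each step, e.g. `lr = base_lr * fp_low_epoch_annealing(step, spe, total)`.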