Tags: llama.cpp*


  1. llm-tool provides a command-line utility for running large language models locally. It includes scripts for pulling models from the internet, starting them, and managing them with commands such as 'run', 'ps', 'kill', 'rm', and 'pull', plus a Python script, 'querylocal.py', for querying the running models.
  2. Create a custom base image for a Cloud Workstation environment using a Dockerfile. Uses: quantized models from
  3. The "LLM" toolkit offers a versatile command-line utility and Python library that allows users to work efficiently with large language models. Users can execute prompts directly from their terminals, store the outcomes in SQLite databases, generate embeddings, and perform various other tasks. In this extensive tutorial, topics covered include setup, usage, OpenAI models, alternative models, embeddings, plugins, model aliases, Python APIs, prompt templates, logging, related tools, CLI references, contributing, and change logs.
    2024-02-08, by klotz
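The core workflow item 3 describes — execute a prompt, then persist the exchange in a SQLite database — can be sketched in a few lines. This is a minimal illustration of the pattern, not the LLM toolkit's actual schema or API; `run_model` is a hypothetical stub standing in for a real model call.

```python
import sqlite3
from datetime import datetime, timezone

def run_model(prompt: str) -> str:
    # Hypothetical stub for a real model call, so the logging
    # pattern below is self-contained and runnable.
    return f"echo: {prompt}"

def log_prompt(db_path: str, model: str, prompt: str) -> str:
    """Run a prompt and append the exchange to a SQLite log table."""
    response = run_model(prompt)
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS responses ("
            " id INTEGER PRIMARY KEY,"
            " ts TEXT, model TEXT, prompt TEXT, response TEXT)"
        )
        conn.execute(
            "INSERT INTO responses (ts, model, prompt, response)"
            " VALUES (?, ?, ?, ?)",
            (datetime.now(timezone.utc).isoformat(), model, prompt, response),
        )
    return response

print(log_prompt(":memory:", "stub-model", "hello"))  # → echo: hello
```

Logging every prompt/response pair this way makes past runs queryable with ordinary SQL, which is the main draw of the SQLite-backed approach the tutorial covers.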
  4. A deep dive into model quantization with GGUF and llama.cpp and model evaluation with LlamaIndex
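The quantization idea behind GGUF model files (item 4) can be sketched as symmetric per-block 4-bit quantization: weights are grouped into fixed-size blocks, each block stores one floating-point scale plus small integer codes. This is an illustrative simplification, not llama.cpp's exact Q4_0 or Q4_K layout.

```python
import numpy as np

def quantize_q4_blocks(weights, block_size=32):
    """Symmetric 4-bit block quantization: one fp32 scale per block.
    Illustrates the idea behind GGUF quant types; the real formats
    (Q4_0, Q4_K_M, ...) use more elaborate packed layouts."""
    w = np.asarray(weights, dtype=np.float32)
    pad = (-len(w)) % block_size          # pad so length divides evenly
    w = np.pad(w, (0, pad))
    blocks = w.reshape(-1, block_size)
    # Per-block scale maps the max magnitude onto the int range [-7, 7].
    scales = np.abs(blocks).max(axis=1) / 7.0
    scales[scales == 0] = 1.0             # avoid divide-by-zero on all-zero blocks
    q = np.clip(np.round(blocks / scales[:, None]), -7, 7).astype(np.int8)
    return q, scales

def dequantize_q4_blocks(q, scales):
    """Recover approximate fp32 weights from codes and per-block scales."""
    return (q.astype(np.float32) * scales[:, None]).ravel()

rng = np.random.default_rng(0)
w = rng.normal(size=256).astype(np.float32)
q, s = quantize_q4_blocks(w)
err = np.abs(dequantize_q4_blocks(q, s) - w).max()
```

The reconstruction error per weight is bounded by half the block scale, which is why quant schemes with smaller blocks (more scales) trade file size for accuracy, the tradeoff the linked deep dive evaluates with LlamaIndex.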


SemanticScuttle - klotz.me: tagged with "llama.cpp"
