An extremely lightweight universal grammar implementation with provable recursion, based on Chomsky's Minimalist Grammar theory, fitting in under 50kB with zero runtime dependencies. It includes a probabilistic language model extension and formal verification.
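The recursion in a Minimalist Grammar comes from a single Merge operation that builds larger syntactic objects out of smaller ones and can take its own output as input. Below is a purely conceptual sketch in Python to illustrate that recursion; it is not this project's API, and the representation (a dict with a projected head) is an assumption for illustration only.

```python
# Conceptual sketch of Minimalist Grammar's Merge (not this project's API).
def head_of(obj):
    # A bare lexical item is its own head; a derived object carries one.
    return obj["head"] if isinstance(obj, dict) else obj

def merge(selector, selectee):
    """Combine two syntactic objects; the selector projects as the head."""
    return {"head": head_of(selector), "children": [selector, selectee]}

# Recursion falls out of Merge accepting its own output as input:
vp = merge("see", "it")        # [VP see it]
tp = merge("will", vp)         # [TP will [VP see it]]
cp = merge("that", tp)         # [CP that [TP will [VP see it]]]
print(cp["head"])              # 'that'
```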
Validate and execute GitHub Actions workflows locally. WRKFLW is a command-line tool that checks workflow files for errors and runs them on your machine, without requiring a full GitHub environment.
systemctl-tui is a fast, simple TUI for interacting with systemd services and their logs. It allows browsing service status, starting/stopping/restarting/reloading services, and viewing/editing unit files.
Rensa is a high-performance MinHash suite written in Rust with Python bindings. It's designed for efficient similarity estimation and deduplication of large datasets. It offers R-MinHash, C-MinHash, and OptDensMinHash variants that are significantly faster than datasketch while maintaining comparable accuracy.
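A rough sketch of near-duplicate detection through Rensa's Python bindings. The `RMinHash` constructor arguments (`num_perm`, `seed`) and the `update`/`jaccard` methods follow the project's README examples, but treat the exact names as assumptions and check the current API.

```python
from rensa import RMinHash  # Rensa's Python bindings (pip install rensa)

def signature(text, num_perm=128):
    # Build an R-MinHash signature from whitespace-separated tokens.
    m = RMinHash(num_perm=num_perm, seed=42)
    m.update(text.split())
    return m

a = signature("the quick brown fox jumps over the lazy dog")
b = signature("the quick brown fox jumped over a lazy dog")

# Estimated Jaccard similarity between the two token sets (0.0 to 1.0);
# documents above a chosen threshold can be treated as near-duplicates.
print(a.jaccard(b))
```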
ChatDBG is an AI-based debugging assistant for C/C++/Python/Rust code that integrates large language models into standard debuggers (pdb, lldb, gdb, and WinDbg) to help debug your code. It can diagnose errors and suggest fixes.
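For Python, the typical workflow is to run a failing script under ChatDBG instead of plain pdb and then ask the integrated LLM for a diagnosis at the debugger prompt. The sketch below is a deliberately broken script; the install and launch commands in the comments are assumptions drawn from the project's README, so verify them there.

```python
# buggy.py -- a deliberately failing script to debug with ChatDBG.
#
# Assumed usage (verify against the ChatDBG README):
#   pip install chatdbg
#   python -m chatdbg buggy.py    # drop-in replacement for `python -m pdb`
# At the post-mortem prompt, a command such as `why` asks the LLM to
# diagnose the crash and suggest a fix.

def average(values):
    return sum(values) / len(values)   # ZeroDivisionError for an empty list

if __name__ == "__main__":
    print(average([]))
```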
A collection of Python examples demonstrating the use of Mistral.rs, a Rust library for LLM inference, through its Python bindings.
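A hedged sketch in the style of those examples, using the `mistralrs` Python package. The class and parameter names (`Runner`, `Which.Plain`, `ChatCompletionRequest`, `model_id`) are taken from upstream examples and may differ between versions; the model id is only an illustration.

```python
from mistralrs import Runner, Which, ChatCompletionRequest

# Load a model through the Python bindings (names per upstream examples).
runner = Runner(which=Which.Plain(model_id="microsoft/Phi-3.5-mini-instruct"))

response = runner.send_chat_completion_request(
    ChatCompletionRequest(
        model="default",  # placeholder model name; an assumption
        messages=[{"role": "user", "content": "Explain MinHash in one sentence."}],
        max_tokens=64,
        temperature=0.1,
    )
)
print(response.choices[0].message.content)
```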
Utilities for Llama.cpp, OpenAI, Anthropic, and Mistral-rs: a collection of Rust tools for interacting with various large language models, with functions for loading models, tokenization, prompting, text generation, and more.
Mistral.rs is a fast LLM inference platform supporting inference on a variety of devices, quantization, and easy application integration via an OpenAI-compatible HTTP server and Python bindings. It supports the latest Llama and Phi models, as well as X-LoRA and LoRA adapters. The project aims to provide the fastest LLM inference platform possible.
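As a sketch of the OpenAI-compatible side, the standard `openai` Python client can point at a locally running mistral.rs server. The server launch command and port in the comment are assumptions based on the project's README; the client calls are the ordinary `openai` package API.

```python
# Assumes a mistral.rs server is already running locally, e.g.
# `mistralrs-server --port 1234 ...` (command and port are assumptions).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed-locally")

resp = client.chat.completions.create(
    model="default",  # whatever model name the local server exposes
    messages=[{"role": "user", "content": "Summarize X-LoRA in one sentence."}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```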