LM Studio has released lms, a command-line interface (CLI) for loading/unloading models, starting/stopping the API server, and inspecting raw LLM input. It is developed on GitHub under the MIT license.
A Ruby script calculates VRAM requirements for large language models (LLMs) based on model size, bits per weight (bpw), and context length. Given the other inputs, it can solve for the required VRAM, the maximum context length, or the best bpw for the available VRAM.
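The same kind of estimate is easy to sketch: quantized weight memory is roughly parameter count times bits per weight, plus a KV cache that grows linearly with context length. The function below is a hedged illustration, not the linked script; the layer/head/dimension defaults and the fixed overhead are assumed placeholder values, not taken from any particular model.

```python
def estimate_vram_gib(n_params_billion, bpw, context_len,
                      n_layers=32, n_kv_heads=8, head_dim=128,
                      kv_cache_bytes_per_elem=2, overhead_gib=1.0):
    """Rough VRAM estimate in GiB for a quantized LLM.

    n_params_billion       -- model size in billions of parameters
    bpw                    -- bits per weight after quantization
    context_len            -- tokens of context to budget KV cache for
    n_layers, n_kv_heads, head_dim -- illustrative architecture defaults
    kv_cache_bytes_per_elem -- 2 assumes fp16 KV cache entries
    overhead_gib           -- assumed fixed allowance for activations etc.
    """
    # Quantized weights: params * bits / 8 gives bytes.
    weight_bytes = n_params_billion * 1e9 * bpw / 8
    # KV cache: K and V tensors (factor 2), per layer, per KV head,
    # per head dimension, per token of context.
    kv_bytes = (2 * n_layers * n_kv_heads * head_dim
                * context_len * kv_cache_bytes_per_elem)
    return (weight_bytes + kv_bytes) / 2**30 + overhead_gib


# Example: a 7B model at 4 bpw with a 4096-token context
# needs roughly 4.8 GiB under these assumptions.
print(round(estimate_vram_gib(7, 4, 4096), 1))
```

Inverting the formula for maximum context length or best bpw, as the linked script does, is straightforward since VRAM is linear in both.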
Hallux.ai is a platform offering open-source, LLM-based CLI tools for Linux and macOS. The tools aim to streamline operations, boost productivity, and automate workflows for production engineering, SRE, and DevOps professionals, while improving Root Cause Analysis (RCA) and enabling self-sufficiency.
Pinboard is a command-line utility for managing file references during language-model-assisted development, streamlining codebase workflows with efficient, context-aware file updates.
Noema Research introduces Pinboard, a developer tool for improved productivity. Pinboard, a command-line tool, efficiently manages files and terminal references, enhancing development workflows. Key features include flexible pinning, contextual updates, clipboard integration, an interactive shell, and undo functionality.
The article discusses why Raspberry Pi is ideal for AI projects due to its powerful official AI kit, portable form factor, and active open-source community. It highlights the Raspberry Pi's processing power, the creativity it allows, and the potential for tinkerers to experiment with various AI models.
This article explains how to install Ollama, an open-source project for running large language models (LLMs) on a local machine, on Ubuntu Linux. It also covers the system requirements, installation process, and usage of various available LLMs.
Learn how to repurpose an old PC to generate AI text and images, with a focus on using Ollama with Stable Diffusion. The guide covers installation, configuration, and setting up a web UI for a more organized user interface.
picoLLM is a cross-platform, on-device inference engine optimized for running compressed large language models (LLMs) on various devices. It is compatible with Linux, macOS, Windows, Raspberry Pi OS, Android, iOS, and web browsers.