Learn how to repurpose an old PC to generate AI text and images locally, using Ollama alongside Stable Diffusion. The guide covers installation, configuration, and setting up a web UI for a more organized front end.
This is the GitHub repository for a Discord bot named discord-llm-chatbot, which lets you chat with Large Language Models (LLMs) directly in your Discord server. It supports models from the OpenAI, Mistral, and Anthropic APIs, as well as local backends such as Ollama, oobabooga, Jan, and LM Studio. The bot offers a reply-based chat system, a customizable system prompt, and seamless threading of conversations. It also supports image and text-file attachments and streamed responses.
It all started as a joke. I was in a group chat with a few of my friends, and we were talking about football (soccer, for the American readers). I entered the chat during a mildly heated discussion about the manager of a team one of my friends supports. It had been going on for a while with seemingly no end in sight...
Get models like Phi-2, Mistral, and LLaVA running locally on a Raspberry Pi with Ollama
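Once Ollama is serving a model, other programs can talk to it over its local REST API (by default on port 11434). The sketch below, in Python with only the standard library, sends a single non-streamed prompt to the `/api/generate` endpoint; the model name `phi` is an assumption for illustration (whatever you pulled with `ollama pull` goes there).

```python
import json
from urllib import request

# Ollama's default local endpoint for one-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Build a request body for Ollama's /api/generate endpoint.

    stream=False asks for the whole reply in a single JSON response
    instead of a stream of chunks.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send the prompt to a locally running Ollama server and return the reply."""
    body = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (assumes a running server and a pulled model, e.g. `ollama pull phi`):
# print(generate("phi", "Why is the sky blue?"))
```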
Ellama provides several commands, such as `ellama-chat`, `ellama-ask-about`, `ellama-translate`, `ellama-summarize`, and more, for interacting with LLMs from within Emacs.
* **GPT4All**: A desktop application that allows you to run large language models (LLMs) locally, with a simple setup and a clean chat interface.
* **LLM**: A command-line tool that enables you to download and use open-source LLMs locally, with plugins for various models, including GPT4All and Meta's Llama.
* **Ollama**: A simple, point-and-click installation process for running Llama models on your desktop, with a user-friendly interface for chatting with your own documents.
* **h2oGPT**: A desktop application that allows you to chat with your own documents using natural language and get a generative AI response, with a basic version available for download.
* **PrivateGPT**: A tool that enables you to query your own documents using natural language and get a generative AI response, with a simplified version available for non-experts.
* **Jan**: An open-source project that offers a simple interface for chatting with local models, with the ability to upload files and chat with documents (although with some limitations).
* **Opera**: A convenient, but potentially less private, way to chat with local models using the developer version of Opera, with some limitations.
* **Chat with RTX**: A simple interface for answering questions about a directory of documents using Llama 2 LLM, with some limitations and requirements (e.g., Nvidia GeForce RTX 30 Series or higher GPU).
* **llamafile**: A tool that lets developers bundle a large language model's weights and an inference engine into a single executable file, with a simple setup process but some limitations (e.g., currently not ideal for Windows).
* **LocalGPT**: A spinoff of PrivateGPT with more model options and detailed instructions, but with a warning that running on a CPU alone will be slow.
* **LM Studio**: A desktop app with a clean interface for running chats, but requires knowledge of model selection and has some limitations (e.g., no built-in option for running LLM over your own data).
* **LangChain**: A framework for creating end-to-end generative AI applications, requiring knowledge of LangChain basics.
* **Hugging Face**: A platform and community for artificial intelligence, offering some LLMs for local download and use.
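Several of the desktop tools above (LM Studio, Jan, and others) can also expose an OpenAI-compatible local server, so one small client works across them. The Python sketch below, using only the standard library, posts a chat request to a `/v1/chat/completions` endpoint; the port `1234` is LM Studio's default and an assumption here (other tools listen elsewhere), and the model name is whatever your tool reports.

```python
import json
from urllib import request

# Assumed base URL: LM Studio's local server defaults to port 1234;
# adjust for the tool you are actually running.
BASE_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-style chat-completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": 0.7,
    }

def chat(model: str, user_message: str) -> str:
    """POST the request to the local server and return the assistant's reply."""
    body = json.dumps(build_chat_request(model, user_message)).encode("utf-8")
    req = request.Request(
        BASE_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        data = json.loads(resp.read())
    return data["choices"][0]["message"]["content"]

# Example (assumes a local server is running with a loaded model):
# print(chat("local-model", "Summarize this document in one sentence."))
```

Because the request and response shapes follow the OpenAI chat format, swapping between local backends is usually just a matter of changing `BASE_URL`.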