A simple, unified interface to multiple Generative AI providers, including OpenAI, Anthropic, Azure, Google, AWS, Groq, Mistral, HuggingFace, and Ollama. It aims to make it easy to use multiple LLMs through a standardized interface similar to OpenAI's.
Learn how to build an open LLM app using Hermes 2 Pro, a powerful LLM based on Meta's Llama 3 architecture. This tutorial explains how to deploy Hermes 2 Pro locally, create a function that tracks flight status using the FlightAware API, and integrate it with the LLM.
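A minimal sketch of what such a flight-status function might look like. The AeroAPI base URL, header, and response fields below are assumptions drawn from FlightAware's public API documentation, not code from the tutorial, and the environment variable name is made up:

```python
import os
import requests

# Assumed FlightAware AeroAPI base URL; check FlightAware's docs for the current one
AEROAPI_BASE = "https://aeroapi.flightaware.com/aeroapi"


def get_flight_status(flight_ident: str) -> dict:
    """Look up the most recent status for a flight ident such as 'UAL123'."""
    resp = requests.get(
        f"{AEROAPI_BASE}/flights/{flight_ident}",
        headers={"x-apikey": os.environ["AEROAPI_KEY"]},  # hypothetical env var for the API key
        timeout=10,
    )
    resp.raise_for_status()
    flights = resp.json().get("flights", [])
    if not flights:
        return {"error": f"no flights found for {flight_ident}"}
    latest = flights[0]
    # Return only the fields an LLM needs to answer a status question
    return {
        "ident": latest.get("ident"),
        "status": latest.get("status"),
        "origin": (latest.get("origin") or {}).get("code"),
        "destination": (latest.get("destination") or {}).get("code"),
        "scheduled_out": latest.get("scheduled_out"),
        "estimated_in": latest.get("estimated_in"),
    }
```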
A tutorial showing how to bring real-time data to LLMs through function calling, using OpenAI's latest LLM, GPT-4o.
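For reference, this is the general shape of a function-calling request with the current openai Python client; the `get_current_weather` tool here is a placeholder, not the article's actual function:

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Describe a real-time data function so the model can decide when to call it
tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",  # placeholder real-time lookup
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Reykjavik right now?"}]
response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)

# If the model chose to call the function, its name and JSON arguments come back here
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
    # Run the real function, then send the result back as a 'tool' role message
```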
The api_base key can be used to point the OpenAI client library at a different API endpoint.
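For example, with the openai Python package (where the equivalent constructor argument is called base_url in versions 1.0 and later) you can point the client at any OpenAI-compatible server; the local URL and model name below are illustrative assumptions:

```python
from openai import OpenAI

# Point the OpenAI client library at a different, OpenAI-compatible endpoint.
# The URL is a hypothetical local server (e.g. an Ollama-style /v1 endpoint).
client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="not-needed-locally",  # many local servers ignore the key entirely
)

response = client.chat.completions.create(
    model="llama3",  # whatever model name the alternative endpoint exposes
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)
```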
The author has also automated their weeknotes by using an Observable notebook, which generates the "releases this week" and "TILs this week" sections.
The notebook fetches TILs from the author's Datasette, grabs releases from GitHub, and assembles a markdown string for the new post.
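The notebook itself is written for Observable (JavaScript), but the same idea can be sketched in Python; the Datasette URL, table and column names, and the single GitHub repo below are hypothetical stand-ins, not the author's actual sources:

```python
from datetime import datetime, timedelta, timezone
import requests

# Hypothetical endpoints: a Datasette TIL table as JSON and one repo's GitHub releases
TILS_URL = "https://tils.example.com/tils/til.json?_shape=array&_sort_desc=created&_size=50"
RELEASES_URL = "https://api.github.com/repos/example/example-repo/releases"

one_week_ago = datetime.now(timezone.utc) - timedelta(days=7)

tils = requests.get(TILS_URL, timeout=10).json()
releases = requests.get(RELEASES_URL, timeout=10).json()

lines = ["## Releases this week", ""]
for rel in releases:
    published = datetime.fromisoformat(rel["published_at"].replace("Z", "+00:00"))
    if published > one_week_ago:
        lines.append(f"- [{rel['name']}]({rel['html_url']})")

lines += ["", "## TILs this week", ""]
for til in tils:
    # 'created', 'title' and 'url' are assumed column names in the TIL table
    if datetime.fromisoformat(til["created"].replace("Z", "+00:00")) > one_week_ago:
        lines.append(f"- [{til['title']}]({til['url']})")

print("\n".join(lines))  # markdown ready to paste into the weeknotes post
```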
* `llm` CLI tool for running prompts against large language models
* Automation of weeknotes using an Observable notebook
* Notebook generates "releases this week" and "TILs this week" sections
* Tool stores prompts and responses in a SQLite database (see the sketch below)
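A small sketch of poking at that log database; `llm logs path` is the documented way to find it, while the `responses` table name is an assumption that may differ between llm versions:

```python
import sqlite3
import subprocess

# Ask the llm CLI where it keeps its SQLite log database
logs_path = subprocess.check_output(["llm", "logs", "path"], text=True).strip()

conn = sqlite3.connect(logs_path)
conn.row_factory = sqlite3.Row

# The schema varies between llm versions, so list the tables first
tables = [r["name"] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print("tables:", tables)

# Recent versions keep prompt/response pairs in a 'responses' table (assumption);
# adjust the table name to whatever the listing above shows.
if "responses" in tables:
    for row in conn.execute("SELECT * FROM responses ORDER BY rowid DESC LIMIT 3"):
        print(dict(row))
```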