This article walks through implementing function calling on the Mistral AI platform. The example builds an assistant that manages a home automation system through natural-language interactions with the user, covering how to declare the available functions, implement their logic, and wire them into the model's conversation loop.
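A minimal sketch of that pattern, assuming the mistralai Python client (v1-style `chat.complete` call); the `set_light` home-automation function is a placeholder for illustration:

```python
import json
from mistralai import Mistral  # assumes mistralai >= 1.0

client = Mistral(api_key="YOUR_API_KEY")

# Hypothetical home-automation function the model is allowed to call.
def set_light(room: str, state: str) -> str:
    return json.dumps({"room": room, "state": state, "ok": True})

tools = [{
    "type": "function",
    "function": {
        "name": "set_light",
        "description": "Turn a light in a given room on or off.",
        "parameters": {
            "type": "object",
            "properties": {
                "room": {"type": "string"},
                "state": {"type": "string", "enum": ["on", "off"]},
            },
            "required": ["room", "state"],
        },
    },
}]

messages = [{"role": "user", "content": "Please turn off the kitchen light."}]
response = client.chat.complete(
    model="mistral-large-latest",
    messages=messages,
    tools=tools,
    tool_choice="auto",
)

# If the model chose to call the tool, parse its arguments and execute it.
tool_call = response.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)
print(set_light(**args))
```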
Leverage validation functions to prevent your LLM outputs from falling off a cliff. This article shows how to use the Guardrails Python library to improve the reliability of LLM outputs by validating them with custom functions.
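A library-free sketch of the validate-and-retry idea behind it (the article itself uses Guardrails; the `generate` callable and the JSON check here are placeholders):

```python
import json

def is_valid_json(output: str) -> bool:
    """Custom validation function: accept only well-formed JSON."""
    try:
        json.loads(output)
        return True
    except json.JSONDecodeError:
        return False

def guarded_generate(generate, prompt: str, max_retries: int = 3) -> str:
    """Call the LLM, validate its output, and re-ask on failure."""
    for _ in range(max_retries):
        output = generate(prompt)
        if is_valid_json(output):
            return output
        # Feed the failure back to the model and try again.
        prompt += "\n\nYour previous answer was not valid JSON. Respond with valid JSON only."
    raise ValueError("LLM output failed validation after retries")
```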
This article introduces Google's top AI applications and how to get started with them, including Google Gemini, Google Cloud, TensorFlow, Experiments with Google, and AI Hub.
"The paper introduces a technique called LoReFT (Low-rank Linear Subspace ReFT). Similar to LoRA (Low Rank Adaptation), it uses low-rank approximations to intervene on hidden representations. It shows that linear subspaces contain rich semantics that can be manipulated to steer model behaviors."
This article guides you through the process of building a simple agent in LangChain using Tools and Toolkits. It explains the basics of Agents, their components, and how to build a Mathematics Agent that can perform simple mathematical operations.
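A condensed sketch of such a Mathematics Agent, assuming a recent LangChain release with the tool-calling agent API and an OpenAI chat model (the model name is an assumption):

```python
from langchain_core.tools import tool
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor

@tool
def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b

@tool
def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b

tools = [add, multiply]

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful math assistant. Use the tools for arithmetic."),
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),  # slot for intermediate tool calls
])

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

print(executor.invoke({"input": "What is 3.5 * 12 + 7?"}))
```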
Quadratic is a modern spreadsheet that combines the familiarity of a spreadsheet with the power of code, allowing you to work with data and code collaboratively in real-time. It supports popular programming languages like Python, SQL, and JavaScript, and offers features such as dynamic charts, APIs, multi-line formulas, and AI integration.
A tutorial showing you how to bring real-time data to LLMs through function calling, using OpenAI's latest LLM, GPT-4o.
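The core of that pattern looks roughly like this with the openai Python client; the `get_current_weather` function and its data are placeholders for a live API:

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def get_current_weather(city: str) -> str:
    # Placeholder: in the tutorial this would call a live weather API.
    return json.dumps({"city": city, "temp_c": 21, "condition": "sunny"})

tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Berlin right now?"}]
first = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)

# Execute the function the model asked for.
call = first.choices[0].message.tool_calls[0]
result = get_current_weather(**json.loads(call.function.arguments))

# Feed the tool result back so the model can answer with real-time data.
messages.append(first.choices[0].message)
messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
final = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
print(final.choices[0].message.content)
```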
This article by Dmitrii Eliuseev shows how to run and test small language models, the 3.8B Phi-3 and the 8B Llama-3, on a PC and a Raspberry Pi using LlamaCpp and ONNX.
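For the LlamaCpp path, running a quantized Phi-3 locally comes down to a few lines with llama-cpp-python; the GGUF filename below is a placeholder:

```python
from llama_cpp import Llama

# Load a quantized GGUF model; n_ctx and n_threads are kept modest for small machines like a Raspberry Pi.
llm = Llama(
    model_path="Phi-3-mini-4k-instruct-q4.gguf",  # placeholder filename
    n_ctx=2048,
    n_threads=4,
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what a Raspberry Pi is in one sentence."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```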
Verba is an open-source application designed to offer an end-to-end, streamlined, and user-friendly interface for Retrieval-Augmented Generation (RAG) out of the box. It supports various RAG techniques, data types, and LLM providers, and offers Docker support and a fully customizable frontend.