This article provides a comprehensive guide on implementing the Model Context Protocol (MCP) with Ollama and Llama 3, covering practical implementation steps and use cases.
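MCP is built on JSON-RPC 2.0 messages exchanged over a transport such as stdio. As a rough sketch of what those messages look like (the tool name `get_time` and its arguments are made up for illustration, not part of any real server):

```python
import json

def make_request(method, params, req_id):
    """Build an MCP-style JSON-RPC 2.0 request object."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# A client typically lists the server's tools, then invokes one by name.
list_tools = make_request("tools/list", {}, 1)
call_tool = make_request(
    "tools/call",
    {"name": "get_time", "arguments": {"timezone": "UTC"}},  # hypothetical tool
    2,
)

# Over a stdio transport, each message is serialized as one JSON line.
print(json.dumps(call_tool))
```

The guide itself walks through wiring such a server to Ollama so that Llama 3 can issue these calls.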
A guide to setting up local LLMs on Linux using llama.cpp, llama-server, llama-swap, and QwenCode for workflows such as chat, coding, and data analysis.
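Once llama-server is running, it exposes an OpenAI-compatible HTTP API, so any client can talk to the local model. A minimal sketch of building and sending such a request, assuming a server on port 8080 and a model alias of `"local"` (both are assumptions; match them to your llama-server / llama-swap configuration):

```python
import json
from urllib import request

API_URL = "http://localhost:8080/v1/chat/completions"  # assumed local endpoint

def build_payload(prompt, model="local"):
    """Assemble an OpenAI-style chat completion request body."""
    return {
        "model": model,  # llama-swap routes requests by this model name
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def chat(prompt):
    """POST the payload to the local server and return the reply text."""
    data = json.dumps(build_payload(prompt)).encode()
    req = request.Request(
        API_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

payload = build_payload("Summarize this log file.")
print(json.dumps(payload, indent=2))
# chat("Summarize this log file.")  # requires llama-server running locally
```

Because the API shape matches OpenAI's, existing tooling (including QwenCode pointed at a local base URL) can reuse the same request format.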
A tutorial showing you how to bring real-time data to LLMs through function calling, using OpenAI's GPT-4o.
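The core of function calling is: describe a tool to the model as a JSON schema, let the model return a tool call instead of a direct answer, execute it, and feed the result back. A hedged sketch of that loop (the `get_current_price` function, its fake quote data, and the simulated model response are illustrative, not a real market API or actual model output):

```python
import json

# Tool description passed to the model in the request's `tools` field.
tools = [{
    "type": "function",
    "function": {
        "name": "get_current_price",
        "description": "Fetch the latest price for a stock ticker.",
        "parameters": {
            "type": "object",
            "properties": {"ticker": {"type": "string"}},
            "required": ["ticker"],
        },
    },
}]

def get_current_price(ticker):
    # Stand-in for a real market-data lookup.
    return {"ticker": ticker, "price": 123.45}

# Simulated tool call the model might emit for
# "What is AAPL trading at right now?" (arguments arrive as a JSON string).
model_tool_call = {
    "name": "get_current_price",
    "arguments": json.dumps({"ticker": "AAPL"}),
}

# Dispatch: look up the local function by name, parse the arguments, run it.
dispatch = {"get_current_price": get_current_price}
args = json.loads(model_tool_call["arguments"])
result = dispatch[model_tool_call["name"]](**args)
print(result)
```

In the real flow, `result` is sent back to the model as a tool message so it can compose the final, data-grounded answer.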