klotz: llm*

  1. This paper proposes the Knowledge Graph of Thoughts (KGoT) architecture for AI assistants, integrating LLM reasoning with dynamically constructed knowledge graphs to reduce costs and improve performance on complex tasks like the GAIA benchmark.
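
    As a toy illustration of the general idea only (not the KGoT paper's implementation; the triples and task below are hypothetical), an agent can accumulate LLM-extracted facts in a graph and query the graph instead of re-reasoning over raw text at each step:

    ```python
    # Toy sketch of the knowledge-graph-of-thoughts idea, not the KGoT paper's
    # implementation: store extracted (subject, relation, object) triples in a
    # graph, then answer follow-up questions by traversing it.
    import networkx as nx

    kg = nx.DiGraph()

    def add_triple(subject: str, relation: str, obj: str) -> None:
        """Record one extracted fact as a labeled edge."""
        kg.add_edge(subject, obj, relation=relation)

    # Hypothetical facts an LLM might extract while working a GAIA-style task.
    add_triple("paper_X", "published_in", "2021")
    add_triple("paper_X", "authored_by", "Smith")
    add_triple("Smith", "affiliated_with", "ETH Zurich")

    def facts_about(entity: str) -> list[tuple[str, str]]:
        """Return the (relation, object) pairs recorded for an entity."""
        return [(d["relation"], o) for _, o, d in kg.out_edges(entity, data=True)]

    print(facts_about("paper_X"))  # [('published_in', '2021'), ('authored_by', 'Smith')]
    ```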

  2. This article details the development and implementation of an MCP (Model Context Protocol) server for scheduling social media posts within Postiz, an open-source social media scheduling tool. It discusses the challenges of using SSE (Server-Sent Events) as a transport and the benefits of WebSockets, as well as techniques for forcing LLMs to execute necessary configuration steps before scheduling. It highlights the use of decorators for creating API endpoints and the potential for integrating Postiz with tools like Cursor and Notion.
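
    To show the decorator pattern in general terms, here is a minimal sketch using the official MCP Python SDK's FastMCP (not the Postiz codebase; the tool name and fields are hypothetical):

    ```python
    # Sketch: a decorator-registered MCP tool, using the official Python MCP SDK.
    # The scheduling tool and its fields are hypothetical, not Postiz's actual API.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("postiz-scheduler")

    @mcp.tool()
    def schedule_post(channel: str, content: str, publish_at: str) -> str:
        """Schedule a social media post for a channel at an ISO-8601 timestamp."""
        # A real server would first verify configuration (accounts, auth) and
        # refuse to schedule until those steps are complete.
        return f"Scheduled on {channel} at {publish_at}: {content[:40]}..."

    if __name__ == "__main__":
        mcp.run()  # defaults to stdio transport; SSE/WebSocket setups vary
    ```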

    2025-04-21 by klotz
  3. This article provides a hands-on guide to Anthropic’s Model Context Protocol (MCP), an open protocol designed to standardize connections between AI systems and data sources. It covers how to set up and use MCP with Claude Desktop and Open WebUI, along with potential challenges and future developments.

  4. A Reddit thread discussing preferred local Large Language Model (LLM) setups for tasks like summarizing text, coding, and general use. Users share their model choices (Gemma, Qwen, Phi, etc.) and frameworks (llama.cpp, Ollama, EXUI) along with potential issues and configurations.

    Model | Use Cases | Size (Parameters) | Approx. VRAM (Q4 Quantization) | Approx. RAM (Q4) | Notes/Requirements
    Gemma 3 (Google) | Summarization, conversational tasks, image recognition, translation, simple writing | 1B, 4B, 12B, 27B | 2-4GB (4B), 8-12GB (12B) | 4-8GB (4B), 16-24GB (12B) | Excellent performance for its size. Recent versions have had memory-leak issues (see the Reddit post; use Ollama 0.6.6 or later, though even that may not be fully fixed). QAT versions are highly recommended.
    Qwen 2.5 (Alibaba) | Summarization, coding, reasoning, decision-making, technical material processing | 3B, 7B, 72B | 2-3GB (3B), 4-6GB (7B), 26-30GB (72B) | 4-6GB (3B), 8-12GB (7B), 50-60GB (72B) | Known for strong performance; Coder variants are specifically tuned for code generation.
    Qwen3 (Alibaba, upcoming) | General purpose; likely similar to Qwen 2.5 with improvements | 70B | ~25-30GB (estimated) | 50-60GB | Expected to be a strong competitor.
    Llama 3 (Meta) | General purpose, conversation, writing, coding, reasoning | 8B, 70B | 4-6GB (8B), 25-30GB (70B) | 8-12GB (8B), 50-60GB (70B) | Widely used open-weight model with an excellent balance of performance and size.
    YiXin (01.AI) | Reasoning, brainstorming | 72B | ~26-30GB | ~50-60GB | A powerful model focused on reasoning and understanding; VRAM requirements similar to Qwen 72B.
    Phi-4 (Microsoft) | General purpose, writing, coding | 14B | ~7-9GB | 14-18GB | Smaller model, good for resource-constrained environments, but may not match larger models in complexity.
    Ling-Lite | RAG (Retrieval-Augmented Generation), fast processing, text extraction | Variable | Varies with size | Varies with size | MoE (Mixture of Experts) model known for speed; good for RAG applications where quick responses matter.

    Key Considerations:

    • Quantization: The VRAM and RAM estimates above assume 4-bit quantization (Q4). Lower-precision quantization (e.g., Q2) reduces memory use further but may hurt quality; higher-precision formats (e.g., Q8, FP16) improve quality but require significantly more memory. A back-of-the-envelope sizing sketch follows this list.
    • Frameworks: Popular frameworks for running these models locally include:
      • llama.cpp: Highly optimized for CPU and GPU, especially on Apple Silicon.
      • Ollama: Simplified setup and management of LLMs. (Be aware of the Gemma 3 memory leak issue!)
      • Text Generation WebUI (oobabooga): Web-based interface with many features and customization options.
    • Hardware: A dedicated GPU with sufficient VRAM is highly recommended for decent performance. CPU-only inference is possible but can be slow. More RAM is generally better, even if the model fits in VRAM.
    • Context Length: The "40k" context mentioned in the Reddit post refers to the maximum number of tokens (words or sub-words) the model can process at once. Longer context lengths require more memory.
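
    The rule of thumb behind those estimates (an illustrative assumption, not figures from the thread): weight memory ≈ parameter count × bits-per-weight / 8, plus a KV cache that grows with context length. A minimal sketch:

    ```python
    # Back-of-the-envelope LLM memory estimate (illustrative assumptions,
    # not measured figures): weights = params * bits / 8, plus KV cache.

    def weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
        """Approximate memory for the model weights alone."""
        return params_billion * 1e9 * bits_per_weight / 8 / 1e9

    def kv_cache_gb(context_tokens: int, layers: int, kv_heads: int,
                    head_dim: int, bytes_per_value: int = 2) -> float:
        """Approximate KV cache: 2 (K and V) * tokens * layers * kv_heads * head_dim."""
        return 2 * context_tokens * layers * kv_heads * head_dim * bytes_per_value / 1e9

    # Example: a 7B model at Q4 (~4.5 effective bits is a common assumption for Q4_K_M).
    print(f"7B @ Q4 weights: ~{weight_memory_gb(7, 4.5):.1f} GB")  # ~3.9 GB + overhead
    # A 40k-token context with hypothetical dimensions (layers/heads vary per model).
    print(f"40k-token KV cache: ~{kv_cache_gb(40_000, 32, 8, 128):.1f} GB")
    ```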
  5. This tutorial details how to use FastAPI-MCP to convert a FastAPI endpoint (fetching US National Park alerts) into an MCP-compatible server. It covers environment setup, app creation, testing, and MCP server implementation with Cursor IDE.
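
    A condensed sketch of the pattern, assuming the fastapi-mcp package's FastApiMCP wrapper; the route, parameter names, and NPS API details are illustrative rather than the tutorial's exact code:

    ```python
    # Sketch: expose a FastAPI endpoint as an MCP tool with fastapi-mcp.
    # Assumes `pip install fastapi fastapi-mcp httpx`; endpoint details illustrative.
    import httpx
    from fastapi import FastAPI
    from fastapi_mcp import FastApiMCP

    app = FastAPI()

    @app.get("/alerts", operation_id="get_park_alerts")
    async def get_park_alerts(park_code: str) -> dict:
        """Fetch current alerts for a US National Park from the NPS API."""
        async with httpx.AsyncClient() as client:
            resp = await client.get(
                "https://developer.nps.gov/api/v1/alerts",
                params={"parkCode": park_code, "api_key": "YOUR_NPS_API_KEY"},
            )
            resp.raise_for_status()
            return resp.json()

    # Mount an MCP server that mirrors the app's endpoints as tools.
    mcp = FastApiMCP(app)
    mcp.mount()
    ```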

    2025-04-20 by klotz
  6. Researchers from AWS and Intuit have designed a zero-trust security framework for the Model Context Protocol (MCP), addressing threats like tool poisoning and unauthorized access through multi-layered defenses including Just-in-Time access control and behavior-based monitoring.
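
    As a toy illustration of the Just-in-Time idea only (hypothetical, not the AWS/Intuit framework): issue short-lived grants per tool and check them on every invocation, logging denials as input for behavior-based monitoring.

    ```python
    # Toy illustration of Just-in-Time access control for tool calls (hypothetical,
    # not the AWS/Intuit framework): grants expire and every invocation is checked.
    import time

    _grants: dict[str, float] = {}  # tool name -> expiry timestamp

    def grant(tool: str, ttl_seconds: float = 60.0) -> None:
        """Issue a short-lived permission for one tool."""
        _grants[tool] = time.monotonic() + ttl_seconds

    def invoke(tool: str, func, *args):
        """Run a tool only while its grant is valid; log the attempt either way."""
        if time.monotonic() >= _grants.get(tool, 0.0):
            print(f"DENY  {tool}: no valid grant")  # feed into behavior monitoring
            raise PermissionError(f"{tool} requires a fresh grant")
        print(f"ALLOW {tool}")
        return func(*args)

    grant("read_file", ttl_seconds=30)
    invoke("read_file", lambda p: f"contents of {p}", "/tmp/notes.txt")
    ```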

  7. This article details the author's insights into AI function calling, its challenges, and the Agentica framework developed to address them, emphasizing the importance of JSON schema understanding, compiler support, and a document-driven approach.
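
    For context, a callable function is typically advertised to the model as a JSON schema, and the application dispatches whatever call the model returns. A generic sketch (the weather function is the standard illustrative example, not from the article):

    ```python
    # Generic function-calling sketch: describe a function as JSON schema,
    # then dispatch the call the model returns. All names are illustrative.
    import json

    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["city"],
            },
        },
    }]

    def get_weather(city: str, unit: str = "celsius") -> str:
        return f"22 degrees {unit} in {city}"  # stub implementation

    # Pretend the model returned this tool call; a real app gets it from the API.
    model_call = {"name": "get_weather", "arguments": json.dumps({"city": "Oslo"})}
    dispatch = {"get_weather": get_weather}
    print(dispatch[model_call["name"]](**json.loads(model_call["arguments"])))
    ```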

  8. This article details a comparison between Model Context Protocol (MCP) and Function Calling, two methods for integrating Large Language Models (LLMs) with external systems. It covers their architectures, security models, scalability, and suitable use cases, highlighting the strengths and weaknesses of each approach.

    MCP is best suited for robust, complex applications in secure enterprise environments, while Function Calling excels at straightforward, dynamic task execution. The choice depends on the project's functional needs, security requirements, scalability demands, and available resources.

    2025-04-19 by klotz
  9. MCP Server | Key Functionality | Link to Repository
    Filesystem | Read/write/manage files & directories | https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem
    Google Drive | Search & access files in Google Drive | https://github.com/modelcontextprotocol/servers/tree/main/src/gdrive
    Slack | Interact with Slack workspaces (messages, channels, users) | https://github.com/modelcontextprotocol/servers/tree/main/src/slack
    Memory | Store & retrieve contextual data (entities, relations) | https://github.com/modelcontextprotocol/servers/tree/main/src/memory
    Spotify | Control Spotify playback & access content | https://github.com/varunneal/spotify-mcp
    Notion | Manage to-do lists in Notion | https://github.com/danhilse/notion_mcp
    Email | Send & manage emails with attachments | https://github.com/Shy2593666979/mcp-server-email
    Windows Control | Programmatically control Windows system operations | https://github.com/Cheffromspace/MCPContro
    Excel | Manipulate Excel files (create, read, write, format) | https://github.com/haris-musa/excel-mcp-server
    Fetch | Retrieve & convert web page content to markdown | https://github.com/modelcontextprotocol/servers/tree/main/src/fetch
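
    A minimal client-side sketch of talking to one of these servers over stdio, assuming the official mcp Python SDK (the server package and root directory here are illustrative):

    ```python
    # Sketch: connect to the Filesystem MCP server over stdio and list its tools.
    # Assumes `pip install mcp` and Node available for npx; paths are illustrative.
    import asyncio
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main() -> None:
        params = StdioServerParameters(
            command="npx",
            args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
        )
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                tools = await session.list_tools()
                print([t.name for t in tools.tools])

    asyncio.run(main())
    ```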
    2025-04-19 by klotz
  10. This document details how to run Gemma models, covering framework selection, variant choice, and running generation/inference requests. It emphasizes considering available hardware resources and provides recommendations for beginners.
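
    For one common route (Ollama), a minimal generation sketch, assuming the ollama Python package and a pulled Gemma tag (the exact tag is an assumption and varies by release):

    ```python
    # Sketch: one-off generation against a local Gemma via Ollama's Python client.
    # Assumes `ollama pull gemma3:4b` has been run; the model tag is an assumption.
    import ollama

    response = ollama.chat(
        model="gemma3:4b",
        messages=[{"role": "user", "content": "Summarize what MoE means in one sentence."}],
    )
    print(response["message"]["content"])
    ```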

    2025-04-18 by klotz
