klotz: llm* + python*


  1. Here’s the simplest version — key sentence extraction:


    <pre>
    ```
    def extract_relevant_sentences(document, query, top_k=5):
        sentences = document.split('.')
        query_embedding = embed(query)
        scored = []
        for sentence in sentences:
            similarity = cosine_sim(query_embedding, embed(sentence))
            scored.append((sentence, similarity))
        scored.sort(key=lambda x: x[1], reverse=True)
        return '. '.join(s[0] for s in scored[:top_k])
    ```
    </pre>

    For each sentence, compute its similarity to the query. Keep the top `top_k` (5 by default) and discard the rest.
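
    The snippet above assumes `embed` and `cosine_sim` are provided elsewhere (in practice, an embedding model such as one served via sentence-transformers or an LLM API). A self-contained sketch, substituting a simple bag-of-words counter for the real `embed` so the idea runs end to end:

    ```python
    import math
    from collections import Counter

    def embed(text):
        # Hypothetical stand-in for a real embedding model:
        # a sparse bag-of-words vector (word -> count).
        return Counter(text.lower().split())

    def cosine_sim(a, b):
        # Cosine similarity between two sparse count vectors.
        dot = sum(a[w] * b[w] for w in a if w in b)
        norm_a = math.sqrt(sum(v * v for v in a.values()))
        norm_b = math.sqrt(sum(v * v for v in b.values()))
        if norm_a == 0 or norm_b == 0:
            return 0.0
        return dot / (norm_a * norm_b)

    def extract_relevant_sentences(document, query, top_k=5):
        sentences = [s.strip() for s in document.split('.') if s.strip()]
        query_embedding = embed(query)
        scored = []
        for sentence in sentences:
            scored.append((sentence, cosine_sim(query_embedding, embed(sentence))))
        # Highest-similarity sentences first; keep the top_k.
        scored.sort(key=lambda x: x[1], reverse=True)
        return '. '.join(s[0] for s in scored[:top_k])

    doc = ("Cats sleep a lot. Dogs bark at strangers. "
           "The cat chased a mouse. Stocks fell today.")
    print(extract_relevant_sentences(doc, "cat behavior", top_k=2))
    ```

    Swapping `embed` for a real sentence-embedding model is the only change needed to make this semantic rather than lexical matching.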
  2. This guide explains how to use tool calling with local LLMs, including examples with mathematical, story, Python code, and terminal functions, using llama.cpp, llama-server, and OpenAI endpoints.
  3. daggr is a Python library for building AI workflows that connect Gradio apps, ML models (through Hugging Face Inference Providers), and custom Python functions. It automatically generates a visual canvas for your workflow, letting you inspect intermediate outputs, rerun any step any number of times, and preserve state for complex or long-running workflows.
  4. An AI-powered document search agent that explores files like a human would — scanning, reasoning, and following cross-references. Unlike traditional RAG systems that rely on pre-computed embeddings, this agent dynamically navigates documents to find answers.
  5. Minimal Claude Code alternative. Single Python file, zero dependencies, ~250 lines.
    2026-01-12 by klotz
  6. This document provides guidelines for maintaining high-quality Python code, specifically for AI coding agents. It covers principles, tools, style, documentation, testing, and security best practices.
  7. FailSafe is an open-source, modular framework designed to automate the verification of textual claims. It employs a multi-stage pipeline that integrates Large Language Models (LLMs) with retrieval-augmented generation (RAG) techniques.
  8. Python implementation of Recursive Language Models for processing unbounded context lengths. Process 100k+ tokens with any LLM by storing context as variables instead of prompts.
  9. This article details how to build a 100% local MCP (Model Context Protocol) client using LlamaIndex, Ollama, and LightningAI. It provides a code walkthrough and explanation of the process, including setting up an SQLite MCP server and a locally served LLM.
  10. A curated repository of AI-powered applications and agentic systems showcasing practical use cases of Large Language Models (LLMs) from providers like Google, Anthropic, OpenAI, and self-hosted open-source models.


