Tags: openai*


  1. This tutorial provides a comprehensive guide on using Google's LangExtract library to transform unstructured text into machine-readable structured data. By leveraging OpenAI models, the guide demonstrates how to build reusable extraction pipelines for various document types such as legal contracts, meeting notes, and product announcements. The workflow includes setting up dependencies, designing precise prompts with example annotations for grounding, and implementing interactive visualizations of extracted entities.
    Key topics covered:
    - Implementing structured data extraction using LangExtract and OpenAI
    - Designing prompt templates and providing few-shot examples for entity recognition
    - Building specialized pipelines for contract risk analysis and meeting action item tracking
    - Handling long-document intelligence and batch processing workflows
    - Visualizing extracted information through HTML and organizing results into tabular datasets via Pandas
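The "example annotations for grounding" step above can be sketched in plain Python. The class names below mirror the ExampleData/Extraction pattern the tutorial describes, but they are local stand-ins, not LangExtract's actual API; the prompt-assembly helper is likewise illustrative.

```python
from dataclasses import dataclass

# Local stand-ins for the example-annotation pattern the tutorial uses:
# each few-shot example pairs source text with the entities it grounds.
@dataclass
class Extraction:
    extraction_class: str
    extraction_text: str

@dataclass
class ExampleData:
    text: str
    extractions: list

def build_prompt(task: str, examples: list, document: str) -> str:
    """Assemble a grounded few-shot extraction prompt."""
    parts = [task, ""]
    for ex in examples:
        parts.append(f"Text: {ex.text}")
        for e in ex.extractions:
            parts.append(f"  {e.extraction_class}: {e.extraction_text}")
    parts += ["", f"Text: {document}"]
    return "\n".join(parts)

examples = [
    ExampleData(
        text="Payment is due within 30 days of invoice.",
        extractions=[Extraction("payment_term", "30 days")],
    )
]
prompt = build_prompt(
    "Extract contract clauses as labelled entities.",
    examples,
    "Either party may terminate with 60 days notice.",
)
print(prompt.splitlines()[0])  # → Extract contract clauses as labelled entities.
```

The annotated examples double as documentation of the schema: each extraction class seen in the examples tells the model exactly what to label in new contract text.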
  2. This article explores the concept of an "agent harness," the essential software infrastructure that wraps around a Large Language Model (LLM) to enable autonomous, goal-directed behavior. While foundation models provide the core reasoning capabilities, the harness manages the orchestration loop, tool integration, memory, context management, state persistence, and error handling. The author breaks down the eleven critical components of a production-grade harness, drawing insights from industry leaders such as Anthropic, OpenAI, and LangChain. By comparing the harness to an operating system and the LLM to a CPU, the piece provides a technical framework for understanding how to move from simple demos to robust, production-ready AI agents.
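The orchestration loop at the heart of such a harness can be sketched in a few lines. Everything here is illustrative, not any vendor's API: the model is stubbed with a scripted callback, and the harness owns tool dispatch, error handling, history (memory), and the step budget.

```python
# Minimal sketch of a harness orchestration loop; the `propose` callback
# stands in for the LLM, and the tool names are invented for illustration.
def run_agent(propose, tools, goal, max_steps=5):
    history = [("goal", goal)]
    for _ in range(max_steps):
        action = propose(history)            # model picks the next step
        if action["tool"] == "finish":
            return action["answer"]
        tool = tools.get(action["tool"])
        try:
            result = tool(action["input"]) if tool else f"unknown tool {action['tool']}"
        except Exception as exc:             # error handling stays in the harness
            result = f"error: {exc}"
        history.append((action["tool"], result))  # memory / context management
    return "step budget exhausted"

# Scripted stand-in for the LLM: look something up, then finish.
def scripted_model(history):
    if len(history) == 1:
        return {"tool": "search", "input": "capital of France"}
    return {"tool": "finish", "answer": history[-1][1]}

tools = {"search": lambda q: "Paris" if "France" in q else "no result"}
print(run_agent(scripted_model, tools, "Find the capital of France"))  # → Paris
```

In the article's operating-system analogy, `run_agent` is the kernel: the LLM only proposes actions, while scheduling, I/O, and fault handling live in the harness.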
  3. ShellGPT is a powerful command-line productivity tool driven by large language models like GPT-4. It is designed to streamline the development workflow by generating shell commands, code snippets, and documentation directly within the terminal, reducing the need for external searches. The tool supports multiple operating systems including Linux, macOS, and Windows, and is compatible with various shells such as Bash, Zsh, and PowerShell. Beyond simple queries, it offers advanced features like shell integration for automated command execution, a REPL mode for interactive chatting, and the ability to implement custom function calls. Users can also leverage local LLM backends like Ollama for a free, privacy-focused alternative to OpenAI's API.
  4. This article provides a hands-on coding guide to explore nanobot, a lightweight personal AI agent framework. It details recreating core subsystems like the agent loop, tool execution, memory persistence, skills loading, session management, subagent spawning, and cron scheduling. The tutorial uses OpenAI’s gpt-4o-mini and demonstrates building a multi-step research pipeline capable of file operations, long-term memory storage, and concurrent background tasks. The goal is to understand not just how to *use* nanobot, but how to *extend* it with custom tools and architectures.
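The memory-persistence subsystem mentioned above can be sketched as a small file-backed store. The JSON schema and class here are invented for illustration and are not nanobot's actual implementation.

```python
import json
import tempfile
from pathlib import Path

# Sketch of file-backed long-term memory in the spirit of nanobot's
# memory-persistence subsystem; the on-disk schema is invented.
class Memory:
    def __init__(self, path: Path):
        self.path = path
        self.items = json.loads(path.read_text()) if path.exists() else {}

    def remember(self, key: str, value: str) -> None:
        self.items[key] = value
        self.path.write_text(json.dumps(self.items))  # persist on every write

    def recall(self, key: str, default=None):
        return self.items.get(key, default)

store_path = Path(tempfile.mkdtemp()) / "memory.json"
Memory(store_path).remember("project", "research pipeline")
# A fresh instance reloads state from disk, surviving across sessions:
print(Memory(store_path).recall("project"))  # → research pipeline
```

Persisting on every write keeps the store crash-safe at the cost of extra I/O, which is a reasonable trade-off for a personal agent's small memory.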
  5. This article details a tutorial on building cybersecurity AI agents using the CAI framework. It guides readers through setting up the environment with Colab, loading API keys, and creating base agents. The tutorial progresses to advanced capabilities, including custom function tools, multi-agent handoffs, agent orchestration, input guardrails, and dynamic tools.
    It demonstrates how CAI transforms Python functions and agent definitions into flexible cybersecurity workflows capable of reasoning, delegating, validating, and responding in a structured way. The article also showcases CTF-style pipelines, multi-turn context handling, and streaming responses, offering a comprehensive overview of CAI's potential for security applications.
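The input-guardrail idea can be sketched as a wrapper that validates a prompt before the agent ever sees it. CAI's real decorator and agent API are not reproduced here; the blocklist and function names are illustrative only.

```python
# Illustrative input-guardrail pattern; CAI's actual API may differ.
BLOCKED = ("rm -rf", "drop table")

def input_guardrail(handler):
    """Reject prompts containing blocked terms before the agent runs."""
    def guarded(prompt: str) -> str:
        if any(term in prompt.lower() for term in BLOCKED):
            return "guardrail: request rejected"
        return handler(prompt)
    return guarded

@input_guardrail
def recon_agent(prompt: str) -> str:
    # Stand-in for an LLM-backed agent turn.
    return f"analysing: {prompt}"

print(recon_agent("scan 10.0.0.0/24"))         # → analysing: scan 10.0.0.0/24
print(recon_agent("please DROP TABLE users"))  # → guardrail: request rejected
```

Because the guardrail wraps the handler rather than living inside it, the same check can be applied uniformly across every agent in a multi-agent pipeline.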
    2026-03-31 by klotz
  6. OpenAI has expanded its Responses API to facilitate the development of agentic workflows. This includes support for a shell tool, an agent execution loop, a hosted container workspace, context compaction, and reusable agent skills. The new features aim to offload the complexities of building execution environments from developers, providing managed infrastructure for file management, prompt optimization, secure network access, and timeout handling.
    A core component is the agent execution loop, where the model proposes actions (running commands, querying data) that are executed in a controlled environment, with the results fed back to refine the process. Skills allow for the creation of reusable task patterns.
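Context compaction, one of the features listed above, can be sketched as follows: when the transcript grows past a budget, older turns collapse into a summary entry while recent turns stay verbatim. The summariser here is a trivial placeholder for the model-generated summary the managed service would produce; the parameters are invented.

```python
# Sketch of context compaction: collapse older turns into one summary
# entry once the transcript exceeds a budget. Real systems would ask the
# model to write the summary; this placeholder just counts the turns.
def compact(history, budget=4, keep_recent=2):
    if len(history) <= budget:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    summary = f"[summary of {len(old)} earlier turns]"
    return [summary] + recent

history = [f"turn {i}" for i in range(1, 7)]
print(compact(history))  # → ['[summary of 4 earlier turns]', 'turn 5', 'turn 6']
```

Keeping the most recent turns verbatim matters because the execution loop feeds the latest tool results straight back into the next model call.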
  7. Salute is a JavaScript library designed for controlling Large Language Models (LLMs) with a React-like, declarative approach. It emphasizes composability, minimal abstraction, and transparency, so you see exactly what prompts are being sent to the LLM. Salute offers low-level control and supports features like type-checking, linting, and auto-completion for a smoother development experience. The library's design allows for easy creation of chat sequences, nesting of components, and dynamic prompt generation. It's compatible with OpenAI models but is intended to support any LLM in the future.
    2026-03-20 by klotz
  8. This article presents findings from a survey of over 900 software engineers regarding their use of AI tools. Key findings include the dominance of Claude Code, the mainstream adoption of AI in software engineering (95% weekly usage), the increasing use of AI agents (especially among staff+ engineers), and the influence of company size on tool choice. The survey also reveals which tools engineers love, with Claude Code being particularly favored, and provides demographic information about the respondents. A longer, 35-page report with additional details is available for full subscribers.
  9. This article details how to use Ollama to run large language models locally, protecting sensitive data by keeping it on your machine. It covers installation, usage with Python, LangChain, and LangGraph, and provides a practical example with FinanceGPT, while also discussing the tradeoffs of using local LLMs.
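A minimal version of the Python usage can be sketched with the `ollama` client package. The model name, system prompt, and helper names below are assumptions for illustration; the sketch assumes an Ollama server is running locally with the chosen model already pulled.

```python
# Keep the message-building logic pure so it can be tested without a server.
def make_messages(system: str, question: str) -> list:
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

messages = make_messages(
    "You are FinanceGPT. Never send data off this machine.",
    "Summarise Q3 revenue risks.",
)

def ask_local(question: str, model: str = "llama3") -> str:
    """Query a locally running Ollama server; data never leaves the machine."""
    import ollama  # pip install ollama; assumes `ollama serve` is running
    reply = ollama.chat(
        model=model,
        messages=make_messages("You are FinanceGPT.", question),
    )
    return reply["message"]["content"]
```

Separating prompt construction from the network call mirrors the article's tradeoff discussion: the privacy win comes from where `ollama.chat` runs, not from how the messages are built.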
  10. Anthropic is clashing with the Pentagon over the military's use of its AI systems, particularly regarding autonomous weaponry and mass surveillance. A key point of contention arose when the Pentagon asked if Claude could be used to help intercept a nuclear missile, a request Anthropic resisted, raising concerns about unrestricted AI use and potential risks. OpenAI is also signaling it would take a similar stance.


SemanticScuttle - klotz.me: tagged with "openai"
