Tags: inference engineering


  1. SGLang is a fast serving framework for large language models and vision language models. It focuses on efficient serving and controllable interaction through co-designed backend runtime and frontend language.
  2. Running GenAI models is easy. Scaling them to thousands of users, not so much. This guide details avenues for scaling AI workloads from proofs of concept to production-ready deployments, covering API integration, on-prem deployment considerations, hardware requirements, and tools like vLLM and Nvidia NIMs.
  3. K8S-native cluster-wide deployment for vLLM. Provides a reference implementation for building an inference stack on top of vLLM, enabling scaling, monitoring, request routing, and KV cache offloading with easy cloud deployment.
  4. vLLM Production Stack provides a reference implementation on how to build an inference stack on top of vLLM, allowing for scalable, monitored, and performant LLM deployments using Kubernetes and Helm.
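A Kubernetes deployment along the lines items 3–4 describe boils down to a couple of Helm commands. A minimal sketch; the repository URL, chart name, service name, and values file below are assumptions from memory and should be verified against the vLLM Production Stack README:

```shell
# Add the vLLM Production Stack Helm repository (URL assumed).
helm repo add vllm https://vllm-project.github.io/production-stack
helm repo update

# Install the stack; the values file (name illustrative) declares
# which models to serve, replica counts, and GPU resources per pod.
helm install vllm vllm/vllm-stack -f my-values.yaml

# The stack's router exposes a single OpenAI-compatible endpoint
# (service name illustrative); check it from your workstation:
kubectl port-forward svc/vllm-router-service 30080:80
curl http://localhost:30080/v1/models
```

The router fans requests out across the vLLM replicas, which is where the monitoring, request routing, and KV cache offloading mentioned above come in.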
  5. A user is seeking advice on deploying a new server with 4x H100 GPUs (320GB VRAM) for on-premise AI workloads. They are considering a Kubernetes-based deployment with RKE2, the Nvidia GPU Operator, and tools like vLLM, llama.cpp, and LiteLLM. They are also exploring the option of GPU pass-through with a hypervisor. The post details their current infrastructure and asks for potential gotchas or best practices.
  6. Arch is an intelligent gateway for agents, designed to securely handle prompts, integrate with APIs, and provide rich observability, built on Envoy Proxy.

    The ArchGW project focuses on simplifying the development of **agentic applications** – applications powered by Large Language Models (LLMs) that can perform actions and interact with tools. Here's a breakdown of the use cases and examples highlighted:

    **Core Use Cases:**

    * **Routing:** Intelligent routing of prompts to the correct agents or tools.
    * **Tools Use:** Simplifying the integration of prompts with tools/APIs for common tasks.
    * **Guardrails:** Centralized configuration for safety and preventing harmful outcomes.
    * **LLM Access:** Centralized access and management of LLMs with retries for reliability.
    * **Observability:** Providing W3C-compatible tracing and metrics for monitoring LLM interactions.

    **Specific Examples & Demos:**

    * **Weather Forecast Agent:** A sample application demonstrating core function calling capabilities.
    * **Network Operator Agent:** An agent that can interact with network devices (retrieve statistics, reboot).
    * **Connecting to SaaS APIs:** Demonstrates integrating 3rd party SaaS APIs into agentic chat experiences.
    * **LLM Router:** Using Arch as a gateway to route requests to different LLMs (GPT-4o, Mistral) based on configuration or headers. The example shows how to switch between LLMs using the `x-arch-llm-provider-hint` header.
    * **Currency Exchange Agent:** A quickstart guide builds an agent that fetches currency exchange rates from an API (Frankfurter.app). This demonstrates setting up configuration files, starting the gateway, and interacting with the agent via curl.
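    The LLM Router example above can be exercised from a small client. A minimal sketch, assuming an OpenAI-style `/v1/chat/completions` route on the gateway; only the `x-arch-llm-provider-hint` header comes from the Arch docs, and the provider names are whatever is configured in the gateway:

    ```python
    import json

    def build_chat_request(prompt, provider_hint=None):
        """Assemble headers and body for an OpenAI-style chat call via Arch.

        When provider_hint is given, the x-arch-llm-provider-hint header
        asks the gateway to route the request to that configured provider.
        """
        headers = {"Content-Type": "application/json"}
        if provider_hint:
            headers["x-arch-llm-provider-hint"] = provider_hint
        body = json.dumps({
            "messages": [{"role": "user", "content": prompt}],
        })
        return headers, body

    # The same prompt, routed to two different LLMs by header alone.
    h1, _ = build_chat_request("Summarize this log.", provider_hint="gpt-4o")
    h2, _ = build_chat_request("Summarize this log.", provider_hint="mistral")
    ```

    Sending the request is then a plain `curl` (or `requests.post`) against the gateway, as in the currency-exchange quickstart; switching providers never touches application code, only the header.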

    **Overall, ArchGW aims to address common challenges in building agentic apps:**

    * Managing complex routing logic.
    * Integrating with various LLMs and tools.
    * Ensuring safety and reliability.
    * Providing observability into LLM interactions.


SemanticScuttle - klotz.me: tagged with "inference engineering"
