Tags: inference + production engineering


  1. Prove AI is developing an observability-first foundation designed for production generative AI systems. Their mission is to enable engineering teams to understand, diagnose, and remediate failures within complex AI pipelines, including LLM inference, retrieval processes, and agent orchestration.
    The current release, v0.1, provides an opinionated observability pipeline specifically for generative AI workloads through:
    - A containerized, OpenTelemetry-based telemetry pipeline.
    - Preconfigured collection of traces, metrics, and logs tailored for AI systems.
    - Instrumentation patterns for RAG pipelines, embeddings, LLM inference, and agent-based systems (a sketch after this list illustrates one such pattern).
    - Compatibility with standard backends like Prometheus.
  2. Running GenAI models is easy; scaling them to thousands of users is not. This guide details paths for taking AI workloads from proof of concept to production-ready deployment, covering API integration, on-prem deployment considerations, hardware requirements, and tools such as vLLM and NVIDIA NIMs (see the sketch for this entry after the list).
  3. Kubernetes-native, cluster-wide deployment for vLLM. Provides a reference implementation for building an inference stack on top of vLLM, enabling scaling, monitoring, request routing, and KV-cache offloading, with straightforward cloud deployment.
  4. The vLLM Production Stack is a reference implementation for building an inference stack on top of vLLM, enabling scalable, monitored, and performant LLM deployments using Kubernetes and Helm (a sketch after the list shows how to query its router).
  5. Explore the leading LLM inference engines and servers for deploying and serving LLMs in production, including vLLM, TensorRT-LLM, Triton Inference Server, RayLLM with Ray Serve, and Hugging Face Text Generation Inference (a sketch after the list demonstrates vLLM's offline API).
  6. This article explores how to deploy and manage machine learning models using Google Kubernetes Engine (GKE), Google AI Platform, and TensorFlow Serving, covering the steps to create a model and deploy it on a Kubernetes cluster for inference (the final sketch after the list queries such a serving endpoint).
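
Sketch for entry 1: a minimal trace around a single LLM call using the OpenTelemetry Python SDK, in the spirit of the instrumentation patterns listed above. The span and attribute names (llm.inference, llm.model, and so on) are illustrative assumptions rather than Prove AI's actual schema, and a production pipeline would export to an OTLP collector instead of the console.

    # Minimal OpenTelemetry setup: export spans to the console for demonstration.
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)
    tracer = trace.get_tracer("genai.demo")

    def generate(prompt: str) -> str:
        # Wrap the model call in a span so latency and sizes are recorded.
        with tracer.start_as_current_span("llm.inference") as span:
            span.set_attribute("llm.model", "example-model")  # assumed attribute name
            span.set_attribute("llm.prompt_chars", len(prompt))
            completion = "..."  # placeholder for the real inference call
            span.set_attribute("llm.completion_chars", len(completion))
            return completion

    generate("What is observability?")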
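
Sketch for entry 2: a vLLM OpenAI-compatible server and an NVIDIA NIM both speak the OpenAI /v1 chat API, so the same client code covers several of the deployment options the guide discusses. The base_url, api_key, and model name are assumptions for a local deployment.

    from openai import OpenAI

    # Point the standard OpenAI client at a self-hosted endpoint (assumed address).
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")
    response = client.chat.completions.create(
        model="meta-llama/Llama-3.1-8B-Instruct",  # assumed model name
        messages=[{"role": "user", "content": "Summarize vLLM in one sentence."}],
        max_tokens=64,
    )
    print(response.choices[0].message.content)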
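
Sketch for entries 3 and 4: once the stack's router Service is reachable (for example via kubectl port-forward), it fronts all vLLM replicas behind a single OpenAI-compatible endpoint. The address, port, and model name below are assumptions, not the Helm chart's defaults.

    import requests

    BASE = "http://localhost:30080/v1"  # assumed port-forwarded router address

    # List the models the router currently serves across its vLLM replicas.
    print(requests.get(f"{BASE}/models", timeout=10).json())

    # Send a completion; the router selects a backend replica according to
    # its configured routing policy.
    resp = requests.post(
        f"{BASE}/completions",
        json={"model": "facebook/opt-125m", "prompt": "Kubernetes is", "max_tokens": 16},
        timeout=60,
    )
    print(resp.json()["choices"][0]["text"])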
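
Sketch for entry 5: vLLM's offline, in-process API, the simplest entry point among the engines the article compares. The model name is an assumption chosen to run on modest hardware.

    from vllm import LLM, SamplingParams

    # Load a small model in-process; no server is involved in offline mode.
    llm = LLM(model="facebook/opt-125m")
    params = SamplingParams(temperature=0.8, max_tokens=32)

    # generate() batches prompts and returns one result per prompt.
    outputs = llm.generate(["The key tradeoff in LLM serving is"], params)
    for out in outputs:
        print(out.outputs[0].text)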
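
Sketch for entry 6: querying a TensorFlow Serving model over its REST predict endpoint, as would be exposed by a Service on the GKE cluster the article sets up. The host, model name, and input shape are assumptions and must match your model's serving signature.

    import requests

    # TensorFlow Serving's REST API listens on port 8501 by default.
    url = "http://serving-host:8501/v1/models/my_model:predict"  # assumed address and name
    payload = {"instances": [[1.0, 2.0, 3.0, 4.0]]}  # shape must match the model input
    resp = requests.post(url, json=payload, timeout=10)
    print(resp.json()["predictions"])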
