Tags: deployment*

  1. The article traces the evolution of model inference serving from 2017 to a projected 2025, from simple frameworks like Flask and FastAPI to purpose-built solutions like Triton Inference Server and vLLM. It details how larger and more complex models have raised the demands on inference infrastructure and driven optimization of throughput, latency, and cost.
  2. This article details the billing structure for GitHub Spark, covering costs associated with app creation (based on premium requests) and current limits for deployed apps. It also outlines future billing plans for deployed apps once limits are reached.
  3. A curated guide to code sandboxing solutions, covering technologies like MicroVMs, application kernels, language runtimes, and containerization. It provides a feature matrix, in-depth platform profiles (e2b, Daytona, microsandbox, WebContainers, Replit, Cloudflare Workers, Fly.io, Kata Containers), and a decision framework for choosing the right sandboxing solution based on security, performance, workload type, and hosting preferences.
  4. Running GenAI models is easy. Scaling them to thousands of users, not so much. This guide details avenues for scaling AI workloads from proofs of concept to production-ready deployments, covering API integration, on-prem deployment considerations, hardware requirements, and tools like vLLM and Nvidia NIMs (a minimal vLLM sketch follows this list).
  5. A K8s-native, cluster-wide deployment for vLLM. Provides a reference implementation for building an inference stack on top of vLLM, adding scaling, monitoring, request routing, and KV-cache offloading, with easy cloud deployment.
  6. vLLM Production Stack provides a reference implementation of how to build an inference stack on top of vLLM, enabling scalable, monitored, and performant LLM deployments using Kubernetes and Helm (a sketch of querying its OpenAI-compatible endpoint follows this list).
  7. A user is seeking advice on deploying a new server with 4x H100 GPUs (320GB VRAM) for on-premise AI workloads. They are considering a Kubernetes-based deployment with RKE2, the Nvidia GPU Operator, and tools like vLLM, llama.cpp, and LiteLLM, and are also exploring GPU pass-through with a hypervisor. The post details their current infrastructure and asks for potential gotchas and best practices.
  8. Flagger is a progressive-delivery Kubernetes operator that automates the release process for applications on Kubernetes. It reduces the risk of introducing a new software version in production by gradually shifting traffic to the new version while measuring metrics and running conformance tests (the canary loop it automates is sketched after this list).
  9. A hands-on guide with Python example code that walks through deploying an ML-based search API in a simple 3-step approach. The deployment strategy applies to most machine learning solutions, and the example code is available on GitHub; a sketch in the same spirit follows this list.
  10. This article explores how to deploy and manage machine learning models using Google Kubernetes Engine (GKE), Google AI Platform, and TensorFlow Serving, covering the steps to create a model and deploy it on a Kubernetes cluster for inference (a client-side sketch follows this list).
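
For item 4's tooling, a minimal sketch of single-node inference with vLLM's offline API, assuming `pip install vllm` and a CUDA GPU. The model name and sampling settings are illustrative assumptions, not values from the guide.

```python
# Minimal vLLM offline-inference sketch (assumes `pip install vllm` and a CUDA GPU).
# Model name and sampling settings are illustrative, not taken from the article.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # hypothetical model choice
params = SamplingParams(temperature=0.7, max_tokens=128)

# vLLM batches prompts internally (continuous batching, paged KV cache).
outputs = llm.generate(["Summarize the trade-offs of on-prem LLM serving."], params)
for out in outputs:
    print(out.outputs[0].text)
```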
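For items 5 and 6, the production stack's router speaks the OpenAI-compatible API, so a client can use the `openai` package. The base URL, port, and model name below are assumptions about one particular Helm install, not documented defaults.

```python
# Sketch of calling an OpenAI-compatible vLLM endpoint (e.g. the production-stack
# router). URL, port, and model name are assumptions about one deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:30080/v1",  # hypothetical router Service address
    api_key="EMPTY",                       # vLLM ignores the key unless auth is configured
)

resp = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # must match the model the stack serves
    messages=[{"role": "user", "content": "What does the KV cache store?"}],
)
print(resp.choices[0].message.content)
```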
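For item 8, a conceptual sketch of the canary-analysis loop that Flagger automates. This is not Flagger's code or API (Flagger is configured declaratively via a Canary custom resource); it only shows the control flow, with trivial stand-ins for the service-mesh and Prometheus calls.

```python
# Conceptual canary loop: shift traffic in steps, check metrics, roll back on
# failure, promote on success. NOT Flagger's implementation; helpers are stand-ins.
import random
import time

def set_traffic_split(canary: int, primary: int) -> None:
    print(f"routing {canary}% to canary, {primary}% to primary")  # stand-in for mesh API

def metrics_healthy() -> bool:
    return random.random() > 0.05  # stand-in for success-rate/latency queries

def canary_rollout(step: int = 10, max_weight: int = 50, interval_s: int = 1) -> bool:
    weight = 0
    while weight < max_weight:
        weight += step
        set_traffic_split(weight, 100 - weight)
        time.sleep(interval_s)                # let the analysis window elapse
        if not metrics_healthy():             # failed analysis: roll back immediately
            set_traffic_split(0, 100)
            return False
    set_traffic_split(0, 100)                 # promote: new version becomes primary
    return True

print("promoted" if canary_rollout() else "rolled back")
```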
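For item 9, whose real code lives on GitHub, a hypothetical sketch in the same spirit: a FastAPI endpoint that embeds documents with a sentence-embedding model and ranks them by cosine similarity. The model name, corpus, and endpoint are stand-ins, not the article's.

```python
# Hypothetical ML-based search endpoint: embed documents once, embed the query
# at request time, rank by cosine similarity. Model and corpus are stand-ins.
import numpy as np
from fastapi import FastAPI
from sentence_transformers import SentenceTransformer

app = FastAPI()
model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

DOCS = ["vLLM serves LLMs efficiently.", "Flagger automates canary releases."]
doc_vecs = model.encode(DOCS, normalize_embeddings=True)  # unit vectors: dot = cosine

@app.get("/search")
def search(q: str, k: int = 2):
    q_vec = model.encode([q], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec
    top = np.argsort(-scores)[:k]
    return [{"doc": DOCS[i], "score": float(scores[i])} for i in top]
```

Served with e.g. `uvicorn search_app:app --port 8000` and queried as `GET /search?q=...`.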
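For item 10's serving layer, TensorFlow Serving exposes a REST predict endpoint of the form `/v1/models/<name>:predict` on port 8501; the host, model name, and input shape in this client sketch are assumptions about a particular deployment.

```python
# Sketch of querying a TensorFlow Serving model over its REST API. Host, model
# name, and input shape are assumptions about a particular deployment.
import requests

SERVING_URL = "http://<gke-service-ip>:8501/v1/models/my_model:predict"  # placeholder host

payload = {"instances": [[1.0, 2.0, 3.0, 4.0]]}  # shape must match the SavedModel signature
resp = requests.post(SERVING_URL, json=payload, timeout=10)
resp.raise_for_status()
print(resp.json()["predictions"])
```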
