Tags: llm* + deployment*


  1. The article traces the evolution of model inference techniques from 2017 through a projected 2025, highlighting the progression from simple web frameworks like Flask and FastAPI to dedicated inference servers like Triton Inference Server and vLLM. It details the increasing demands on inference infrastructure driven by larger and more complex models, and the resulting need for optimization of throughput, latency, and cost.
  2. This article details the billing structure for GitHub Spark, covering costs associated with app creation (based on premium requests) and current limits for deployed apps. It also outlines future billing plans for deployed apps once limits are reached.
  3. A curated guide to code sandboxing solutions, covering technologies like MicroVMs, application kernels, language runtimes, and containerization. It provides a feature matrix, in-depth platform profiles (e2b, Daytona, microsandbox, WebContainers, Replit, Cloudflare Workers, Fly.io, Kata Containers), and a decision framework for choosing the right sandboxing solution based on security, performance, workload type, and hosting preferences.
  4. Running GenAI models is easy; scaling them to thousands of users, not so much. This guide details avenues for scaling AI workloads from proofs of concept to production-ready deployments, covering API integration, on-prem deployment considerations, hardware requirements, and tools like vLLM and NVIDIA NIM microservices.
  5. K8S-native cluster-wide deployment for vLLM. Provides a reference implementation for building an inference stack on top of vLLM, enabling scaling, monitoring, request routing, and KV cache offloading with easy cloud deployment.
  6. vLLM Production Stack provides a reference implementation on how to build an inference stack on top of vLLM, allowing for scalable, monitored, and performant LLM deployments using Kubernetes and Helm.
  7. This is a hands-on guide with Python example code that walks through the deployment of an ML-based search API using a simple 3-step approach. The article provides a deployment strategy applicable to most machine learning solutions, and the example code is available on GitHub.
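Several of the entries above center on serving models with vLLM, whose server exposes an OpenAI-compatible HTTP API. As a rough illustration of the "API integration" step those guides describe, here is a minimal stdlib-only client sketch; the endpoint URL and model name are assumptions for illustration, not taken from any of the bookmarked articles.

```python
import json
import urllib.request

# Assumed local vLLM endpoint (the server's default port is 8000).
VLLM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 128) -> dict:
    """Build an OpenAI-compatible chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def chat(model: str, prompt: str) -> str:
    """POST the payload to a running vLLM server and return the reply text."""
    body = json.dumps(build_chat_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        VLLM_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Requires a server already running, started with e.g.: vllm serve <model>
    print(chat("meta-llama/Llama-3.1-8B-Instruct", "Say hello."))
```

Because the API is OpenAI-compatible, the same client works unchanged against a single-node vLLM server or a Kubernetes deployment behind a request router, which is what makes stacks like the vLLM Production Stack drop-in for existing clients.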


SemanticScuttle - klotz.me: tagged with "llm+deployment"
