Tags: langsmith*

  1. A comprehensive guide to AI observability and evaluation platforms, covering prompt management, observability, and evaluations. It compares LangSmith, Langfuse, Arize, OpenAI Evals, Google Stax, and PromptLayer, and walks step by step through running the evaluation loop.

    Three Core Capabilities: The best AI observability/eval platforms focus on Prompt Management (versioning, parameterization, A/B testing), Observability (logging requests and traces; capturing data via APIs, SDKs, OpenTelemetry, or proxies), and Evaluations (code-based, LLM-as-judge, and human evaluations; online evals, labeling queues, error analysis). A minimal LLM-as-judge sketch follows this list.
  2. Use callbacks to send output data to PostHog, Sentry, etc. LiteLLM exposes input, success, and failure callback hooks (litellm.input_callback, litellm.success_callback, litellm.failure_callback) that fire based on response status; a minimal usage sketch also follows this list.
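The following Python sketch illustrates the LLM-as-judge evaluation loop described in the first bookmark. The rubric, the 1-5 scoring scheme, the judge model name, and the tiny inline dataset are illustrative assumptions, not any particular platform's API; only litellm.completion is a real library call.

    # LLM-as-judge evaluation loop (illustrative sketch; the rubric, model
    # name, and dataset are assumptions -- only litellm.completion is real).
    import re
    import litellm

    JUDGE_PROMPT = (
        "Rate the answer for factual accuracy on a 1-5 scale.\n"
        "Question: {question}\nAnswer: {answer}\n"
        "Reply with only the integer score."
    )

    def judge(question: str, answer: str) -> int:
        """Score one (question, answer) pair with a judge model."""
        reply = litellm.completion(
            model="gpt-4o-mini",  # assumed judge model; swap in your own
            messages=[{"role": "user", "content": JUDGE_PROMPT.format(
                question=question, answer=answer)}],
        )
        match = re.search(r"[1-5]", reply.choices[0].message.content)
        return int(match.group()) if match else 0  # 0 = unparseable reply

    # Hypothetical labeled examples standing in for a real eval set.
    dataset = [
        {"question": "What is 2 + 2?", "answer": "4"},
        {"question": "What is the capital of France?", "answer": "Paris"},
    ]

    scores = [judge(row["question"], row["answer"]) for row in dataset]
    print(f"mean judge score: {sum(scores) / len(scores):.2f}")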
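And a minimal sketch of the LiteLLM callbacks from the second bookmark. litellm.success_callback and litellm.failure_callback are real litellm attributes that take integration names; running this assumes the corresponding PostHog and Sentry credentials, plus a provider API key such as OPENAI_API_KEY, are set in the environment.

    # Route LiteLLM output data to PostHog/Sentry via callback hooks.
    import litellm

    litellm.success_callback = ["posthog"]  # fires on successful responses
    litellm.failure_callback = ["sentry"]   # fires when a call errors out

    response = litellm.completion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)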
