klotz: llm* + evaluation*


  1. This article explores metrics for evaluating the performance of classification machine-learning models, including precision, recall, F1-score, accuracy, and alert rate. It explains how these metrics are calculated and how they apply in real-world scenarios, particularly fraud detection (a worked sketch of the calculations follows this list).
  2. This guide demonstrates end-to-end workflows for developing and productionizing LLMs at scale, covering data preprocessing, fine-tuning, evaluation, and serving.
  3. Discusses trends in Large Language Model (LLM) architecture, including the push toward more GPUs, more weights, and more tokens; energy-efficient implementations; the role of LLM routers (a router sketch follows this list); and the need for better evaluation metrics, faster fine-tuning, and self-tuning.
  4. Langfuse is an open-source LLM engineering platform offering tracing, prompt management, evaluation, datasets, metrics, and a playground for debugging and improving LLM applications. It is backed by several well-known companies and has won multiple awards. Langfuse is built with security in mind, holding SOC 2 Type II and ISO 27001 certifications as well as GDPR compliance.
  5. Discover how to build custom LLM evaluators for specific real-world needs (a minimal evaluator sketch follows this list).
    2024-04-20 by klotz
  6. Why evaluating LLM apps matters and how to get started
    2023-11-10 by klotz
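
A worked sketch of the metrics from item 1, in Python. The toy fraud-detection labels are invented for illustration, and "alert rate" is assumed here to mean the fraction of all cases the model flags; neither detail is taken from the article itself.

```python
# Minimal sketch: classification metrics from confusion-matrix counts.
# Sample labels are made up; "alert rate" is assumed to be the share
# of cases the model flags as positive.

def classification_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = (tp + tn) / len(y_true)
    alert_rate = (tp + fp) / len(y_true)  # fraction of cases flagged
    return dict(precision=precision, recall=recall, f1=f1,
                accuracy=accuracy, alert_rate=alert_rate)

# Toy fraud-detection example: 1 = fraud, 0 = legitimate.
y_true = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 0, 0, 0, 0, 0]
print(classification_metrics(y_true, y_pred))
```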
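
An LLM router, as mentioned in item 3, dispatches each query to the cheapest model expected to handle it well. Below is a minimal sketch; the routing heuristic and model names are illustrative assumptions, not any particular router's implementation.

```python
# Minimal sketch of an LLM router: send short, simple prompts to a small,
# cheap model and everything else to a large one. The heuristic and the
# model names are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Route:
    name: str
    call: Callable[[str], str]  # the model's completion function

def route(prompt: str, small: Route, large: Route) -> Route:
    # Hypothetical heuristic: prompt length plus a few "hard task" markers.
    hard_markers = ("prove", "derive", "multi-step", "analyze")
    if (len(prompt.split()) < 40
            and not any(m in prompt.lower() for m in hard_markers)):
        return small
    return large

small = Route("small-model", lambda p: f"[small] answer to: {p}")
large = Route("large-model", lambda p: f"[large] answer to: {p}")

chosen = route("What is the capital of France?", small, large)
print(chosen.name, "->", chosen.call("What is the capital of France?"))
```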
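
The custom evaluators of item 5 typically reduce to a scoring function over (input, output) pairs. A minimal rule-based sketch follows; the criteria and names are hypothetical, and in practice the judgment is often delegated to another LLM.

```python
# Minimal sketch of a custom LLM evaluator: score an (input, output) pair
# against application-specific criteria. Criteria and names are hypothetical.

from dataclasses import dataclass

@dataclass
class EvalResult:
    score: float        # 0.0 .. 1.0
    reasons: list[str]  # explanations for any deductions

def evaluate_answer(question: str, answer: str,
                    must_mention: list[str]) -> EvalResult:
    reasons, hits = [], 0
    for term in must_mention:
        if term.lower() in answer.lower():
            hits += 1
        else:
            reasons.append(f"missing required term: {term!r}")
    if len(answer) > 2000:
        reasons.append("answer exceeds length budget")
    score = hits / len(must_mention) if must_mention else 1.0
    return EvalResult(score=score, reasons=reasons)

result = evaluate_answer(
    "How do I reset my password?",
    "Open Settings, choose Security, then click Reset Password.",
    must_mention=["settings", "reset"],
)
print(result.score, result.reasons)
```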
