Langfuse is an open-source LLM engineering platform that offers tracing, prompt management, evaluation, datasets, metrics, and a playground for debugging and improving LLM applications. It is backed by well-known companies and has received multiple awards. Langfuse is built with security in mind, holding SOC 2 Type II and ISO 27001 certifications and maintaining GDPR compliance.
Discover how to build custom LLM evaluators for specific real-world needs
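A custom evaluator is, at its core, a callable that scores an LLM output against task-specific criteria. As a minimal, hypothetical sketch (the criteria here, keyword coverage and a length budget, are illustrative assumptions, not the post's actual method):

```python
# Hypothetical custom evaluator: scores an LLM output by keyword
# coverage and penalizes answers that exceed a length budget.
from dataclasses import dataclass

@dataclass
class EvalResult:
    score: float  # between 0.0 and 1.0
    reason: str   # short human-readable explanation

def keyword_coverage_evaluator(output: str, required_keywords: list[str],
                               max_chars: int = 2000) -> EvalResult:
    """Score an output by how many required keywords it mentions."""
    text = output.lower()
    hits = [kw for kw in required_keywords if kw.lower() in text]
    coverage = len(hits) / len(required_keywords) if required_keywords else 1.0
    if len(output) > max_chars:
        coverage *= 0.5  # halve the score for overly long answers
    return EvalResult(score=coverage,
                      reason=f"matched {len(hits)}/{len(required_keywords)} keywords")

result = keyword_coverage_evaluator(
    "Langfuse offers tracing and prompt management.",
    ["tracing", "prompt management", "evaluation"],
)
print(result.score)  # 2 of 3 keywords matched
```

Real-world evaluators often replace the heuristic scoring step with an LLM-as-judge call, but the interface, output in, score and reason out, stays the same.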
Learn why evaluating classification models matters and how to use the confusion matrix and ROC curves to assess model performance. This post covers the basics of both techniques, their components, how they are calculated, and how to visualize the results using Python.
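Both techniques are a few lines with scikit-learn. A minimal sketch, assuming a synthetic binary-classification dataset and a logistic-regression model (the post's own data and model may differ):

```python
# Compute a confusion matrix and an ROC curve for a binary classifier.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_curve, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = clf.predict(X_test)                 # hard labels for the confusion matrix
y_score = clf.predict_proba(X_test)[:, 1]    # probabilities for the ROC curve

# Confusion matrix: rows are true labels, columns are predicted labels.
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print(f"TN={tn} FP={fp} FN={fn} TP={tp}")

# ROC curve: true positive rate vs. false positive rate at every threshold,
# summarized by the area under the curve (AUC).
fpr, tpr, thresholds = roc_curve(y_test, y_score)
auc = roc_auc_score(y_test, y_score)
print(f"AUC = {auc:.3f}")
```

For plots, `sklearn.metrics.ConfusionMatrixDisplay` and `RocCurveDisplay` render both directly from a fitted estimator.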
A ready-to-run Python and scikit-learn tutorial for evaluating a classification model against a baseline model
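The baseline comparison can be sketched with scikit-learn's `DummyClassifier`, which predicts without looking at the features; a real model should clearly beat it. A minimal sketch on an assumed synthetic, imbalanced dataset:

```python
# Compare a trained model against a naive "most frequent class" baseline.
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Imbalanced data: ~80% of samples belong to class 0.
X, y = make_classification(n_samples=1000, weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: always predict the majority class seen in training.
baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

base_acc = accuracy_score(y_test, baseline.predict(X_test))
model_acc = accuracy_score(y_test, model.predict(X_test))
print(f"baseline accuracy: {base_acc:.3f}")
print(f"model accuracy:    {model_acc:.3f}")
```

On imbalanced data the baseline accuracy is deceptively high (roughly the majority-class share), which is exactly why a model's accuracy is only meaningful relative to it.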
Why evaluating LLM apps matters and how to get started