"Prove AI is a self-hosted solution designed to accelerate GenAI performance monitoring. It allows AI engineers to capture, customize, and monitor GenAI metrics on their own terms, without vendor lock-in. Built on OpenTelemetry, Prove AI connects to existing OpenTelemetry pipelines and surfaces meaningful metrics quickly.
Key features include a unified web-based interface for consolidating performance metrics like token throughput, latency distributions, and service health. It enables faster debugging, improved time-to-metric, and better measurement of GenAI ROI. The platform is open-source, free to deploy, and offers full control over telemetry data."
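To make the metrics concrete, here is a minimal sketch of how token throughput and a latency distribution might be computed from per-request records before being surfaced in a dashboard. This is an illustration only, not Prove AI's actual API; the function names and the nearest-rank percentile choice are assumptions.

```python
import statistics

def summarize_latencies(latency_ms):
    """Reduce a list of per-request latencies (ms) to distribution stats.

    Uses the nearest-rank method for percentiles, which is one common
    convention; real monitoring backends may interpolate instead.
    """
    ordered = sorted(latency_ms)
    def pct(p):
        # Nearest-rank percentile: index of the ceiling-rank sample.
        idx = max(0, int(round(p / 100 * len(ordered))) - 1)
        return ordered[idx]
    return {
        "p50": pct(50),
        "p95": pct(95),
        "mean": statistics.mean(ordered),
    }

def token_throughput(tokens_generated, elapsed_s):
    """Tokens generated per second over an observation window."""
    return tokens_generated / elapsed_s

# Hypothetical per-request latencies (ms) and a generation window.
latencies = [120, 95, 180, 210, 150, 99, 300, 135]
print(summarize_latencies(latencies))
print(token_throughput(tokens_generated=4096, elapsed_s=16.0))  # 256.0
```

In an OpenTelemetry pipeline these values would typically be recorded through histogram and counter instruments rather than computed ad hoc, but the underlying quantities are the same.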
A discussion post on Reddit's LocalLLaMA subreddit covers logging the output of running models and monitoring their performance, specifically for debugging errors and warnings and for performance analysis. The post also mentions the need for flags that write logs to flat files, as well as GPU metrics (GPU utilization, RAM usage, Tensor Core usage, etc.) for troubleshooting and analytics.
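The flat-file logging the post asks for can be sketched as an append-only JSON-lines log of GPU samples. This is a hypothetical illustration: the field names are not a fixed schema, and in a real setup the values would come from NVML (`pynvml`) or `nvidia-smi` queries rather than being passed in by hand.

```python
import json
import time

def log_gpu_sample(path, sample):
    """Append one GPU metrics sample as a JSON line to a flat file.

    `sample` is a dict of metric values supplied by the caller; in
    practice it would be read from NVML or parsed from `nvidia-smi`.
    """
    record = {"ts": time.time(), **sample}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Illustrative values only; a real collector would poll the driver.
rec = log_gpu_sample("gpu_metrics.log", {
    "gpu_util_pct": 87,
    "mem_used_mb": 14336,
    "tensor_core_active_pct": 42,
})
```

One JSON object per line keeps the file greppable and easy to ingest later for the troubleshooting and analytics use cases the post describes.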