Add LLM eval to Vapi
# support
b
In order to know exactly what happens during each execution, we need to add LLM evaluation tools like LangSmith, Langfuse, Opik, etc. Being able to create AI testers is great, but it does not give us the ability to identify each error, run experiments, build a CI/CD pipeline, and make sure issues are actually fixed before deploying for the client.
v
Thanks for your interest in adding LLM evaluation tools to Vapi! I can see you're looking to implement tools like LangSmith, Langfuse, and Opik to create a more robust testing and deployment pipeline.

I found that Vapi already has native integration with Langfuse for enhanced telemetry monitoring. To set this up, you would:

1. Get your Langfuse credentials (Secret Key, Public Key, and Host URL)
2. Add these to the [Provider Credentials page](https://dashboard.vapi.ai/keys) under Observability Providers
3. Your traces should then appear in your Langfuse dashboard

For the other tools you mentioned, could you share more about your specific requirements for LangSmith and Opik integration? Also, what kind of CI/CD pipeline are you envisioning - would you be using GitHub Actions, Jenkins, or another platform?

Sources:
- [Langfuse Integration Documentation](https://docs.vapi.ai/providers/observability/langfuse)
- [Provider Credentials Page](https://dashboard.vapi.ai/keys)
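Before pasting credentials into the dashboard, it can help to sanity-check them locally. The sketch below is a hypothetical helper, not part of Vapi or Langfuse; the key prefixes (`pk-lf-`, `sk-lf-`) and the example host URL are assumptions based on common Langfuse conventions.

```python
import re

def check_langfuse_credentials(public_key: str, secret_key: str, host: str) -> list[str]:
    """Return a list of problems found; an empty list means the values look sane.

    Note: this only checks the *shape* of the credentials, not whether
    they are actually valid against the Langfuse API.
    """
    problems = []
    # Assumed convention: Langfuse public keys start with "pk-lf-".
    if not public_key.startswith("pk-lf-"):
        problems.append("public key should start with 'pk-lf-'")
    # Assumed convention: Langfuse secret keys start with "sk-lf-".
    if not secret_key.startswith("sk-lf-"):
        problems.append("secret key should start with 'sk-lf-'")
    # The host should be an https URL, e.g. https://cloud.langfuse.com.
    if not re.match(r"^https://", host):
        problems.append("host should be an https:// URL")
    return problems
```

For example, `check_langfuse_credentials("pk-lf-abc", "sk-lf-def", "https://cloud.langfuse.com")` returns an empty list, while swapping the two keys would flag both.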
p
Hey Benjamin, you can raise a feature request for it over here: https://roadmap.vapi.ai