Pipeline-error-eleven-labs-500
# support
Hi! I've got multiple calls ending in pipeline-error-eleven-labs-500-server-error (e.g. call ID: 09c1b303-047d-4df6-b11e-9c23bfe2fa4a). It's been happening a lot recently; since we switched to new voices the error rate has roughly tripled. Any insight into where it comes from and whether I can solve it from my config? I'm aware of https://docs.vapi.ai/voice-fallback-plan, but we can't match 11Labs' TTS quality with other providers. Is there any way we could plug Vapi's 11Labs credentials in as a fallback plan? Thanks a lot!
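For reference, a minimal sketch of what the linked voice-fallback-plan looks like when applied to an assistant, assuming Node 18+ (built-in `fetch`), a private key in `VAPI_API_KEY`, and the standard `PATCH /assistant/{id}` endpoint. The assistant and voice IDs are placeholders, and field names should be checked against the current API reference:

```typescript
// Minimal sketch: attach a voice fallback plan to an existing assistant.
// Field names follow docs.vapi.ai/voice-fallback-plan; verify before use.

const VAPI_API_KEY = process.env.VAPI_API_KEY!;
const ASSISTANT_ID = "YOUR_ASSISTANT_ID"; // placeholder

await fetch(`https://api.vapi.ai/assistant/${ASSISTANT_ID}`, {
  method: "PATCH",
  headers: {
    Authorization: `Bearer ${VAPI_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    voice: {
      provider: "11labs",
      voiceId: "YOUR_PRIMARY_VOICE_ID", // placeholder
      fallbackPlan: {
        // Fallback voices are tried in order when the primary fails.
        // Listing a second 11Labs voice first keeps TTS quality close
        // to the primary, with a different provider as a last resort.
        voices: [
          { provider: "11labs", voiceId: "YOUR_BACKUP_VOICE_ID" }, // placeholder
          { provider: "playht", voiceId: "jennifer" },
        ],
      },
    },
  }),
});
```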
@Omogen Hello! I'm sorry to hear you're experiencing issues with the 11Labs voice service. The pipeline-error-eleven-labs-500-server-error typically indicates an internal server error on the 11Labs side, which is not directly related to your configuration. Regarding the increased error rate since using the new voices, there may be issues specific to those voices or increased load on the system.

Here are a few steps you can take:

1. **Check for Updates**: Ensure your integration is up-to-date with the latest version of the API, as updates may contain fixes for such issues.
2. **Review Logs**: Look into the detailed error logs, if available, to see if there's a common pattern or a specific voice causing the issue (a sketch of pulling a call record follows below).
3. **Contact Support**: Reach out to 11Labs support with the call IDs and error details to see if they can provide more insight or if there's an ongoing issue with their service.

As for the fallback plan: VAPI's voice fallback plan is designed to switch to alternative providers when your primary provider is unavailable. Using VAPI's own 11Labs credentials as a fallback would require support from the VAPI platform to integrate 11Labs as a fallback option. If this is not currently supported, you may need to implement a custom solution on your end to handle fallbacks manually.

If you need further assistance or have more questions, please let me know!
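To make the log-review step concrete, here is a hedged sketch of pulling a failing call's record. `GET /call/{id}` is the standard Vapi endpoint; the exact response fields (e.g. `endedReason`) should be confirmed against the API reference:

```typescript
// Sketch, assuming Node 18+ and a Vapi private key in VAPI_API_KEY.
const VAPI_API_KEY = process.env.VAPI_API_KEY!;
const callId = "09c1b303-047d-4df6-b11e-9c23bfe2fa4a"; // call ID from the report above

const res = await fetch(`https://api.vapi.ai/call/${callId}`, {
  headers: { Authorization: `Bearer ${VAPI_API_KEY}` },
});
const call = await res.json();

// For failed calls, endedReason carries the pipeline error string,
// e.g. "pipeline-error-eleven-labs-500-server-error".
console.log(call.endedReason);

// Checking which voice was active can reveal whether one of the
// new voices correlates with the 500s.
console.log(call.assistant?.voice ?? call.assistantId);
```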
Hi, I'd like to bump this one!
@Omogen You have to shift to the 11Labs v2 model; it seems the flash model is likely the cause (see the sketch below).
Also, I will share your feedback with the team.
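A sketch of the suggested change, assuming the assistant's 11Labs voice currently sets a flash model. The model identifiers below are ElevenLabs' published model IDs, and the IDs are placeholders; confirm which models your Vapi configuration accepts:

```typescript
// Sketch: switch the assistant's 11Labs voice off the flash model.
// Assumes Node 18+ and VAPI_API_KEY, as in the earlier snippets.
const ASSISTANT_ID = "YOUR_ASSISTANT_ID"; // placeholder

await fetch(`https://api.vapi.ai/assistant/${ASSISTANT_ID}`, {
  method: "PATCH",
  headers: {
    Authorization: `Bearer ${process.env.VAPI_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    voice: {
      provider: "11labs",
      voiceId: "YOUR_VOICE_ID",        // placeholder
      model: "eleven_multilingual_v2", // was e.g. "eleven_flash_v2_5"
    },
  }),
});
```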
Thanks