slickVik
04/07/2025, 6:33 PM

Vapi Support Bot
04/07/2025, 6:35 PM
I understand you're encountering the `call.in-progress.error-vapifault-custom-llm-llm-failed` error across multiple calls.
While I can't debug those specific call IDs without more information, I can definitely help explain what this error means and provide some general debugging steps that might help resolve the issue.
## What this error means
The error `call.in-progress.error-vapifault-custom-llm-llm-failed` indicates a failure related to your custom LLM integration with Vapi. According to [information from Discord](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1335181348285054996), this error may be related to:
- LLM failures
- Issues with custom LLM usage
- Token limits being exceeded
- Errors from services like 11Labs
## Possible causes
There are several potential causes for this error:
1. **LLM Failures**: Service outages, network issues, or problems with specific requests being made to the LLM.
2. **Custom LLM Configuration Issues**: Your LLM key might not be correctly configured or might have exceeded usage limits set by the LLM provider.
3. **Token Limits**: The error can be triggered if token limits are being exceeded or if there's incorrect token calculation.
4. **Incompatible Response Format**: Your custom LLM response doesn't match the expected OpenAI-compatible format — most often the `choices` property is missing or is not an array (see the first sketch after this list).
5. **Tool Use and Result Formatting**: Tool use and tool result blocks are incorrectly formatted or sequenced (see the second sketch after this list).
6. **Model-Specific Issues**: If you're using the 11Labs `flash_model`, [one Discord thread](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1335277468805758986) suggests changing to the `turbo` model might help.
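To make cause 4 concrete, here's a minimal sketch (as a Python dict, with illustrative values) of the OpenAI-compatible chat completion shape a custom LLM needs to return — note that `choices` must be an array even when there's only one completion:

```python
# Minimal OpenAI-style chat completion response; all values are illustrative.
response = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "created": 1712500000,
    "model": "my-custom-model",
    "choices": [  # must be present and must be an array
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hi! How can I help?"},
            "finish_reason": "stop",
        }
    ],
}
```

And for cause 5, in the OpenAI message format a tool result must come after the assistant message that requested it and reference the same `tool_call_id` (again a sketch; the tool name, arguments, and ids are illustrative):

```python
# Correctly sequenced tool use: the assistant's tool_calls entry and the
# following "tool" message are linked by the same id.
messages = [
    {"role": "user", "content": "Can you transfer me to support?"},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": "call_abc123",
            "type": "function",
            "function": {"name": "transferCall",
                         "arguments": "{\"destination\": \"support\"}"},
        }],
    },
    {
        "role": "tool",
        "tool_call_id": "call_abc123",  # must match the id above
        "content": "{\"status\": \"transferred\"}",
    },
]
```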
## Debugging steps
Here are some steps you can take to debug this issue:
1. **Review configurations and logs**:
- Check your server logs for more detailed error messages (a minimal endpoint-with-logging sketch follows this list)
3. **Check tool definitions**: Ensure the `transferCall` tool is correctly defined and triggered
4. **Consider LLM provider specifics**:
- If using 11labs, try changing from flash_model to turbo
- If using Azure OpenAI, check for interference between Azure OpenAI deployment parameters and Vapi assistant settings
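If it helps with step 1, here's a minimal sketch of a custom LLM endpoint with request and error logging. This is a hedged example, not a drop-in implementation: it assumes Flask, a `/chat/completions` path speaking the OpenAI format (the convention Vapi's custom-llm provider typically calls), and `run_your_model` is a hypothetical placeholder for your real inference call:

```python
# Sketch: log every request and any backend failure so they can be
# correlated with Vapi's call logs when this error appears.
import logging
import time
import uuid

from flask import Flask, jsonify, request

app = Flask(__name__)
logging.basicConfig(level=logging.INFO)
log = logging.getLogger("custom-llm")


def run_your_model(messages):
    # Hypothetical placeholder: call your actual LLM here.
    return "Hello from the custom LLM."


@app.post("/chat/completions")
def chat_completions():
    body = request.get_json(force=True)
    log.info("request: model=%s, %d messages",
             body.get("model"), len(body.get("messages", [])))
    try:
        reply = run_your_model(body.get("messages", []))
    except Exception:
        # This traceback is what to look for in your server logs.
        log.exception("LLM backend failed")
        return jsonify({"error": {"message": "upstream LLM failed"}}), 500
    # "choices" must be an array, or the response is treated as malformed.
    return jsonify({
        "id": f"chatcmpl-{uuid.uuid4().hex}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": body.get("model", "custom"),
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": reply},
            "finish_reason": "stop",
        }],
    })
```

If your endpoint streams (as live calls usually require), each streamed chunk needs the same `choices`-array shape, following the OpenAI `chat.completion.chunk` format.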
Is there any specific part of your custom LLM integration that you'd like me to provide more information about?
Sources:
- [Discord Thread about error meaning](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1335181348285054996)
- [Discord Thread about 11labs model issue](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1335277468805758986)
- [Discord Thread about token limits](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1323288623629471866)
- [Discord Thread about tool formatting](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1331340140064407573)

slickVik
04/07/2025, 6:36 PM

Shubham Bajaj
04/07/2025, 11:13 PM

Shubham Bajaj
04/07/2025, 11:15 PM

slickVik
04/07/2025, 11:21 PM

slickVik
04/07/2025, 11:21 PM

slickVik
04/07/2025, 11:25 PM

slickVik
04/07/2025, 11:25 PM

Sahil
04/08/2025, 11:06 PM

axelmylle
05/08/2025, 7:18 PM

Sahil
05/10/2025, 3:55 AM