Felix WIlliam
05/23/2025, 9:17 AM

Vapi Support Bot
05/23/2025, 9:19 AM
The ended reason `call.in-progress.error-pipeline-no-available-llm-model` means no suitable LLM was available to process your call.
To address this, try specifying a different or fallback model in your assistant configuration. For example:
```json
{
  "model": { "provider": "openai", "model": "gpt-4" },
  "fallbackModels": [
    { "provider": "openai", "model": "gpt-3.5-turbo" }
  ]
}
```
If the issue persists, check your model provider's status or contact support with your call ID for further investigation.
Sources:
- [Call ended reasons (Vapi docs)](https://docs.vapi.ai/calls/call-ended-reason)
- [Assistant hooks and error handling (Vapi docs)](https://docs.vapi.ai/assistants/assistant-hooks)

Praveen
05/23/2025, 12:43 PM

Felix WIlliam
05/24/2025, 8:45 PM

Praveen
05/24/2025, 10:57 PM

conic
07/11/2025, 2:14 AM

Praveen
07/14/2025, 9:29 PM