call.in-progress.error-pipeline-no-available-llm-m...
# support
f
I noticed that there are problems with your latest "ChatGPT 4o (latest) Cluster": "An error occurred in this call: call.in-progress.error-pipeline-no-available-llm-model". You can verify at call ID: 626d286b-c93a-42c5-9e87-90dfbebeefa3
v
The error `call.in-progress.error-pipeline-no-available-llm-model` means no suitable LLM was available to process your call.
To address this, try specifying a different or fallback model in your assistant configuration. For example:
```json
"model": { "provider": "openai", "model": "gpt-4" },
"fallbackModels": [{ "provider": "openai", "model": "gpt-3.5-turbo" }]
```
If the issue persists, check your model provider's status or contact support with your call ID for further investigation.

Sources:
- [Call ended reasons (Vapi docs)](https://docs.vapi.ai/calls/call-ended-reason)
- [Assistant hooks and error handling (Vapi docs)](https://docs.vapi.ai/assistants/assistant-hooks)
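If you manage the assistant via the API rather than the dashboard, the model and fallback settings above could be applied with a request along these lines. This is a minimal sketch, assuming the Vapi REST API's `PATCH /assistant/{id}` endpoint with Bearer authentication; `YOUR_API_KEY` and `ASSISTANT_ID` are placeholders, and the top-level placement of `fallbackModels` simply mirrors the snippet above:

```shell
# Sketch: update an existing assistant's model configuration via the Vapi API.
# YOUR_API_KEY and ASSISTANT_ID are placeholders -- substitute your own values.
curl -X PATCH "https://api.vapi.ai/assistant/ASSISTANT_ID" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": { "provider": "openai", "model": "gpt-4" },
    "fallbackModels": [{ "provider": "openai", "model": "gpt-3.5-turbo" }]
  }'
```

Verify the change took effect by fetching the assistant again and checking the returned model configuration before placing a test call.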
p
It is fixed, can you try again?
f
I recently found out that GPT-4o (latest) doesn't support tool calling. So I'm no longer interested in using it given this limitation. Thank you very much for your help. I didn't initially realize this limitation.
p
sg.
c
I just ran into this error as well. What is the ideal model for basic workflow in vapi?
p
@conic This ticket has been marked as solved, and as a result, you will no longer receive any further responses. Kindly create a new support ticket to escalate your inquiry.