continuous pipeline-error-openai-llm-failed
# support
b
Good day! We've been receiving the following error when the call gets connected to our customers: "An error occurred in this call: pipeline-error-openai-llm-failed." However, the OpenAI API key is active and has enough credits. We've set up the assistant to use the gpt-4o cluster. We also keep getting this warning:

```
Model request started (attempt #1, gpt-4o, openai)
07:40:20:734 [WARN] Model request taking too long (attempt #1, started 864.73ms ago)
Model request started (attempt #1, chatgpt-4o-latest, openai)
Model request failed (attempt #1, reason: (tools is not supported in this model. For a list of supported models, refer to https://platform.openai.com/docs/guides/function-calling#models-supporting-function-calling.))
```

Here are the call IDs. Please check:
- 7ecc7315-7a1c-4f22-bf61-73e34ce3b0d7
- 5c53f2a7-7659-4d7d-84d1-234c17efc937
- 84f9bf43-77cc-4509-b275-609ae9fbe337
- 6c3c3e6f-4a89-4037-ab4b-7243d77faf0a
- 3da65e29-5363-4015-8c96-dff92bcda2f7
- 55946da8-847b-484d-b9d6-529f93554505
- baacaf81-9044-4fb1-bd54-7dad3d3613b4
v
The error `pipeline-error-openai-llm-failed` and the warning about "tools is not supported in this model" suggest that your current model (e.g., `gpt-4o`) does not support function calling/tools.

To resolve this, update your assistant to use a model that supports tools/function calling, such as `gpt-4-1106-preview`. You can find the list of supported models in the [OpenAI documentation](https://platform.openai.com/docs/guides/function-calling#models-supporting-function-calling). Example PATCH request:
```json
{
  "model": {
    "provider": "openai",
    "model": "gpt-4-1106-preview"
  }
}
```
This should prevent the error and allow tool use in your calls.

Sources:
- [Call ended reasons (Vapi documentation)](https://docs.vapi.ai/calls/call-ended-reason)
- [Custom LLM Tool Calling Integration (Vapi documentation)](https://docs.vapi.ai/customization/tool-calling-integration)
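The PATCH body above can also be built and sent programmatically. A minimal sketch in Python — the `api.vapi.ai` assistant endpoint and the `requests.patch` call are assumptions, and the assistant ID and API key are placeholders you would substitute:

```python
import json

def build_model_patch(provider: str, model: str) -> dict:
    """Build the PATCH body that switches an assistant's model."""
    return {"model": {"provider": provider, "model": model}}

body = build_model_patch("openai", "gpt-4-1106-preview")
print(json.dumps(body, indent=2))

# Sending it (assumed endpoint; requires the `requests` package):
# import requests
# requests.patch(
#     "https://api.vapi.ai/assistant/<assistant-id>",   # placeholder ID
#     headers={"Authorization": "Bearer <VAPI_API_KEY>"},  # placeholder key
#     json=body,
# )
```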
b
gpt-4-1106-preview is an old model that doesn't suit our needs. We need gpt-4o, and we are aware that function calling is supported for that model.
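For context on this point: gpt-4o does accept a `tools` array in the standard Chat Completions request format. A minimal payload sketch, with a made-up `get_weather` function purely for illustration (sending it would require a valid OpenAI API key):

```python
import json

# Illustrative Chat Completions request body with a tools array for gpt-4o.
# The "get_weather" function and its schema are invented for this example.
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}
print(json.dumps(payload, indent=2))
```

Note that the warning log shows the retry going out under `chatgpt-4o-latest`, which is a different model ID from `gpt-4o`.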
a
@User I'm having the same issue, using gpt-4o
v
Hey, we’re aware of the issue and it’s the first thing we’ll work on when the team gets online. I’m really sorry for the trouble this is causing you. In the meantime, could you please remove your OpenAI key? That should help resolve the issue for now.
r
+1 we are facing the issue as well
v
Hey @Batyr, the issue has been resolved. Can you try 2-3 test calls and let me know how it goes for you?
b
@Shubham Bajaj Hey. All good now. Thank you!
v
Great to hear your issue is resolved! If you encounter `pipeline-error-openai-llm-failed` again, check your assistant's model configuration and ensure the model supports all required features. For persistent or unexplained errors, contact [Vapi support](https://docs.vapi.ai/support) with your call IDs for direct assistance.

Sources:
- [Call ended reasons (Vapi documentation)](https://docs.vapi.ai/calls/call-ended-reason)
- [Custom LLM Tool Calling Integration (Vapi documentation)](https://docs.vapi.ai/customization/tool-calling-integration)
- [Vapi Support](https://docs.vapi.ai/support)
v
Glad the issue was resolved. Please reach out to us if you have any other questions regarding this issue.