An error occurred in this call: pipeline-error-custom-llm-llm-failed
# support
m
Started getting `An error occurred in this call: pipeline-error-custom-llm-llm-failed` without any changes on our side.

Call IDs:
* 2362bb32-ed33-423e-a7bc-87e787365fe6
* 831f5f91-6121-49ad-956f-e259a25192fe
* c5d3f22e-86eb-44e5-accd-c05540844b8d

Kind of urgent because it's broken completely.

https://cdn.discordapp.com/attachments/1241140272776347668/1241140272986067106/image.png?ex=66491d74&is=6647cbf4&hm=311f63b9f290c0cbc047ce17f3cb0ea65d510fdc09b96686ae69eb827eac8f71&
v
Hi @Maddy, I'm an AI assistant for Vapi. I'll analyze available sources and get back to you shortly.
-----------------------------
**While I generate an answer, here are some sources that might be helpful**:
* [Call Ended Reason]()
* [Get Call]()
* [List Calls]()
* [Create Phone Call]()
-----------------------------
### Answer

The error you're encountering, `pipeline-error-custom-llm-llm-failed`, typically indicates a failure in the custom Large Language Model (LLM) integration within your Vapi setup. This can be caused by configuration issues, connectivity problems with the LLM provider, or changes on the provider's side, even if nothing changed on yours. To resolve this issue, verify the following:

1. **Model Configuration**: Ensure that the model's URL and any required authentication tokens or keys are correctly configured.
2. **Provider Status**: Check whether the LLM provider is experiencing any outages or updates that might affect their service.
3. **Network Issues**: Confirm that no network issues are preventing your Vapi instance from communicating with the LLM provider.

Since this is an urgent and potentially complex issue, reviewing detailed logs and configurations directly would be beneficial. If the problem persists, contacting Vapi's support with the specific error details and the call IDs provided could expedite the resolution. Unfortunately, the documents provided do not contain specific troubleshooting steps for this error [(1)]().
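The verification steps above can be sketched as a quick connectivity probe against the custom LLM endpoint. This is a minimal sketch, assuming the custom LLM exposes an OpenAI-compatible `/chat/completions` route; the endpoint URL, model name, and API key below are placeholders, not values from this thread.

```python
import json
import urllib.request


def build_probe_request(base_url: str, api_key: str) -> urllib.request.Request:
    """Build a minimal chat-completion request to probe a custom LLM endpoint."""
    payload = {
        "model": "gpt-3.5-turbo",  # placeholder model name
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 1,
    }
    return urllib.request.Request(
        url=f"{base_url.rstrip('/')}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Same bearer-token auth scheme OpenAI-compatible servers use.
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )


# Inspect the request before sending it with urllib.request.urlopen(req);
# a connection error or non-200 response here points at steps 2 and 3 above.
req = build_probe_request("https://my-llm.example.com/v1", "sk-placeholder")
print(req.full_url)  # → https://my-llm.example.com/v1/chat/completions
```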
s
```json
{
  "error": {
    "message": "Unknown parameter: 'call'.",
    "type": "invalid_request_error",
    "param": "call",
    "code": "unknown_parameter"
  },
  "code": "unknown_parameter",
  "param": "call",
  "type": "invalid_request_error",
  "message": "400 Unknown parameter: 'call'.",
  "stack": "Error: 400 Unknown parameter: 'call'.\n    at APIError.generate (/app/node_modules/openai/error.js:45:20)\n    at OpenAI.makeStatusError (/app/node_modules/openai/core.js:263:33)\n    at OpenAI.makeRequest (/app/node_modules/openai/core.js:306:30)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)"
}
```
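This 400 arises when a server forwards a request body containing extra top-level fields (such as `call` metadata) straight to OpenAI, which rejects unknown parameters. A minimal sketch, assuming such a pass-through custom-LLM server, is to whitelist the parameters before forwarding; the field list here is an illustrative subset, not the full OpenAI parameter set.

```python
# Illustrative subset of fields the OpenAI chat-completions endpoint accepts.
ALLOWED_PARAMS = {"model", "messages", "temperature", "max_tokens", "stream", "tools"}


def strip_unknown_params(body: dict) -> dict:
    """Drop extra fields (e.g. 'call' metadata) before forwarding to OpenAI."""
    return {k: v for k, v in body.items() if k in ALLOWED_PARAMS}


incoming = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "hi"}],
    # Extra metadata field of the kind that triggers "Unknown parameter: 'call'."
    "call": {"id": "2362bb32-ed33-423e-a7bc-87e787365fe6"},
}
forwarded = strip_unknown_params(incoming)
# 'call' is removed, so the upstream API no longer sees an unknown parameter.
```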
Did you disable `urlRequestMetadataEnabled`? If not, can you remove this extra parameter when sending the data?
m
Hmm, there was no change made, but I'm looking into it now.
Should I pass `urlRequestMetadataEnabled`, because somehow it's not set?
s
Yup, pass `urlRequestMetadataEnabled` as `false`.
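For reference, a hedged sketch of where this flag lives: based on this exchange, the assistant's custom-LLM model config would carry the flag roughly as in the fragment below. The exact field placement and the `url` value are assumptions for illustration, not confirmed documentation.

```json
{
  "model": {
    "provider": "custom-llm",
    "url": "https://my-llm.example.com/v1",
    "urlRequestMetadataEnabled": false
  }
}
```

With the flag set to `false`, the extra `call` metadata parameter is no longer attached to requests sent to the custom LLM URL, which is what resolved the 400 in this thread.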
m
OK, this was working without any issues; not sure how `urlRequestMetadataEnabled` went missing.
s
Can you try it once?
m
Yes, working now! Phew... thanks for the quick reply.
Not sure what happened or how that param went missing, but I'll keep an eye on it when making changes.
s
@Maddy, if you could create a video tutorial related to it, it would be a great help to the community members, and we would really appreciate it.
m
sg ... will post it next week