An error occurred in this call: pipeline-error-openrouter-llm-failed
# support
Harter:
I wanted to use OpenRouter as a provider and set the model I wanted, but I couldn't; it was set to openai/gpt-3.5-turbo by default. I tried to update the model with an API call. The update went through, but I'm getting an error: an error occurred in this call: pipeline-error-openrouter-llm-failed. Call ID: 9835c5bd-699f-4e64-827e-20eded73392b
Vapi Support Bot:
@Harter I'm sorry to hear that you're experiencing issues with updating the model. It seems like there might be a problem with the integration or a temporary issue with the service provider. To assist you better, could you please provide me with more details about the steps you took to update the model and the exact API call you made? This will help me understand the situation better and guide you through the process of resolving the error.
You can continue this conversation by mentioning me in the message.
Harter:
PATCH https://api.vapi.ai/assistant/*ID
Vapi Support Bot:
@Harter Thank you for providing the call ID and the API endpoint you're trying to use. To proceed with updating the model to use `openrouter` as a provider, we need to ensure that the PATCH request is correctly formatted with all the necessary parameters. Could you please confirm that you included the `Authorization` header with your Bearer token and the correct payload specifying `openrouter` as the provider in your PATCH request? If you've already done this and are still encountering the error, it might be an issue on the server side or with the `openrouter` provider integration. If you have the exact payload you used for the PATCH request, please share it so I can assist you further. If not, I can guide you on how to structure your PATCH request correctly.
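For reference, a minimal sketch of what such a PATCH request could look like, assuming Node 18+ `fetch`, a private key in a `VAPI_API_KEY` environment variable, and a hypothetical `ASSISTANT_ID` placeholder; the `model` payload shape and the OpenRouter model ID shown are illustrative assumptions based on this conversation, not confirmed API documentation:

```typescript
// Sketch: update an assistant's model via a PATCH to the Vapi API.
// VAPI_API_KEY and ASSISTANT_ID are assumptions for illustration.
const VAPI_API_KEY = process.env.VAPI_API_KEY!;
const ASSISTANT_ID = "your-assistant-id"; // hypothetical placeholder

async function updateAssistantModel(): Promise<void> {
  const response = await fetch(`https://api.vapi.ai/assistant/${ASSISTANT_ID}`, {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${VAPI_API_KEY}`, // Bearer token the bot asks about
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: {
        provider: "openrouter",
        model: "mistralai/mixtral-8x7b-instruct", // example OpenRouter model ID
      },
    }),
  });

  if (!response.ok) {
    throw new Error(`PATCH failed: ${response.status} ${await response.text()}`);
  }
  console.log("Assistant updated:", await response.json());
}

updateAssistantModel().catch(console.error);
```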
Harter:
The assistant updates successfully, and some models do work, so there is no issue with the PATCH request itself.
Vapi Support Bot:
@Harter Understood. If the assistant updates successfully with some models but not others, and you're encountering a `pipeline-error-openrouter-llm-failed` error, the issue may be specific to the `openrouter` model you're trying to use, or a compatibility issue between the model and the VAPI platform. Since the error is specific to the `openrouter` provider, I recommend the following steps:

1. Double-check the model identifier for `openrouter` to ensure it's correctly specified.
2. Review any provider-specific configuration or limitations that might be causing the error.
3. Check the VAPI documentation or contact support to see if there are any known issues with the `openrouter` provider.

If you've already verified the model identifier and there are no configuration issues, it might be necessary to reach out to VAPI support for further assistance. They can provide more detailed insights into the error and help resolve any compatibility issues. Would you like me to assist you with contacting support, or provide guidance on how to reach out to them?
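One way to carry out step 1 is to read the assistant back and inspect the model configuration that was actually stored. A minimal sketch, assuming the assistant can be fetched with a GET on the same endpoint and that the response includes a `model` block; the exact response shape is an assumption:

```typescript
// Sketch: fetch the assistant and print its stored model configuration,
// to confirm which provider/model pair the PATCH actually persisted.
// Assumes VAPI_API_KEY and ASSISTANT_ID as in the earlier sketch.
const VAPI_API_KEY = process.env.VAPI_API_KEY!;
const ASSISTANT_ID = "your-assistant-id"; // hypothetical placeholder

async function inspectAssistantModel(): Promise<void> {
  const response = await fetch(`https://api.vapi.ai/assistant/${ASSISTANT_ID}`, {
    headers: { Authorization: `Bearer ${VAPI_API_KEY}` },
  });
  if (!response.ok) {
    throw new Error(`GET failed: ${response.status} ${await response.text()}`);
  }
  const assistant = await response.json();
  // Logging the model block shows the provider and model identifier in use.
  console.log("Stored model config:", assistant.model);
}

inspectAssistantModel().catch(console.error);
```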
Harter:
I've already verified the model identifier, and there are no configuration issues, so it's necessary to reach out to VAPI support for further assistance. Please invite a customer support representative or create a ticket.
Vapi Support Bot:
@Harter Sweet. Have a great day!