issue with custom-llm
# support
m
After switching to custom-llm, the assistant stopped working. Error: An error occurred in this call: pipeline-error-custom-llm-llm-failed. Call ID: 52b151e0-b2a1-4169-9805-4ea8a7a35e35. Please help
v
Hi @Maddy, I'm an AI assistant for Vapi. I'll analyze available sources and get back to you shortly.
---
**While I generate an answer, here are some sources that might be helpful**:
* [Call Ended Reason]()
* [Get Call]()
* [List Calls]()
* [Create Phone Call]()
---
### Answer
The error `pipeline-error-custom-llm-llm-failed` you encountered can happen for several reasons when using a custom Large Language Model (LLM) with Vapi. This error typically indicates a failure in connecting to, or receiving a valid response from, the custom LLM. Here are a few steps you could consider to troubleshoot and resolve this issue:

1. **Check LLM Endpoint**: Ensure that the endpoint URL for your custom LLM is correct and accessible. Any issue with the endpoint can lead to this error.
2. **Authentication and API Keys**: Verify that any required API keys or authentication tokens are correctly set up within Vapi for your custom LLM. Incorrect or missing credentials can result in failed requests.
3. **LLM Compatibility**: Confirm that your custom LLM is fully compatible with OpenAI's API specifications, as Vapi supports any OpenAI-compatible endpoint [(1)]() [(2)]().
4. **Server Configuration**: If your custom LLM is hosted on your own server, ensure that the server is correctly configured to respond to requests from Vapi. The server should handle requests according to the OpenAI client specifications and support streaming completions for the best results (see the server sketch after this answer) [(2)]().
5. **Review Custom LLM Setup**: Revisit the setup process for integrating the custom LLM with Vapi, ensuring that all steps were followed correctly. This includes posting the API key via the `/credential` endpoint if authentication is required (see the request sketch after this answer) [(1)]() [(2)]().
6. **Check for Updates**: If your custom LLM or its underlying model has been updated or changed recently, ensure these changes are reflected in your Vapi configuration.

If the issue persists after checking these steps, it would be helpful to review any recent changes made to the assistant or to the custom LLM configuration that could have led to this error. Additionally, given the complexity of custom LLM integration, there could be details or configurations unique to your setup contributing to the issue. For detailed guidance on using custom LLMs with Vapi, including setting up OpenAI-compatible endpoints and configuring your server, please refer to the official documentation on [Custom LLM]().
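As a rough illustration of points 3 and 4, here is a minimal sketch of an OpenAI-compatible, streaming `/chat/completions` endpoint. It assumes FastAPI and returns a canned reply; the route path and payload fields follow the OpenAI chat-completions streaming format, while the file name, port, and model name are placeholders, not anything confirmed in this thread.

```python
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse
import json
import time

app = FastAPI()

@app.post("/chat/completions")
async def chat_completions(request: Request):
    body = await request.json()

    def event_stream():
        # Stream a canned reply token by token as OpenAI-style
        # chat.completion.chunk events over server-sent events (SSE).
        for token in ["Hello", " from", " the", " custom", " LLM."]:
            chunk = {
                "id": "chatcmpl-demo",
                "object": "chat.completion.chunk",
                "created": int(time.time()),
                "model": body.get("model", "custom-model"),  # placeholder model name
                "choices": [
                    {"index": 0, "delta": {"content": token}, "finish_reason": None}
                ],
            }
            yield f"data: {json.dumps(chunk)}\n\n"
        yield "data: [DONE]\n\n"

    return StreamingResponse(event_stream(), media_type="text/event-stream")

# Run with: uvicorn server:app --host 0.0.0.0 --port 8000
# (assuming this file is saved as server.py)
```

Pointing the assistant's custom-llm URL at a server like this should at least rule out compatibility and streaming issues before you wire in your real model.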
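And a hedged sketch of the credential step from point 5, using Python `requests`. The `provider` value and payload field names are assumptions based on the custom-LLM flow described above; check the Vapi API reference for the exact schema and replace the placeholder keys with your own.

```python
import requests

VAPI_API_KEY = "YOUR_VAPI_PRIVATE_KEY"   # placeholder: your Vapi private key
CUSTOM_LLM_API_KEY = "YOUR_LLM_API_KEY"  # placeholder: key your custom LLM expects

# Post the custom LLM's API key to Vapi's /credential endpoint so Vapi can
# authenticate against your endpoint (only needed if your LLM requires auth).
resp = requests.post(
    "https://api.vapi.ai/credential",
    headers={"Authorization": f"Bearer {VAPI_API_KEY}"},
    json={
        "provider": "custom-llm",        # assumed provider name for custom LLMs
        "apiKey": CUSTOM_LLM_API_KEY,
    },
)
print(resp.status_code, resp.json())
```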
m
Is this step still required? Execute a PATCH request to the `/assistant` endpoint and ensure that `model.urlRequestMetadataEnabled` is set to `false`.
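For reference, that step would look roughly like the sketch below in Python `requests`. The thread does not confirm whether it is still required, and the payload shape around `model.urlRequestMetadataEnabled` is an assumption based on the question above; the assistant ID and key are placeholders.

```python
import requests

VAPI_API_KEY = "YOUR_VAPI_PRIVATE_KEY"  # placeholder: your Vapi private key
ASSISTANT_ID = "YOUR_ASSISTANT_ID"      # placeholder: the assistant to update

# PATCH the assistant, setting model.urlRequestMetadataEnabled to false.
resp = requests.patch(
    f"https://api.vapi.ai/assistant/{ASSISTANT_ID}",
    headers={"Authorization": f"Bearer {VAPI_API_KEY}"},
    json={"model": {"urlRequestMetadataEnabled": False}},  # assumed payload shape
)
print(resp.status_code, resp.json())
```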
v
Glad I could be helpful. Feel free to create a new thread with any new questions you may have.