Maddy
05/02/2024, 4:02 PM

Vapi Support
05/02/2024, 4:02 PM
The pipeline-error-custom-llm-llm-failed error you encountered can happen for several reasons when using a custom Large Language Model (LLM) with Vapi. This error typically indicates a failure in connecting to, or receiving a valid response from, the custom LLM. Here are a few steps to troubleshoot and resolve the issue:
1. **Check LLM Endpoint**: Ensure that the endpoint URL for your custom LLM is correct and accessible. Any issue reaching the endpoint can cause this error; a direct test request (see the first sketch after this list) is a quick way to confirm the endpoint responds.
2. **Authentication and API Keys**: Verify that any required API keys or authentication tokens are correctly set up within Vapi for your custom LLM. Incorrect or missing credentials can result in failed requests.
3. **LLM Compatibility**: Confirm that your custom LLM is fully compatible with OpenAI's API specifications, as Vapi supports any OpenAI-compatible endpoint [(1)] [(2)].
4. **Server Configuration**: If your custom LLM is hosted on your own server, make sure the server is configured to respond to requests from Vapi. The server should handle requests according to the OpenAI client specifications and support streaming completions for best results (see the server sketch after this list) [(2)].
5. **Review Custom LLM Setup**: Revisit the setup process for integrating the custom LLM with Vapi and make sure every step was followed correctly, including posting the API key via the /credential endpoint if authentication is required (see the credential example after this list) [(1)] [(2)].
6. **Check for Updates**: If your custom LLM or its underlying model has been updated recently, confirm that the update has not changed the request or response format your integration relies on.
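
For steps 1 and 3, here is a minimal sketch of a direct chat-completions request to the custom endpoint, assuming it is OpenAI-compatible. The base URL, model name, and API key are placeholders; substitute your own values:

```python
# Sanity-check a custom LLM endpoint outside of Vapi with an OpenAI-style request.
import requests

BASE_URL = "https://your-llm.example.com/v1"   # placeholder endpoint
API_KEY = "YOUR_API_KEY"                        # placeholder credential

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "your-model",                  # placeholder model name
        "messages": [{"role": "user", "content": "Say hello"}],
        "stream": False,
    },
    timeout=30,
)
resp.raise_for_status()                         # surfaces 4xx/5xx problems immediately
print(resp.json()["choices"][0]["message"]["content"])
```

If this request fails, the problem is with the endpoint or credentials rather than with Vapi's configuration.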
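For step 4, this is a rough sketch of what an OpenAI-compatible /chat/completions route with streaming support can look like. FastAPI and the canned reply are used here purely for illustration; your server stack and model call will differ:

```python
# Minimal OpenAI-compatible chat completions route with optional SSE streaming.
import json
import time

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse, StreamingResponse

app = FastAPI()


@app.post("/v1/chat/completions")
async def chat_completions(request: Request):
    body = await request.json()
    reply = "Hello from the custom LLM."        # stand-in for real model output

    if not body.get("stream"):
        # Non-streaming: return a single OpenAI-style completion object.
        return JSONResponse({
            "id": "chatcmpl-demo",
            "object": "chat.completion",
            "created": int(time.time()),
            "model": body.get("model", "custom-model"),
            "choices": [{
                "index": 0,
                "message": {"role": "assistant", "content": reply},
                "finish_reason": "stop",
            }],
        })

    def event_stream():
        # Streaming: emit OpenAI-style chunk objects as server-sent events.
        for token in reply.split(" "):
            chunk = {
                "id": "chatcmpl-demo",
                "object": "chat.completion.chunk",
                "created": int(time.time()),
                "model": body.get("model", "custom-model"),
                "choices": [{"index": 0,
                             "delta": {"content": token + " "},
                             "finish_reason": None}],
            }
            yield f"data: {json.dumps(chunk)}\n\n"
        yield "data: [DONE]\n\n"

    return StreamingResponse(event_stream(), media_type="text/event-stream")
```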
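For step 5, here is a sketch of posting the custom LLM's API key to the /credential endpoint. The exact body fields shown ("provider", "apiKey") are assumptions, so double-check them against Vapi's current API reference:

```python
# Register the custom LLM's API key with Vapi via the /credential endpoint.
import requests

VAPI_API_KEY = "YOUR_VAPI_PRIVATE_KEY"          # placeholder Vapi key
CUSTOM_LLM_API_KEY = "YOUR_CUSTOM_LLM_KEY"      # placeholder custom LLM key

resp = requests.post(
    "https://api.vapi.ai/credential",
    headers={"Authorization": f"Bearer {VAPI_API_KEY}"},
    json={
        "provider": "custom-llm",               # assumed provider identifier
        "apiKey": CUSTOM_LLM_API_KEY,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```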
Maddy
05/02/2024, 4:13 PM

Vapi Support
05/02/2024, 4:20 PM