@hkmalladi
Hello! To use a model deployed on your Azure OpenAI account with VAPI, you'll need to ensure that you've set up everything correctly. Here's a step-by-step guide to help you:
1. **Set Credentials in Provider Keys**: Make sure you've entered the correct credentials in the Provider Keys section of VAPI, including your Azure OpenAI API key.
2. **Custom LLM Configuration**: When configuring a Custom LLM (Large Language Model), you need to provide the endpoint URL. Make sure it's the endpoint Azure issued for your deployed model.
3. **Endpoint Compatibility**: Your endpoint should be compatible with the OpenAI client format and support streaming completions for the best results.
4. **Model Configuration**: If you're updating the model configuration and running into issues, you may need to send a PATCH request to the `/assistant` endpoint with the correct parameters.
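Step 4 can be sketched as a small script. This is a minimal, hedged example: the assistant ID, API key, and the `model` payload fields (`provider`, `model`) are assumptions based on common API patterns, so check VAPI's API reference for the exact schema before using it.

```python
import json
import urllib.request

# Hypothetical placeholders -- substitute your own VAPI API key and assistant ID.
VAPI_API_KEY = "your-vapi-api-key"
ASSISTANT_ID = "your-assistant-id"

# Assumed payload shape pointing the assistant at an Azure OpenAI deployment;
# field names may differ from VAPI's actual schema.
payload = {
    "model": {
        "provider": "azure-openai",
        "model": "gpt-4o",
    }
}

# Build (but do not send) a PATCH request to the /assistant endpoint.
request = urllib.request.Request(
    url=f"https://api.vapi.ai/assistant/{ASSISTANT_ID}",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {VAPI_API_KEY}",
        "Content-Type": "application/json",
    },
    method="PATCH",
)

# To actually send it: urllib.request.urlopen(request)
```

The request is constructed but not sent here, so you can inspect the URL and payload before running it against your account.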
If you've tried these steps and are still facing issues, the cause may be a specific configuration requirement or an error that needs further investigation; in that case, please reach out to support for more detailed assistance. If you'd like to update the model configuration as described above, I can help with that. Would you like to proceed?