Invalid assistant when using fine-tuned OpenAI model
# support
nisalr
Hey, I'm trying to use a fine-tuned GPT-3.5 model with Vapi. I've passed the fine-tuned model ID to the assistant and added my OpenAI API key to my Vapi account, but I'm getting an "invalid assistant" error. I tried setting assistant.model.urlRequestMetadataEnabled=false, but that didn't work either. Any ideas why? Here's a call ID in case you want to check the logs: e2b4d49b-2a47-4962-b5e3-2a661d89e57e
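For context, a fine-tuned OpenAI model is referenced in a Vapi assistant through the model block. The sketch below shows roughly what that looks like; the `ft:` prefix is OpenAI's convention for fine-tuned model IDs, and the specific ID shown is a placeholder, not a real model:

```typescript
// Minimal sketch of an assistant body pointing at a fine-tuned model.
// The "ft:..." ID is a placeholder -- fine-tuned OpenAI models are
// referenced by the full ID returned from the fine-tuning job.
export const assistant = {
  model: {
    provider: "openai",
    model: "ft:gpt-3.5-turbo-0125:my-org::abc123", // placeholder ID
    messages: [
      { role: "system", content: "You are a helpful assistant." },
    ],
  },
};
```

One thing worth checking with this shape: the OpenAI key attached to the Vapi account must belong to the same organization that owns the fine-tuned model, or the model ID will not resolve.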
Vapi
Hi @nisalr, I'm an AI assistant for Vapi. While a team member reviews this question, I'll look through relevant sources and see if I can provide suggestions.
**While I generate an answer, here are some sources that might be helpful:**
* Get Assistant
* Delete Assistant
* Set up your Server URL
* Call Ended Reason
### Answer
Sorry, I wasn't able to find any information about your question. For further assistance, please contact support at support@vapi.ai.
Got it. Tagging @User for additional help.
Sahil
If possible, could you share the exact HTTP response message you're getting when you try to use it?
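A minimal sketch of one way to pull those details, using Vapi's GET /call/:id endpoint with the call ID from above; the VAPI_API_KEY environment variable name is just a placeholder for wherever the key is stored:

```typescript
// Sketch: fetch the call record to inspect its endedReason and any
// error details Vapi attached to the failed call.
const callId = "e2b4d49b-2a47-4962-b5e3-2a661d89e57e";

const res = await fetch(`https://api.vapi.ai/call/${callId}`, {
  headers: { Authorization: `Bearer ${process.env.VAPI_API_KEY}` },
});
const call = await res.json();
console.log(call.endedReason);
```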
nisalr
Hey @Sahil, thanks for the quick response. That's a bit weird, because it worked with out-of-the-box OpenAI models (without fine-tuning). Basically, in our setup we don't define assistants in Vapi; we let Vapi fetch the assistant from our backend. I can see from our logs that Vapi does in fact fetch the assistant from our backend, and we return it without any errors on our end. But I only hear an "invalid assistant" error over the phone.
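That server-URL flow, where Vapi POSTs an assistant-request message to the backend and expects an { assistant } object in the reply, looks roughly like the sketch below. The Express route path and the placeholder model ID are assumptions; the point is that if anything in the returned assistant fails validation (for example, a fine-tuned model ID the attached OpenAI key can't access), the caller can hear an "invalid assistant" error even though the backend responded without errors:

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Sketch of the server-URL flow described above: on an inbound call,
// Vapi POSTs a message of type "assistant-request" and expects an
// { assistant } object back. Treat the field names as illustrative.
app.post("/vapi/webhook", (req, res) => {
  const { message } = req.body;

  if (message?.type === "assistant-request") {
    return res.json({
      assistant: {
        model: {
          provider: "openai",
          model: "ft:gpt-3.5-turbo-0125:my-org::abc123", // placeholder ID
          messages: [
            { role: "system", content: "You are a helpful assistant." },
          ],
        },
        // If this body is malformed, or the model ID can't be resolved
        // with the account's OpenAI key, the call fails with
        // "invalid assistant" despite a 200 response from this server.
      },
    });
  }

  res.status(200).end();
});

app.listen(3000);
```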
Sahil
It contains some sensitive information, so please remove it from this thread. Apart from that, we can jump on a call and debug it together.
nisalr
Sure. Thanks
Sahil
Sent you a DM!
Vapi
Hey, @nisalr did you find a fix for this?