Is the GPT model being swapped during the call?
# support
h
I use the OpenAI GPT-4o model for the assistant. With no changes to the settings or the prompt, sometimes it works flawlessly, and sometimes it performs poorly. I’ve been blaming the prompt and trying to adjust it, but now I’m starting to think something might be changing on VAPI’s side. Is there any setting I can use to ensure the GPT model doesn’t switch during the call?
m
Check the call logs; they should say whether the model was switched. I had DeepSeek configured, and when my request went above the max payload they switched to a fallback model. So it's possible...
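If you want to check this across many calls rather than by eye, a small script can compare the model you configured against the model recorded in each log entry. This is a minimal sketch: the field names (`model`, `id`) and the log shape are assumptions for illustration, not Vapi's actual log schema — adjust them to match what your call logs actually contain.

```python
# Hypothetical sketch: detect whether a call log shows a different model
# than the one configured. Field names here are assumed, not Vapi's schema.

def model_was_swapped(call_log: dict, expected_model: str) -> bool:
    """Return True if the log records a model other than the configured one."""
    used = call_log.get("model")  # assumed field name for the model actually used
    return used is not None and used != expected_model

# Example log entries (made up for illustration)
logs = [
    {"id": "call_123", "model": "gpt-4o"},
    {"id": "call_456", "model": "gpt-4o-mini"},  # looks like a fallback
]

for log in logs:
    if model_was_swapped(log, "gpt-4o"):
        print(f"{log['id']}: model swapped to {log['model']}")
```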
k
Hey Harter,

To help track down this issue, could you share:

- The call ID
- When exactly this happened (the timestamp)
- What response you expected to get
- What response you actually got instead

This would really help us figure out what went wrong!