OpenAI Model Switching
# support
g
22:16:24:292 [LOG] Model request started (attempt #1, chatgpt-4o-latest, openai)
22:16:24:428 [WARN] Model request failed (attempt #1, reason: (tools is not supported in this model. For a list of supported models, refer to https://platform.openai.com/docs/guides/function-calling#models-supporting-function-calling.))
22:16:24:428 [LOG] Model request started (attempt #2, gpt-4o-mini-2024-07-18, openai)
I have my model set to 4o, but for whatever reason Vapi is deciding to randomly switch to other versions during the call, while still charging 4o prices, by the way. Is there any reason for this? Thanks
v
Vapi switches models during a call if the initial model (like "4o") fails to support a required feature (such as tools/function calling) or responds too slowly. In your log, the switch happened because "tools is not supported in this model," so Vapi tried another model that might support the needed feature. This is part of Vapi's standard fallback logic to maintain call continuity and low latency.

Sources:
- [Vapi model switching trigger conditions](https://discord.com/channels/1211482211119796234/1340268927942397993)
- [Vapi OpenAI Model Switching implementation details](https://discord.com/channels/1211482211119796234/1323288623629471866)
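To make that fallback behavior more concrete, here is a minimal TypeScript sketch of the general pattern the log reflects, not Vapi's actual implementation: try the preferred model first and, if the provider rejects the request because a required feature (like tools/function calling) is unsupported or the request times out, retry the same request against the next candidate model. The `ModelChoice`, `ModelRequestError`, and `requestWithFallback` names are hypothetical.

```typescript
// Sketch only: hypothetical names, not Vapi's real code or API.
interface ModelChoice {
  provider: string;
  model: string;
}

class ModelRequestError extends Error {
  constructor(
    message: string,
    public readonly reason: "unsupported-feature" | "timeout" | "other",
  ) {
    super(message);
  }
}

type ModelCall = (choice: ModelChoice) => Promise<string>;

// Try the preferred model first; on an unsupported-feature or timeout error,
// fall through to the next candidate, mirroring the attempt #1 / attempt #2
// lines in the log above.
async function requestWithFallback(
  candidates: ModelChoice[],
  call: ModelCall,
): Promise<string> {
  let lastError: unknown;
  for (const [attempt, choice] of candidates.entries()) {
    try {
      console.log(
        `Model request started (attempt #${attempt + 1}, ${choice.model}, ${choice.provider})`,
      );
      return await call(choice);
    } catch (err) {
      lastError = err;
      if (err instanceof ModelRequestError && err.reason !== "other") {
        console.warn(
          `Model request failed (attempt #${attempt + 1}, reason: ${err.message})`,
        );
        continue; // switch to the next candidate model so the call keeps going
      }
      throw err; // unrelated errors are not retried against other models
    }
  }
  throw lastError;
}

// Example usage with candidates matching the models in the log above:
// requestWithFallback(
//   [
//     { provider: "openai", model: "chatgpt-4o-latest" },
//     { provider: "openai", model: "gpt-4o-mini-2024-07-18" },
//   ],
//   async (choice) => "...response from the provider...",
// );
```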
k
Hey, could you please schedule a brief call at your convenience so we can discuss this matter in detail? Kindly use the following link to book a suitable time: .
g
with all due respect, your next available slot is next month...
k
Hey! To help track down this issue, could you share:
- The call ID