mukul
04/09/2025, 2:38 PM

Vapi Support Bot
04/09/2025, 2:40 PM
Use the triggerResponseEnabled flag - This flag [controls whether a system message triggers an LLM response](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1338730919904673853), which might help you manage the response behavior (see the sketch after the list below).
4. Ensure correct response format - Make sure your custom LLM returns responses in the expected format. Vapi expects each streamed chunk to be in a specific delta format like the snippet below (a minimal server sketch follows it):
```ts
{
  choices: [
    {
      delta: {
        content?: string,
        tool_calls?: Array<...>,
      },
      finish_reason?: string
    }
  ]
}
```
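For illustration, here is a minimal TypeScript/Node sketch of a custom LLM endpoint that streams chunks in that shape as server-sent events. The route path, port, transport details, and reply text are assumptions for the sketch, not Vapi requirements; check the custom LLM docs for the exact contract.

```ts
import { createServer } from "node:http";

// Shape of each streamed chunk, mirroring the delta format above.
interface ChunkChoice {
  delta: { content?: string; tool_calls?: Array<Record<string, unknown>> };
  finish_reason?: string;
}
interface Chunk {
  choices: ChunkChoice[];
}

const server = createServer((req, res) => {
  // Only handle the (assumed) chat-completions route.
  if (req.method !== "POST" || req.url !== "/chat/completions") {
    res.writeHead(404).end();
    return;
  }

  // Stream server-sent events: one JSON chunk per `data:` line.
  res.writeHead(200, {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    Connection: "keep-alive",
  });
  const send = (chunk: Chunk) => res.write(`data: ${JSON.stringify(chunk)}\n\n`);

  // Emit the reply token by token inside `delta.content`.
  for (const token of ["Hello", " from", " a", " custom", " LLM."]) {
    send({ choices: [{ delta: { content: token } }] });
  }

  // Final chunk carries finish_reason; then terminate the stream.
  send({ choices: [{ delta: {}, finish_reason: "stop" }] });
  res.write("data: [DONE]\n\n");
  res.end();
});

server.listen(3000);
```

The key point is that text arrives incrementally inside `delta.content` and the last chunk sets `finish_reason`.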
Would you like me to provide more details on any of these approaches or help troubleshoot further?
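On the triggerResponseEnabled point above, here is a hedged sketch of how the flag might be sent when injecting a system message into a live call. Everything except the flag name is an assumption (the control URL and the add-message payload shape are not confirmed in this thread); verify against the linked thread and Vapi's live call control docs before relying on it.

```ts
// Hedged sketch: inject a system message into a live call and use
// triggerResponseEnabled to decide whether the LLM should respond to it.
// Assumed: controlUrl comes from the call's monitor data, and the payload is an
// "add-message" control message; verify the exact shape against Vapi's docs.
async function addSystemMessage(
  controlUrl: string,
  content: string,
  triggerResponseEnabled: boolean,
): Promise<void> {
  const res = await fetch(controlUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      type: "add-message",                  // assumed control message type
      message: { role: "system", content }, // the system message to inject
      triggerResponseEnabled,               // false: inject silently; true: let the LLM reply
    }),
  });
  if (!res.ok) throw new Error(`Control request failed: ${res.status}`);
}
```

Per the linked thread, setting the flag to false would add the system message without prompting a reply, while true should trigger an immediate LLM response.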
Sources:

Vapi Support Bot
04/09/2025, 2:40 PM

mukul
04/10/2025, 5:21 AM

DomnulZeu
04/10/2025, 2:05 PM

DomnulZeu
04/10/2025, 2:05 PM

Kyle Brunker
04/11/2025, 7:29 AM

Kyle Brunker
04/11/2025, 7:29 AM

mukul
04/11/2025, 9:08 AM

Kyle Brunker
04/11/2025, 9:45 PM