# support
p
According to your documentation, a non-streaming OpenAI-compatible response should work for TTS with a Custom LLM, even if Vapi sends stream: true in its request to my webhook. My webhook is providing such a response (see attached request from Vapi and response from my server), but TTS for this response is not occurring, although the initial 'First Message' TTS works. Can you please advise why this might be happening or if there's a known issue or a more specific requirement for the non-streaming response to trigger TTS in this scenario? I'm new to Vapi and would love to understand best practices around streaming or not, which is best for users/cost, etc. Thank you guys! https://cdn.discordapp.com/attachments/1375297555461701684/1375297556015484938/message.txt?ex=68312d29&is=682fdba9&hm=b985a0730259b8aa918cd46744f9412d6950d72d8ec7cc45456a78e0d287a351&
@jordan @Shubham Bajaj
k
Hey, we only support streaming across all of our service providers (LLM, TTS, and STT alike). In your case, the information you're working from is incorrect: Custom LLM only supports streaming OpenAI-compatible responses. If you send a response in the wrong stream mode, or one that isn't OpenAI-compatible, it will eventually fail. Let me know if you require further help or have more questions.
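For reference, a streaming OpenAI-compatible response means Server-Sent Events carrying `chat.completion.chunk` objects, ending with a `[DONE]` sentinel. Here's a minimal sketch of what a webhook would emit (the `id` and `model` values are illustrative, not anything Vapi requires):

```python
import json

def sse_chunks(text, model="gpt-4o"):
    """Yield OpenAI-style streaming chunks as SSE lines, one word per chunk.
    Field values like the id are placeholders for illustration."""
    words = text.split(" ")
    base = {"id": "chatcmpl-demo", "object": "chat.completion.chunk", "model": model}
    for i, word in enumerate(words):
        delta = {"content": word if i == 0 else " " + word}
        chunk = {**base, "choices": [{"index": 0, "delta": delta, "finish_reason": None}]}
        yield f"data: {json.dumps(chunk)}\n\n"
    # Final chunk carries finish_reason, then the stream terminator.
    final = {**base, "choices": [{"index": 0, "delta": {}, "finish_reason": "stop"}]}
    yield f"data: {json.dumps(final)}\n\n"
    yield "data: [DONE]\n\n"

lines = list(sse_chunks("Hello from my custom LLM"))
```

A plain non-streaming JSON body, even a valid `chat.completion` object, would not match this wire format, which is consistent with the symptom above: the assistant's TTS never fires for the webhook's reply.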
p
hey @Shubham Bajaj what do you mean we have the wrong information? would you help me when you have a moment
k
By wrong information, I meant you are mistaken on the point that streaming is not required. Streaming is necessary for Custom LLM, which you can verify in the documentation below. If you don't want to stream your responses incrementally, you still have to send them in the streaming format, just in one go; there is no alternative. The documentation covers in depth how Custom LLM integration works with us. Let me know if you have more questions about this.
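"Sending it in one go" can be read as: generate the full reply first, then wrap it in the streaming wire format as a single content chunk followed by the stop chunk and `[DONE]`. A hedged sketch (the `id`/`model` values are illustrative):

```python
import json

def one_shot_stream(full_text, model="gpt-4o"):
    """Wrap an already-complete reply in SSE streaming framing:
    one content chunk, one stop chunk, then the [DONE] sentinel."""
    base = {"id": "chatcmpl-demo", "object": "chat.completion.chunk", "model": model}
    content = {**base, "choices": [{"index": 0, "delta": {"content": full_text},
                                    "finish_reason": None}]}
    stop = {**base, "choices": [{"index": 0, "delta": {}, "finish_reason": "stop"}]}
    yield f"data: {json.dumps(content)}\n\n"
    yield f"data: {json.dumps(stop)}\n\n"
    yield "data: [DONE]\n\n"

out = list(one_shot_stream("The full answer, generated in one go."))
```

Note the trade-off: true incremental streaming lets TTS begin speaking as soon as the first tokens arrive, while the one-shot wrapper adds the full generation time to perceived latency, so incremental streaming is usually better for the caller experience.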