Voice generation not working for custom LLM option
# support
Hi, when creating a pipeline with a custom LLM, voice generation doesn't work. To confirm this, I ran the server-side Vercel example and created an example assistant from `custom_llm.json` - same result. When I start a conversation with the new assistant from the dashboard, no response is generated.
You need to send the response back as SSE (server-sent events).
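For reference, streamed responses follow the OpenAI chat-completions SSE framing: each event is a `data:` line carrying one JSON chunk, and the stream ends with a `[DONE]` sentinel. A minimal sketch of formatting one chunk (field names follow the public `chat.completion.chunk` schema; the helper name is made up for illustration):

```python
import json

def sse_chunk(text: str) -> str:
    # Wrap one text delta in an OpenAI-style chat.completion.chunk
    # object, framed as a server-sent event ("data: ...\n\n").
    payload = {
        "object": "chat.completion.chunk",
        "choices": [{"index": 0, "delta": {"content": text}}],
    }
    return f"data: {json.dumps(payload)}\n\n"

# The stream must be terminated with the [DONE] sentinel event.
DONE_EVENT = "data: [DONE]\n\n"
```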
You can create a script that uses the OpenAI SDK to call your custom LLM with stream mode enabled.
If that works, it'll work with us
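A minimal sketch of such a check, assuming your endpoint is OpenAI-compatible (the base URL and model name below are placeholders for your own deployment):

```python
def stream_test(base_url: str, model: str, api_key: str = "unused") -> str:
    # Lazy import so the script fails with a clear error if the
    # `openai` package (v1.x) is not installed.
    from openai import OpenAI

    # Point the OpenAI SDK at the custom LLM endpoint and request a
    # streamed completion; collect the text deltas as they arrive.
    client = OpenAI(base_url=base_url, api_key=api_key)
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Say hello."}],
        stream=True,  # the server must answer with SSE chunks
    )
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            parts.append(delta)
    return "".join(parts)

if __name__ == "__main__":
    # Placeholder URL and model name - replace with your setup.
    print(stream_test("http://localhost:8000/v1", "my-custom-llm"))
```

If this script prints the streamed reply, your endpoint is emitting SSE correctly.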